Foreign language visemes for use in lip-synching with computer-generated audio
John Whipple, Ruth Agada, Jie Yan

Abstract
There are several state-of-the-art approaches to animating human motion, most of which involve placing markers on a human subject and using a tracker that estimates movement from the position and orientation of those markers. In this paper, we discuss methods for extracting human lip movement from video and mapping it to the corresponding visemes of a foreign language to produce smooth animation of a 3D model. We discuss the use of Active Shape Models for obtaining lip movements, the use of established grapheme-to-phoneme methods and their commonality with English phonemes, and how these are transferred onto a 3D human model.
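To illustrate the phoneme-to-viseme step mentioned above, the following is a minimal sketch of mapping a phoneme sequence (e.g., the output of a grapheme-to-phoneme stage) onto shared mouth-shape classes. The phoneme symbols, viseme labels, and table entries are illustrative assumptions, not the paper's actual mapping.

```python
# Minimal sketch (assumed mapping, not the paper's): group phoneme symbols
# into shared mouth-shape classes (visemes), so foreign phonemes with an
# English-like articulation reuse the same viseme for the 3D model.

# Hypothetical viseme classes keyed by ARPAbet-style phoneme symbols.
PHONEME_TO_VISEME = {
    "P": "bilabial_closed", "B": "bilabial_closed", "M": "bilabial_closed",
    "F": "labiodental",     "V": "labiodental",
    "AA": "open_vowel",     "AE": "open_vowel",
    "UW": "rounded_vowel",  "OW": "rounded_vowel",
}

def phonemes_to_visemes(phonemes, default="neutral"):
    """Map a phoneme sequence to the viseme sequence that would drive
    the 3D model's mouth shapes; unknown phonemes fall back to a neutral pose."""
    return [PHONEME_TO_VISEME.get(p, default) for p in phonemes]

if __name__ == "__main__":
    # "mama" -> M AA M AA (illustrative transcription)
    print(phonemes_to_visemes(["M", "AA", "M", "AA"]))
    # ['bilabial_closed', 'open_vowel', 'bilabial_closed', 'open_vowel']
```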

Full Text: PDF     DOI: 10.15640/jcsit.v5n2a1