Published on Jan 5, 2012
Automated lip synchronization and gesturing. The audio track is analyzed to determine the utterance, which is transformed into phonemes and visemes; head movement and gesturing are then added automatically based on the syntax and semantics of the utterance. Note that the only input is the audio track: the motion was generated automatically. Learn more about SmartBody at http://projects.ict.usc.edu/smartbody/. Learn more about the USC Institute for Creative Technologies at http://ict.usc.edu/.
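The phoneme-to-viseme step described above can be sketched as a simple lookup. This is a minimal illustration, not SmartBody's implementation: the ARPAbet-style phoneme labels and the viseme class names below are assumptions chosen for the example.

```python
# Hypothetical sketch of the phoneme-to-viseme mapping stage.
# The phoneme set and viseme classes here are illustrative assumptions;
# SmartBody's actual tables and APIs are not reproduced.
PHONEME_TO_VISEME = {
    # bilabials -> closed-lips viseme
    "P": "BMP", "B": "BMP", "M": "BMP",
    # labiodentals -> lip-on-teeth viseme
    "F": "FV", "V": "FV",
    # open vowels
    "AA": "open", "AE": "open", "AH": "open",
    # rounded vowels
    "OW": "round", "UW": "round",
}

def phonemes_to_visemes(phonemes):
    """Map a phoneme sequence to viseme labels, merging consecutive repeats."""
    visemes = []
    for p in phonemes:
        v = PHONEME_TO_VISEME.get(p, "rest")  # unknown phonemes -> neutral pose
        if not visemes or visemes[-1] != v:
            visemes.append(v)
    return visemes

print(phonemes_to_visemes(["M", "AA", "P"]))  # -> ['BMP', 'open', 'BMP']
```

In a real pipeline each viseme would also carry timing from the audio alignment, so the face animation can be keyframed against the soundtrack.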