Published on Feb 4, 2014
We have developed the capability to create a digital model of a person using commodity hardware (a Microsoft Kinect camera), place that 3D avatar into a video game or simulation, and infuse it with numerous behaviors and capabilities, all in just a few minutes. With over 20 million Microsoft Kinect cameras currently deployed around the world, this technology enables millions of people to create a personalized avatar at no cost in a short amount of time. By changing the economics of 3D avatar creation and simulation, new socially based virtual interactions can be developed that rely on recognizable representations of individual people. Avatars can be captured and recaptured on a daily basis, reflecting what their owners wear, their hairstyle, and any other changes in their appearance.
The system uses capture technology from USC that registers four static poses together and constructs a 3D representation of the subject. The model is then placed into SmartBody (http://smartbody.ict.usc.edu), which automatically adds a skeleton, creates a deformable character using automatic rigging, and infuses the character with various capabilities such as walking, jumping, gazing, and so forth.
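The capture-to-character pipeline described above can be sketched at a high level as a sequence of stages. This is a minimal illustrative sketch only; the function and class names (register_poses, auto_rig, add_behaviors) are hypothetical and do not reflect the actual USC capture or SmartBody APIs.

```python
# Hypothetical sketch of the pipeline: fuse four static Kinect scans into a
# mesh, auto-rig it into a deformable character, then attach behaviors.
# All names here are illustrative assumptions, not the real SmartBody API.
from dataclasses import dataclass, field

@dataclass
class Mesh:
    source_poses: int  # number of static scans fused into this mesh

@dataclass
class Character:
    mesh: Mesh
    has_skeleton: bool = False
    behaviors: list = field(default_factory=list)

def register_poses(num_poses: int = 4) -> Mesh:
    """Register several static Kinect scans into one 3D representation."""
    return Mesh(source_poses=num_poses)

def auto_rig(mesh: Mesh) -> Character:
    """Automatically add a skeleton and create a deformable character."""
    return Character(mesh=mesh, has_skeleton=True)

def add_behaviors(character: Character, behaviors: list) -> Character:
    """Infuse the rigged character with capabilities (walk, jump, gaze...)."""
    character.behaviors.extend(behaviors)
    return character

avatar = add_behaviors(auto_rig(register_poses(4)), ["walk", "jump", "gaze"])
```

The point of the sketch is the ordering: registration produces geometry, rigging turns geometry into an animatable character, and only then are behaviors attached.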
This project is a collaboration between USC ICT's Character Animation and Simulation Group (Ari Shapiro and Andrew Feng), USC ICT's Mixed Reality Lab (Evan Suma), and the USC Institute for Robotics and Intelligent Systems (Wang Ruizhe), with additional contributions from Gerard Medioni and Hao Li.