Published on Apr 27, 2015
These faces consist of a set of blendshapes generated by capturing scans with a depth sensor (Occipital Structure Sensor) and processing them through our near-automatic pipeline, which establishes correspondences for both geometry and textures in order to minimize texture drift. The animation is generated through real-time puppeteering. The result is rendered and controlled through SmartBody (http://smartbody.ict.usc.edu)
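To illustrate the blendshape idea mentioned above, here is a minimal sketch (not the authors' pipeline; all data and names are hypothetical): each blendshape stores per-vertex offsets from a neutral mesh, and an animated frame is the neutral mesh plus a weighted sum of those offsets, with the weights driven by the puppeteering input.

```python
# Hypothetical blendshape mixing: frame = neutral + sum_i w_i * delta_i.
# Vertices are (x, y, z) tuples; each blendshape is a list of per-vertex deltas.

def blend(neutral, blendshapes, weights):
    """Return animated vertex positions as lists of [x, y, z]."""
    out = [list(v) for v in neutral]
    for deltas, w in zip(blendshapes, weights):
        for vi, d in enumerate(deltas):
            for c in range(3):
                out[vi][c] += w * d[c]
    return out

# Toy example: a single vertex and two invented shapes ("smile", "jaw_open").
neutral = [(0.0, 0.0, 0.0)]
shapes = [
    [(1.0, 0.0, 0.0)],   # "smile" delta
    [(0.0, -1.0, 0.0)],  # "jaw_open" delta
]
frame = blend(neutral, shapes, weights=[0.5, 0.25])
print(frame)  # [[0.5, -0.25, 0.0]]
```

In a real-time puppeteering setup, the weight vector would be updated every frame from tracked facial features, while the blendshape deltas themselves stay fixed after the capture and correspondence stage.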
Note that our process retains as much of the original scan as possible, resulting in a photoreal appearance.
Project team from USC Institute for Creative Technologies: Dan Casas, Andrew Feng, Evan Suma, Oleg Alexander, Graham Fyffe, Paul Debevec, Ari Shapiro