Invoked Computing: Spatial audio and video AR invoked through miming

1,858 views

Published on Apr 16, 2011

Developed by Alvaro Cassinelli and Alexis Zerroug at the University of Tokyo, Ishikawa Komuro Lab (Laval Virtual 2011 Jury Grand Prize).

The aim of the "invoked computing" project is to develop a multi-modal AR system able to turn everyday objects into computer interfaces or communication devices on the spot. To "invoke" an application, the user just needs to mime a specific interaction scenario; the miming prompts the ubiquitous computing environment to "condense" onto the real object, supplementing it with artificial affordances instantiated through common AR techniques. An example: picking up a banana and bringing it to the ear. The gesture is clear enough: directional microphones and parametric speakers hidden in the room make the banana function as a real handset on the spot. Another example: to invoke a laptop computer, the user could take a pizza box, open it and "type" on its surface.
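The mime-recognition step can be pictured with a short sketch. The following is a hypothetical illustration only, not the lab's published pipeline: the yellow colour threshold, the cascade file and the "ear" heuristic are all assumptions. It tracks a yellow hand-held object with OpenCV and treats "object held next to the ear" as the mime that invokes the handset.

```cpp
// Hypothetical sketch: track a yellow object and a face, and fire the
// handset invocation when the object is held near the side of the face.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture cap(0);
    cv::CascadeClassifier face;
    if (!cap.isOpened() ||
        !face.load("haarcascade_frontalface_default.xml")) return 1;

    auto dist = [](cv::Point a, cv::Point b) {
        return std::hypot(double(a.x - b.x), double(a.y - b.y));
    };

    cv::Mat frame, gray, hsv, mask;
    while (cap.read(frame)) {
        // Threshold roughly yellow hues to locate the hand-held object.
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(20, 100, 100),
                         cv::Scalar(35, 255, 255), mask);
        cv::Moments m = cv::moments(mask, true);
        if (m.m00 < 500) continue;                // no sizable yellow blob
        cv::Point obj(int(m.m10 / m.m00), int(m.m01 / m.m00));

        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> faces;
        face.detectMultiScale(gray, faces, 1.1, 3);
        for (const cv::Rect& f : faces) {
            // Crude "ear" positions: mid-height on either side of the face.
            cv::Point earL(f.x, f.y + f.height / 2);
            cv::Point earR(f.x + f.width, f.y + f.height / 2);
            if (std::min(dist(obj, earL), dist(obj, earR)) < 0.4 * f.width) {
                // The real system would now steer the directional
                // microphone and parametric speaker onto the object.
                std::cout << "Mime recognized: handset invoked\n";
            }
        }
    }
    return 0;
}
```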

Here we are interested in developing a multi-modal AR system able to augment objects with video as well as *sound*: in addition to the usual camera-and-projector pair, we use parametric speakers. A parametric speaker projects a low-divergence ultrasound beam that becomes audible (demodulated) when it hits a real object.
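The principle can be sketched in a few lines. This is an illustration only, not the lab's driver code: the 40 kHz carrier, the 192 kHz sample rate and the plain double-sideband AM scheme are assumptions. The audible signal is amplitude-modulated onto an ultrasonic carrier; nonlinear propagation in air and at the struck surface recovers the envelope as audible sound where the beam lands.

```cpp
// Sketch of parametric-speaker modulation: amplitude-modulate audio
// onto an ultrasonic carrier. Carrier frequency, sample rate and the
// simple DSB-AM scheme are illustrative assumptions.
#include <cmath>
#include <vector>

std::vector<double> amModulate(const std::vector<double>& audio,
                               double sampleRate = 192000.0) {
    const double kPi = 3.14159265358979323846;
    const double carrierHz = 40000.0;  // typical ultrasonic carrier
    const double depth = 0.8;          // modulation depth
    std::vector<double> out(audio.size());
    for (std::size_t n = 0; n < audio.size(); ++n) {
        // The beam stays inaudible in free air; nonlinear demodulation
        // at the object's surface recovers the audio envelope.
        out[n] = (1.0 + depth * audio[n]) *
                 std::sin(2.0 * kPi * carrierHz * n / sampleRate);
    }
    return out;
}
```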

Hardware: Mac mini, LED projector, Point Grey camera, IR source, parametric speakers, Arduino and servomotors

Software: openFrameworks, ARToolKit, OpenCV, Arduino
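The Arduino and servomotors in the hardware list presumably pan/tilt the projector-and-speaker rig toward the tracked object. A minimal hypothetical sketch follows; the two-servo wiring and the 'P'/'T' serial protocol are invented for illustration.

```cpp
// Hypothetical Arduino sketch: aim the rig with two servos, driven by
// commands like "P90" (pan) or "T45" (tilt) sent over serial.
#include <Servo.h>

Servo pan, tilt;

void setup() {
  Serial.begin(115200);
  pan.attach(9);    // pan servo on pin 9
  tilt.attach(10);  // tilt servo on pin 10
}

void loop() {
  if (Serial.available() >= 2) {
    char axis = Serial.read();
    int angle = Serial.parseInt();       // angle in degrees
    angle = constrain(angle, 0, 180);
    if (axis == 'P') pan.write(angle);
    if (axis == 'T') tilt.write(angle);
  }
}
```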

For more:
http://www.k2.t.u-tokyo.ac.jp/percept...
