Published on Mar 28, 2014
Web UIs are getting better at detecting and optimising for touch, but it remains a struggle: the primitives on offer are much lower-level than those in the native world. Should we be aiming to abstract all spatial interaction into a 'pointer'? And how can more complex spatial interactions, such as gestures and 3D motion, be handled without an extraordinary amount of effort?
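The 'pointer' abstraction the description alludes to can be sketched as a function that collapses mouse- and touch-style events into one shape. This is only an illustration under assumed event shapes: the plain objects below are hypothetical stand-ins for DOM `MouseEvent`/`TouchEvent`, and in a real browser you would more likely listen for the standardized Pointer Events (`pointerdown`, `pointermove`, `pointerup`), which perform this unification natively.

```javascript
// Collapse mouse- and touch-style events into a single "pointer" record.
// The event objects are hypothetical stand-ins for DOM events.
function toPointer(event) {
  if (event.type.startsWith('touch')) {
    // Touch events carry a list of touches; take the first changed one.
    const t = event.changedTouches[0];
    return { id: t.identifier, x: t.clientX, y: t.clientY, kind: 'touch' };
  }
  // Mouse events expose coordinates directly and have a single pointer.
  return { id: 0, x: event.clientX, y: event.clientY, kind: 'mouse' };
}

const fromMouse = toPointer({ type: 'mousedown', clientX: 10, clientY: 20 });
const fromTouch = toPointer({
  type: 'touchstart',
  changedTouches: [{ identifier: 7, clientX: 30, clientY: 40 }],
});
```

Downstream code can then handle one `{ id, x, y, kind }` record, at the cost of losing input-specific detail (pressure, multiple simultaneous touches) that gestures and 3D motion would still need.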