Virtual Reality is amazing, but to really push the industry forward we need to make it useful for us. That means addressing the most frequent, least sexy tasks we use computers for: word processing, spreadsheets, basic informational browsing. Full notes below!
----------------------------------------------------------------
NOTES:
----------------------------------------------------------------
* 0:00 The video aspect ratio comes from the direct capture of a single eye for the HTC Vive. Rather than showing both eyes or letterboxing the video, it's better to go with the footage you have.
----------------------------------------------------------------
* 1:12 I say we can do anything - that's not totally true. It's going to take a long time for VR to get back to the perceptual pixel density that we currently enjoy on modern computer screens.
- * But what we lose in density we more than make up for in space. We can see and interact with the full 360° around us, and positional tracking allows us to see it with even greater nuance.
- * Another significant shortcoming of current-generation hardware is the long, fixed focal distance. Most current-gen VR displays have a focal distance of a few feet, so attempting to view things any closer than that leads to blurred vision and eye strain. This is especially problematic if you're trying to interact with objects with your hands, because this entire region is too close to see properly!
----------------------------------------------------------------
* 1:17 The field of view of an HTC Vive is wider than what you typically get from a computer screen, so things appear much larger when you're in the world. Combined with the depth cues of seeing objects at different distances, the final result in VR is surprisingly tangible.
- * The lack of buttons and persistent textual labels is intentional. We can only read through the central 5-10° of our vision, so persistent text spread around your field of view is essentially clutter.
- * The dominance of text and labels in our activities is a recent development. If you think about a woodworking shop, the hammer and drill don't need labels telling you what they are - they signify themselves. You don't click a 'start hammer' button, you just swing it at the thing you want to hit. We should think about how to get back to that embodied understanding of the world around us and what we can do with it.
----------------------------------------------------------------
* 3:47 If you're skeptical of the claim that the modern GUI is so old, check out the 1982 demo of the Xerox Star operating system (https://www.youtube.com/watch?v=Cn4vC...). I find it both amazing to see the foresight of the designers at the time and disappointing to think we've done so little to improve the conceptual models we use today.
----------------------------------------------------------------
* 7:10 The links created with the pencil tool are persistent when you move the elements around. That way you can build the structure and then try out different layouts of the elements to get a different perspective on them.
----------------------------------------------------------------
The demo is running in Firefox Nightly, using https://threejs.org to build the world and http://earwicker.com/carota/ for generating the text editors. I'm connecting to Chrome's voice input through a WebSocket connection, and retrieving pictures by patching through to a Google Image search page via a similar WebSocket-enabled hack delivered by bookmarklet. I produced the 3D models in Blender 3D. It's *really* not meant to be offered as a functional demo, but if you must try to run it yourself, the files are at https://drive.google.com/file/d/0B5xK...
----------------------------------------------------------------