Welcome to my talk for Fullstack Fest, FutureJS. It's about WebVR. My name is Jaume Sánchez; I'm @thespite on Twitter. I work for a company called B-Reel, a production company. We specialize in online web experiences. We've worked through the years on many projects with WebGL, Canvas, Web Audio, many interesting interactive things. On my personal page I have all kinds of experiments with new technologies. My relationship with WebVR, my interest in WebVR, comes, of course, through 3D graphics. Let me, by the way, start my timer, because that's important. I come from the demoscene, if you know what that is. It's an underground movement around creating real-time graphics and audio, trying to explore the limitations, or the capabilities, of any given platform. It used to be for PC, Amiga, Commodore, but now there's a bit of a revival on the web, with Canvas and WebGL being available. So of course, when we come to explore these new technologies, it's very interesting to see these new devices, and they're really driving a revival of virtual reality as we knew it a couple of decades ago, and maybe this time it will catch on. So I'm going to try to explain a bit. I'm going to do an introduction to VR, to what we usually call VR. We know it's an umbrella term for many different techniques, many different approaches, because basically we all understand what virtual reality is. We're all familiar with it through movies and books and many stories. But what is it exactly? It's the act of replacing reality for a user: creating a new visual input, replacing what they're seeing with images, and linking it with some kind of positional tracking, so that movements in real life match movements in the virtual world. That creates this seamless identification, or presence, with the virtual world.
But of course, if you have a flying experience in which you're using a fan to create the sense of wind, that fan is not really part of the VR solution; it's just something you use. So, what is VR, and what is it not? Let's go back to the beginnings of what we know as VR, when it got into TV and movies. This is the best snapshot I've been able to find of what it was like back in that era. It's not entirely representative, but basically we had small CRT monitors — that's why the headset is so deep. They weighed a ton, so they usually needed some kind of counterweight for the user to be able to move their head. And there were a few of these weird devices to track hand and arm positions. But in itself, it's basically this display, plus a set of sensors in the casing that handle the positional tracking and the rotational tracking. So whatever your head is doing is tracked by these devices. We got a lot of these kinds of things in the 90s, which you've stopped seeing, but maybe they'll come back. The thing is that the headsets were pretty heavy, and the graphics resolution was not very high. The sensors especially — for virtual reality to work, you need a very, very small latency. Latency is the time it takes from a signal in a sensor in the physical world, through the whole chain, until the last pixel is updated on screen. The target now is under 20 milliseconds, which is really, really low. Above 20 milliseconds we start to get a kind of desynchronization between what our brain perceives of this virtual reality and what is actually happening, so it creates lag and dizziness. So basically what we had were bulky, laggy headsets. And the graphics weren't exactly amazing, because if you remember, before the GPU revolution in the mid 90s, 3D graphics were really not that good. I was looking for a picture of a game.
I remember playing one around '93 or so, and it looked something like this. And this is not because of the JPEG — it actually looked like this: blurry, with lots of scan lines, because you were watching a CRT up close, which is probably not very good for you. You know how they told you not to get too close to the TV? That's what we were doing back then. So that's the thing: it's not entirely convincing. It's not like it can replace reality. But the good thing is that if you get very low latency, so it reacts very responsively to what you do, the quality of the graphics doesn't matter that much, because your brain thinks: all right, I'm in a world that is low-polygon now, but it's a real world. That didn't happen back then, because we had very laggy rigs. Still, people kept trying to experience VR. Usually you'd go to the arcade — those were very expensive machines, and you had to pay for a few minutes of basically dizziness and drowsiness after playing one of these. But people kept going, despite all these disadvantages, because it was modern, because it was the future, because it was cool. But it didn't catch on, basically. The problem is that it sometimes left you very, very dizzy for a few hours. It was expensive, because those were dedicated hardware machines with high maintenance costs. And the content was pretty much fragmented: there was no way for a developer to create content that would run on anyone's computer. That was out of the question. You had to develop specifically for this device or that device. So, far away from what we would consider a consumer-approachable solution. So it kind of went away. During all these years there have been head-mounted displays for virtual reality — Sony has kept producing them, but not really improving them. But the concept of virtual reality stayed in society through movies. OK, how many people can identify the picture on the bottom right?
What movie is it? The Matrix. What about the top right? Strange Days. What about the top left? Johnny Mnemonic. What about the bottom left? The Lawnmower Man. Yeah, the best worst virtual reality movie ever made. If you haven't seen it, I don't recommend you watch it, because it's probably a painful experience after all these years. Well, it's not as bad as Virtuosity. So VR stayed in popular culture. Let's skip forward to 2012. Palmer Luckey was posting on the Meant to be Seen 3D forums, explaining how he was working in his garage on a new kind of prototype for virtual reality. By about his sixth iteration he had the idea of crowdfunding this prototype as a do-it-yourself virtual reality kit, and posted the idea. People were like: OK, that's cool, another one of these. And then John Carmack — if you don't know who John Carmack is, you only have to know that he's basically a legend in 3D graphics. He single-handedly advanced gaming 3D engines, and the first-person shooter genre itself, creating Doom and Quake and many iterations of 3D engines. He was also on the Meant to be Seen 3D forums and thought: oh, that's cool. He had been running his own experiments. So he asked for a prototype and did some tests — some pretty fancy tests, like taking a video camera and recording a program that would change the output of the screen when pressing a button. He would record that, press the button, and then count the frames it took from the button press until the last pixel had changed. And he had some solutions for improving latency. And then he said: it's good, I think I dig this. And people collectively lost their minds. So the Kickstarter started, pretty successfully: they raised $2.4 million, almost 1,000% of what they wanted to get. And they released the DK1, a development kit for developers. It was not for consumers.
It was basically for developers to try the first version. And the rest is history, as they say. There's the DK2 — let me see if I can reuse this a bit... I cannot. OK, trust me. There's the DK2, there's the Crescent Bay prototype. This one on the bottom left is the actual CV1, the consumer Oculus Rift itself. That's the HTC Vive from Valve, with Steam. Facebook bought Oculus, and here we are. So where are we now? What are we doing? Yeah, I know, I'm way ahead of you, because we always use this image. It's very funny, right? It's very funny to use this image, because it's the amazing thing about virtual reality. But I found maybe a better representation. Yeah, that's maybe the future of VR. I don't know, I'm not judging. So, back to our friend. By the way, that happened in Japan. So we still have a display. The casing has lenses in front of a flat display, because displays nowadays are much better, so we don't have to use tubes anymore. The sensors in the casing are also much better, much faster, with lower latency — both because of the technology (the military has spent a lot of budget on these kinds of things) and because the production process has improved. So we have a pretty neat solution there. The only thing is that the graphics are being rendered on a computer, so we're tethered. This is the DK2. Again, we have the display, we have the positional and rotational tracking, and all this goes into an HDMI port, and the computer renders the graphics. So this is basically a display. It's not that dumb, what it does — it's not just a screen — but the gist is this. Of course, that means you have to have a pretty powerful computer, because this has to run at what they expect now, which is 90 hertz in stereo: basically drawing two different views, one per eye, at 90 frames per second.
Oculus has released a specification for how powerful your computer should be. And of course, many people cannot afford that, but right now their market is gamers, so they probably can spend that. There are already different solutions, like mobile VR. That's something Oculus has been working on with Samsung — John Carmack has been overseeing this. It's still the same idea: the casing has the lenses, but the display and the tracking — the rotational, not the positional — are on your phone, a Note 4. You just put it there and use it. The great advantage is that you're not tethered anymore, so you have perfectly free movement. The disadvantage, of course, is that it's not very powerful; it's not as powerful as a desktop computer. But those devices are actually pretty powerful, so there are a lot of possibilities there, and the hardware and the platform will only improve. It also comes with a remote, a controller, which uses Bluetooth, so the linking is direct, and it creates a much more immersive experience just by being able to take a pad and play in virtual reality. It's pretty amazing, this one. It also has this funny thing: its own funny face — that's what happens when you use it without a Note. Then there's the Cardboard. Cardboard is Google's approach to this problem. It removes the actual casing and assumes that basically everybody has a mobile phone, so the platform is already there; you just put it inside. There are lenses. It's called Cardboard because it's made of cardboard — you can make it out of a pizza box, there are instructions, or you can buy one on Amazon. Nowadays there are plastic ones that can fit different sizes, so the iPhone also works.
This second generation has a button, a modification, because the first one handled interaction with a magnet: when pulling it, you would create a disruption in the sensors, and that would be interpreted as a tap — so not available on all the different phones. For developing VR nowadays, Unity and Unreal Engine are the masters. And what most people are developing are 360 videos — they shoot them or render them, and you're basically standing and looking around in an experience that drives you through whatever has been created; immersive experiences, which can be roller coasters or medical visualization — they're using VR to treat post-traumatic stress disorder, for instance, by exposure therapy — or, of course, gaming. OK, so what is WebVR? Now we get to the specifics of WebVR. Most people ask: is it just the idea of having VR on the web? Is it a set of guidelines for how we should create content when we want to put VR on the web? Is it a kind of hyperspace in which our browser is in 3D and we can browse the internet in 3D? Well, web-based VR is basically virtual reality on the web: any page that is rendering in stereo and responding somehow to the user — the position and rotation of the head, like with a Cardboard, for instance — that's web-based VR. But WebVR, in one single word, is an API. It's a JavaScript API that enables the web to be a platform for VR, using JavaScript and the browser. Given that the different APIs in the browser are not designed for high performance and low latency, they came up with a set of solutions that the browser implements so you can actually use the HMDs and the sensors. So what are the advantages of WebVR? They're the advantages of the web, applied to VR content. You don't need to install content, you don't need a plugin. As a developer, you can address a lot of platforms. It's universal, as the web is.
And, most important, you can link content. URLs are what makes all this discoverable, so you're not locked inside a single application. So, some examples. I'm going to show you one of the first, and hopefully it should work if there's a connection — most of this stuff is online. OK, a little lag. This, for instance, is Brandon Jones's Quake 3 level viewer, rendering the BSP files of the levels, and there's this option for VR — but I think it's taking very long here. Let's see this other one. This is Mozilla's approach to VR: they created the MozVR project. Josh Carpenter and Mr.doob created this flyby demo, showing what was possible — I'll talk about this later. Oh man, the network is not super fast. Anyway, believe me, they're pretty cool. There's more: there's Primrose, which is a text editor in virtual reality. There's Shadertoy — Shadertoy is a web community of shader creators that has recently added VR support. There's an experience by Unboring, by Arturo Paracuellos — very nice. There are virtual reality videos. These are the links for all these projects. So, the WebVR API has a specification and an entry on the Mozilla Developer Network; you can check it there. It basically started with Vladimir Vukićević from Mozilla. Vlad is a software engineer basically in charge of everything that has to do with graphics, canvas, gaming — and you might know him from starting this little thing called WebGL. So he got it started: he took the SDK from the Oculus and integrated it in a nightly build of Firefox. And Brandon Jones from Google, another software engineer working on the WebGL Chrome team, did the same with Chromium. So now we can download versions of those browsers that support all the required things. So WebVR is a very simple API — not simple in the sense that it's trivial, but it's fairly small.
It doesn't do a lot yet, but it can grow. It basically allows the JavaScript layer to interact with those headsets, so you can get at the hardware. As we said, this is one kind of hardware, but there can be more, and it's not very clear what exactly it will be; for now, we assume these devices. So, how do you use the API? On navigator you have getVRDevices. It's a promise that, when fulfilled, returns the list of VR devices in the system — usually a pair of an HMDVRDevice (head-mounted display) and a PositionSensorVRDevice. If you don't have one connected, it creates a mock Oculus Rift that fulfills the data; it doesn't have the positional information, but it's there so you can still develop. The HMDVRDevice has a few interesting methods. You can query it for the eye parameters, for the left or right eye, and you get the geometry — the definition of the geometry for that device, how the lenses work, how you should construct your camera. This is more into the realm of 3D graphics — don't worry — but the information is there. It gives you a fairly nice definition of what you can do with that lens system, and then you can set up your scene based on it. And the PositionSensorVRDevice you can ask for its state, and it returns all these different values: angular acceleration and velocity, linear acceleration and velocity, the orientation and the position. The orientation is a quaternion at a given time. So you basically query it, and it keeps giving you the orientation of the device. So: you get your camera, which is constructed based on the physical attributes of the HMD's display, and then you move it in your virtual world with the information that the sensor is providing. Now, this is fairly complex, and you're probably not building your whole 3D engine yourself — most people don't do that.
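As a rough sketch of that flow — this assumes one of the WebVR-enabled builds, and uses duck typing instead of the HMDVRDevice / PositionSensorVRDevice instanceof checks so the pairing logic can be read on its own:

```javascript
// Pair an HMD with its position sensor. In the early WebVR builds the two
// devices share a hardwareUnitId; HMDs expose getEyeParameters(), and
// position sensors expose getState().
function pairVRDevices(devices) {
  var hmd = devices.filter(function (d) {
    return typeof d.getEyeParameters === 'function';
  })[0] || null;
  var sensor = (hmd && devices.filter(function (d) {
    return typeof d.getState === 'function' &&
           d.hardwareUnitId === hmd.hardwareUnitId;
  })[0]) || null;
  return { hmd: hmd, sensor: sensor };
}

// In a WebVR-enabled browser, getVRDevices() returns a promise:
if (typeof navigator !== 'undefined' && navigator.getVRDevices) {
  navigator.getVRDevices().then(function (devices) {
    var rig = pairVRDevices(devices);
    console.log('HMD:', rig.hmd, 'sensor:', rig.sensor);
  });
}
```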
So the same thing happened as with WebGL, which kind of evolved and matured thanks to Three.js. Three.js is a JavaScript library created by Mr.doob that you can use for creating graphics and addressing WebGL without all the complications of the raw API. With the MozVR project they started VREffect and VRControls, which are abstractions on top of the Three.js renderer and the device orientation controls, so you can render to an HMD and take an HMD as an input for the rotation of the camera. OK, so if you have ever coded with Three.js: you first create a renderer, which is going to get your graphics on the display, and attach it to your document, so it creates a canvas and puts it on screen. You create the scene, which is going to hold all the information of your world. You create a camera, and some controls — in this case OrbitControls, so you can rotate around your scene. You create your geometry — in this case a createWorld function just puts some cubes in the scene — and you run your render loop with requestAnimationFrame: each frame you update the controls, which take the information from whatever you've done with the mouse and update the camera, and then you render the scene. So what you get is this: a Three.js scene you can rotate and zoom in and out of. This would be your basic code. It of course gets more complicated, but this is the basis. What we do to enable this for WebVR is fairly similar. The only differences: the controls we instantiate are VRControls, and we specify the camera whose position and rotation they should drive; and the effect is a VREffect, and we specify the renderer it's going to take over. We set the size, and instead of rendering with the renderer, we render with the effect. We also need to add a double-click handler to enable fullscreen, because to go fullscreen in a browser we need a user action.
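Sketched out, the WebVR-enabled version looks something like this — assuming three.js plus the VRControls.js / VREffect.js example files are loaded, and with createWorld standing in for the cube-placing helper from the slides:

```javascript
// Plain three.js setup, with the two VR-specific changes marked.
function initWebVRScene() {
  var renderer = new THREE.WebGLRenderer();
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.body.appendChild(renderer.domElement);

  var scene = new THREE.Scene();
  var camera = new THREE.PerspectiveCamera(
    75, window.innerWidth / window.innerHeight, 0.1, 1000);

  // Change 1: VRControls drives the camera from the HMD sensor.
  var controls = new THREE.VRControls(camera);
  // Change 2: VREffect takes over the renderer and draws both eyes.
  var effect = new THREE.VREffect(renderer);
  effect.setSize(window.innerWidth, window.innerHeight);

  createWorld(scene); // put some cubes in the scene (helper from the slides)

  // Fullscreen (and hence the lens distortion) needs a user gesture.
  document.body.addEventListener('dblclick', function () {
    effect.setFullScreen(true);
  });

  (function animate() {
    requestAnimationFrame(animate);
    controls.update();            // latest orientation/position -> camera
    effect.render(scene, camera); // render with the effect, not the renderer
  })();
}

// Conceptually, per frame the effect splits the canvas into two
// side-by-side viewports, one per eye:
function eyeViewports(width, height) {
  return {
    left:  { x: 0,         y: 0, w: width / 2, h: height },
    right: { x: width / 2, y: 0, w: width / 2, h: height }
  };
}
```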
So this is basically like this. If I double-click, I go into VR mode — and maybe everything crashes or something like that. OK, this is interesting. Window... oh man, it's always like that. Where are we? Where are you? Awesome. OK, bye. Let's try now. OK, well, it won't enable fullscreen, so you won't see the distortion, but the thing is that this still uses the orientation from the tracker, and you can move the camera too, so it detects where you are. So this is all it takes to take the original scene and turn it into a WebVR-enabled scene with Three.js. Now that we're rendering stuff, there's a very important thing to remember. When we're doing 3D graphics on a 2D display, the actual sizes of things don't really matter, because you're not really able to perceive scale with depth cues. But when we're doing 3D with WebVR, we really need to stick to sizes. The agreement is that one unit — one unit in the virtual world, one unit in JavaScript — is one meter. Keep that in mind, because otherwise you might have a scene that, when you use it on the VR device, looks like you're sticking your head into a tiny model, or it's a huge scene and you're an ant. You can use that to your advantage, but you have to be aware of it. Also: don't move the user without their actual interaction. Latency is important in this sense, because your brain automatically assumes that when you do an action on a controller, your view will react to it immediately. If there's lag, you start to get dizzy and, yeah, it gets really pretty terrible. When you're a seasoned veteran you just push through, but you still feel kind of nauseous. Cool. So what about Cardboard? Kind of the same thing, actually.
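Here's a sketch of the Cardboard variant — again assuming three.js, this time with the DeviceOrientationControls.js and StereoEffect.js example files loaded:

```javascript
// Cardboard variant of the same render loop.
function initCardboardScene() {
  var renderer = new THREE.WebGLRenderer();
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.body.appendChild(renderer.domElement);

  var scene = new THREE.Scene();
  var camera = new THREE.PerspectiveCamera(
    90, window.innerWidth / window.innerHeight, 0.1, 1000);

  // The phone's own gyroscope drives the camera...
  var controls = new THREE.DeviceOrientationControls(camera);
  // ...and StereoEffect draws the two halves, without HMD-specific distortion.
  var effect = new THREE.StereoEffect(renderer);
  effect.setSize(window.innerWidth, window.innerHeight);

  (function animate() {
    requestAnimationFrame(animate);
    controls.update();
    effect.render(scene, camera);
  })();
}

// A detail worth remembering: deviceorientation events report degrees,
// while three.js rotations are in radians.
function degToRad(degrees) {
  return degrees * Math.PI / 180;
}
```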
You instantiate the renderer, add it to your DOM, create your scene, create your camera — and the controls in this case are DeviceOrientationControls, which use the gyroscope on the phone, and the effect, instead of a VREffect, is a StereoEffect. For the rest, it's the same. So I'm going to show you another demo — this should work. I'm going to simulate a Nexus 5. So this is basically what you would see on your phone, right? And if you use it with the lenses, with the phone in the headset, it's actually pretty nice. It's one of the simplest solutions to get straight into VR. So now we're talking about Cardboard: it's definitely more accessible, more available, now that it works with Nexus devices, iPhones, and many Samsung devices. But it still has the limitations of the web and the mobile web. We have to find ways to keep the display awake — we still don't have control from the browser over how the screen behaves. We don't have touch, because the screen is inside the case. Performance is not great: we have to apply the lens distortion using shaders on the mobile GPU, which might not be the best solution in WebGL. The sensors are far from ideal yet. We don't have the Screen Orientation lock API yet, so we can't say: I want this to work only in landscape. There are lots of things to figure out. Now, with all these things you've seen, we have kind of the same code with different controls, different renderers, different effects. So Boris Smus from Google had this idea of responsive WebVR, because it's really pretty dumb that we can't create content that runs across devices — they have the features, but we're not reading the right sensor or using the right output. So: write once, run on every VR headset. He came up with the WebVR Boilerplate — let me see if this still loads...
A starting point for building responsive WebVR experiences that work on popular VR headsets and degrade gracefully on other platforms. It's built on top of the Three.js VREffect and VRControls, and it implements two interesting things. One is the WebVR polyfill, which unifies the different devices and HMDs: a CardboardHMDVRDevice; a GyroPositionSensorVRDevice, which abstracts the device orientation events; and a MouseKeyboardPositionSensorVRDevice — a mouthful, but it basically means you can use a VR scene even if you don't have a rotational or positional sensor, by using the mouse and keyboard. And then the WebVR manager, which is pretty useful: it establishes common UI elements — you've seen there's this kind of button that tells you: go into VR mode, go into Cardboard mode, go into Oculus mode. It implements VR best practices; for instance, on Cardboard it helps you tell the user not to hold the phone in portrait, and it handles the transitions to fullscreen on every platform. So, what happens with input devices? There are lots of input devices, and every VR solution comes with its own. They will basically be PositionSensorVRDevices, but there are many other options we can use in the browser just using JavaScript. There's the keyboard and mouse that we've seen. There's the Leap Motion, which tracks your hands — it gives you a fairly complex representation of those hands, so it might not be straightforward to implement in VR, even though it's very powerful. The Myo is an armband that tracks your arm movements and your hands by reading muscle and nerve activity. We have gamepad controllers. There's the... Seriously, this is pretty annoying. OK. OK, so ideally... OK, this happens a lot. It's got some kind of... Ssh, seriously. OK. So, well, this is an Xbox controller. You plug it in, and there's the Gamepad API that you can query to get the state of the controller.
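A sketch of that per-frame polling — the 0.15 deadzone is an assumption you'd tune per controller, and the stick-to-camera wiring is left as a comment:

```javascript
// Ignore tiny stick values so sensor noise does not drift the camera.
function applyDeadzone(value, threshold) {
  return Math.abs(value) < threshold ? 0 : value;
}

// Poll the Gamepad API; call this once per animation frame.
function pollGamepad() {
  var pads = (typeof navigator !== 'undefined' && navigator.getGamepads)
    ? navigator.getGamepads() : [];
  for (var i = 0; i < pads.length; i++) {
    var pad = pads[i]; // entries stay null until a button is pressed
    if (!pad) continue;
    return {
      x: applyDeadzone(pad.axes[0], 0.15), // left stick, horizontal
      y: applyDeadzone(pad.axes[1], 0.15), // left stick, vertical
      buttons: pad.buttons.map(function (b) { return b.pressed; })
      // ...feed x / y into the camera rig here...
    };
  }
  return null;
}
```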
Let me see if I can get back to those. OK, this was working, but all this mess of USB is probably getting a lot of interference or something. Amazing. Thank you. So basically, you can move your camera with a controller, and it's important that there's low latency: when you press a button, it has to respond in a really, really short time for the user to not get dizzy. Another thing you can use, for instance, is scrolling. There's this device, the PowerMate — it's basically a scrolling wheel, and you can use it for interaction. The important thing with VR is that once you are immersed, you basically lose complete track of your surroundings. It's not like you can easily find the keyboard if you move around — and usually you move around, because you're looking everywhere. So this would be... it's not going to work, of course. Anyway, OK. OK, so this is what you'd be seeing, and you can play with this. It's upside down — see? If you do that to someone wearing a VR device, you probably screw with their head, because I'm upside down and nothing feels wrong. So it's like this, basically. And the same thing can happen with this. Let's see if we can do the same. This would be like a game that you can play in the browser. You can rotate — it's pretty simple. All these demos: come later, find me, and we'll run them properly, and you can try them with the VR headset. It's pretty fun. OK, so what about Cardboard input? We'd need to improve on the shake detection from generation one, which is not viable in the browser — you'd have to read the raw sensors, and it's really not a nice solution. We have to work around the limitations of generation two: that button basically creates a touch on your screen, but it's not amazing. We've tried computer vision, because Cardboard is a nice platform for it: the phone has a camera, and with getUserMedia you can read the camera.
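A sketch of that camera-reading loop, using the getUserMedia shape of the era (prefixes omitted); real marker detection would go where the brightness helper is called:

```javascript
// Pipe the phone camera into a hidden canvas and hand the pixel data
// to a per-frame callback.
function startCameraTracking(onFrame) {
  var video = document.createElement('video');
  var canvas = document.createElement('canvas');
  var ctx = canvas.getContext('2d');

  navigator.getUserMedia({ video: true }, function (stream) {
    video.src = window.URL.createObjectURL(stream);
    video.play();
    (function tick() {
      requestAnimationFrame(tick);
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      ctx.drawImage(video, 0, 0);
      var rgba = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
      onFrame(rgba); // run the (CPU-heavy) tracking here
    })();
  }, function (err) { console.error(err); });
}

// Tiny example of the kind of per-frame pixel work: average brightness
// over RGBA data.
function averageBrightness(rgba) {
  var sum = 0;
  for (var i = 0; i < rgba.length; i += 4) {
    sum += (rgba[i] + rgba[i + 1] + rgba[i + 2]) / 3;
  }
  return rgba.length ? sum / (rgba.length / 4) : 0;
}
```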
It's pretty CPU-intensive, but you can still create some interesting solutions by tracking AR markers and things like that. We can use voice recognition — it also works, though it's a bit messy when there are many people in the same room using the same solution. We can use WebRTC control: we can have a server that reads the gamepad, creates a peer-to-peer connection with the phone, and sends the position and rotation. It might add a lot of latency, so that's important to keep in mind. Worth noting: most of this only applies to Android, because Safari on iOS doesn't support most of these features — but it will probably get there. One interesting thing, while we're talking about transforming the camera with sensors: this would be the normal code, where you create a camera and controls and render with them. But what happens then is that your head — this is your head in the virtual world — gets positioned at the origin, and you cannot move it, because the device itself is dictating how it moves. So if you want to put the camera somewhere else in the scene, or move it around with a controller, what you do in Three.js is create an Object3D — a dummy — make the camera a child of that dummy, and then move the dummy around the scene. That sets the origin for the tracked position, so you can look from wherever you want to be. Also important for immersion: 3D sound. We have the Web Audio API, pretty stable, pretty nice. There are articles about how to create spatial audio, how to set positions — that was the subject of my talk last year, about Web Audio. You can use 3D audio, you can use cones of hearing, you can even calculate the Doppler effect of moving listeners and emitters. What about Cardboard? Yes, it works: we have the Web Audio API even on iOS, so that's good. So what's the future of WebVR? What are the challenges? There are actually many.
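Going back to that dummy-object trick for a second, a minimal three.js sketch — assuming VRControls is loaded; the position values are illustrative:

```javascript
// The sensor keeps overwriting the camera's local position/quaternion,
// so we parent the camera to a dummy Object3D and move the dummy instead.
function makeCameraRig(scene) {
  var camera = new THREE.PerspectiveCamera(
    75, window.innerWidth / window.innerHeight, 0.1, 1000);
  var controls = new THREE.VRControls(camera); // writes camera's local transform

  var dummy = new THREE.Object3D();
  dummy.add(camera);   // camera transforms are now relative to the dummy
  scene.add(dummy);

  dummy.position.set(0, 0, 5); // place the user; remember 1 unit = 1 meter
  return { dummy: dummy, camera: camera, controls: controls };
}

// The head's world position is then the dummy's position plus the
// tracked head offset:
function worldHeadPosition(dummyPos, headPos) {
  return {
    x: dummyPos.x + headPos.x,
    y: dummyPos.y + headPos.y,
    z: dummyPos.z + headPos.z
  };
}
```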
Technically, we cannot be sure that we can hit the performance that's required for nice, non-dizzying VR, which is 90 hertz at under 20 milliseconds of latency. Maybe Steven, later, has an opinion on that; we'll see. Do we need a declarative language for WebVR, using HTML and CSS? Mozilla has been exploring a kind of media query with which you can define how a page would behave in a WebVR-enabled environment, so you can have specific markup, or a panorama that shows as an environment around the page, so it's more immersive. We definitely need tools and libraries for developers. It's probably going to be a complicated road to figure all this out, because basically everything is new; everything is really the future. We need to figure out best practices — how do we establish them? That also applies to native VR. How do we create the interaction and communication patterns? We don't have a click; we don't have a way of pressing things — or we have it like this, and maybe that's not the right thing. What we're doing now is focusing on an element in 3D and then running a timer. It might not be the best solution; there might be a better one. You might walk into a 3D virtual element, like a wall, and your body is going to reject that, so we have to find a way of moving the user out of that situation without actually moving them and creating dizziness — there's the blink, in which everything fades out and you fade in again at the original position. And we have to be aware of the security concerns of this kind of WebVR-enabled web. Imagine, browsing the web, you got into one of those jump-scare sites in VR — we're probably going to give someone a heart attack at some point. You have to be aware of that. So, as a summary: everything is in flux. There's still no consumer version of this product. The API is in flux, because we're figuring it out.
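That gaze-plus-timer interaction can be sketched as a small dwell selector — the two-second dwell time and the names are illustrative, and in three.js the gazed-at target would typically come from a Raycaster cast from the camera through the screen center:

```javascript
// Fire a selection once the user's gaze has dwelt on the same target
// long enough; fires at most once per continuous gaze.
function createGazeSelector(dwellMs) {
  var current = null;
  var elapsed = 0;
  return {
    // Call once per frame with the gazed-at target (or null) and the
    // frame's delta time in ms. Returns the target exactly once when
    // the dwell completes, otherwise null.
    update: function (target, deltaMs) {
      if (target !== current) {      // gaze moved: restart the timer
        current = target;
        elapsed = 0;
        return null;
      }
      if (!current) return null;     // looking at nothing
      elapsed += deltaMs;
      if (elapsed >= dwellMs) {
        elapsed = -Infinity;         // prevent re-firing on the same gaze
        return current;
      }
      return null;
    }
  };
}
```

Whether dwell is the right pattern at all is, as said above, an open question — but it's the one most gaze-based WebVR demos use today.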
The implementations are in flux, and it's a pain, because things stop working after an update — of the browser, of the SDK, of Chromium. The hardware hasn't found a user base yet, but there's already a ton of content. If you look into VR, there's an amazing amount of things out there, and the web can be part of that. So it's really a new world to invent. Go on, create. It might make you hallucinate, I guarantee that, but it will leave you with a warm, fuzzy insight. And I'm not going to show the demo — OK, I am going to show the demo, but seriously, come try this when you can. Go find me. This is amazing. It's so simple, but it's amazing what you can do with it. So, that's me. Thank you. Those will be the slides, if you have any questions.