Welcome. Thanks for joining us this afternoon. My name is Megan Lindsay, and I'm the product manager for WebVR at Google. Today, I'm here to talk to you about WebVR. I'll show you the opportunities it opens up for web developers, how this is going to benefit the VR ecosystem as a whole, and what others are already doing with it. Then my colleague, Brandon Jones, will demonstrate how easy it is to build a cross-device WebVR app by doing it right here on stage. WebVR enables web developers to build fantastic cross-platform, cross-device VR experiences. Here's a quick overview of what WebVR is all about and an introduction to our recently released WebVR Experiments site. VR should be accessible to everyone because it has the potential to let everyone explore, play, and create in amazing new ways. But right now, VR is pretty complicated. To make awesome VR stuff, developers might have to learn a new language and then spend a bunch more time to make that stuff work on multiple headsets. And then when we want to play with their awesome VR stuff, we've got to have the right headset. VR should be easier so developers can make something quickly and share it with everyone, no matter what device they're on. Kind of like how easy it is to share stuff on the web, but with VR. Well, that's the idea behind WebVR. It's VR on the web for everyone. Here's how it works. Say you're in a browser like Chrome and you come across a WebVR experience. You just tap the link, put on a headset, and boom, you're in VR. Developers can build WebVR things the same way they build web things with JavaScript. And since it all works in a browser, it's easy to make it work for all kinds of VR devices, whether it's someone using their phone, their computer, or their entire room. Developers are already building and sharing awesome stuff with WebVR. We've started showcasing their work on a site called WebVR Experiments. It gives you a glimpse into the kind of stuff that's possible.
You can play simple games, see the world in a new way, explore interactive stories, play with a friend, or lots of friends. Each experiment comes with open source code to help others make new experiments. And developers can submit what they make. All of this is an effort to make VR more accessible so anyone can build and everyone can play with awesome VR stuff. So come and start playing at WebVRExperiments.com. I want to tell you why we at Google care about WebVR and why we're investing in it. As Clay Bavor said in the keynote yesterday, immersive computing is going to change how we play, work, live, and learn. We're at the start of the next computing revolution. Many of you have seen the technology adoption curve before. 2017 is shaping up to be a pivotal year for VR where it's moving beyond innovators to early adopters. This is a time of opportunity. However, one of the largest barriers to even broader VR adoption and more user engagement today is content. Content is absolutely critical to the success of any new ecosystem. Giving users great, diverse, and plentiful things to choose from will keep them coming back. I believe that the open web is exactly what virtual reality needs to take it to the next level, and WebVR is the first step along that path. WebVR opens up VR to the largest developer platform in the world. You, web developers, you can build for VR for the first time. The web is an open ecosystem that we at Google strongly believe in and support, where developers from around the world work together to innovate in a standardized, interoperable way. The web isn't controlled by any one company, and it's unique in providing access to content from any device through any web browser. There are no walled gardens here. What this means is that WebVR simultaneously decreases the barrier to entry and extends the reach of your content. Using WebVR, you can start developing for VR with gradual investments by progressively enhancing your existing websites. 
You can light up your site with VR when an immersive experience adds something special, from breaking 360 news on the ground to exploring your next home. With WebVR, you can build your experience just once to reach all VR headsets and the billions of mobile and desktop users, giving you access to the broadest audience possible. And you gain all the benefits of the web by making your content searchable, linkable, and low friction with no installation required. Sharing is as easy as a link. So the potential of VR goes well beyond gaming. What kinds of VR content just make sense to do with WebVR? Your imagination's the limit, though I believe the first wave of WebVR content will be the use cases that are already first and best on the web: ephemeral content found primarily through search and social media, short-form media content, and important but perhaps less frequent tasks where you may just not want to keep an app around. So let's take a look at some of the things that others are already doing with WebVR. Matterport has created technology that allows capturing real-world spaces in 3D to view them virtually for industries such as real estate, travel and hospitality, and architecture, engineering, and construction. Matterport customers like Sotheby's, HomeVisit, and Mansion Global have scanned nearly half a million places and make these available to their users with Matterport's web player. The web player lets the user navigate through the 3D virtual space on their phone or desktop. Before WebVR, users were required to download a separate app to view the full VR experience. This created a lot of friction and resulted in significant user drop-off. But now with WebVR, this friction has been eliminated. The user can step right into the home they are looking at directly from the website. And when the user exits VR, they're still on the original website rather than in a separate app. Matterport supports WebVR for Daydream View, and Cardboard support is coming soon.
With over a million scenes created and posted by their community, Sketchfab is the world's largest platform to publish, share, and discover 3D content online. With WebVR, any Sketchfab model can be viewed and manipulated in your VR headset. Content creators or enthusiasts can use Sketchfab to share or embed models anywhere on the web, enabling them to be explored either on a 2D screen or in VR. Powster creates custom experiences for movies and music, helping with the discovery of major entertainment products. With the rise of virtual reality, Powster used WebVR for the broadest audience reach and created experiences focused on movie websites, showtimes, and ticketing. Here's a look at what Powster has done recently. Movie studios saw over five times more movie theaters selected inside VR than on the regular websites. Audiences viewed the 3D trailer and the 3D gallery images, and they converted to seeing the movies in 3D rather than in 2D in the actual theaters. And finally, from filmmaker Christopher Nolan comes the epic action thriller Dunkirk, opening worldwide this July. Just as the film offers a first-person perspective, Warner Bros. wanted technology to offer a deep, immersive perspective on just what happened at Dunkirk. They brought this vision to life as one of the first collaborative VR experiences on the web, showing the depth of the soldiers' camaraderie through a cooperative experience between two people. Working together to survive the evacuation, each player will become both the rescuer and the victim. Here's a taste of what's coming for Experience Dunkirk. Experience Dunkirk will be releasing in June and will be open to everyone, supporting 2D devices as well as VR headsets. A teaser of the experience is live today, so check it out. Other areas where we're seeing particular interest in WebVR include news, e-commerce, interactive VR films, education, art, and custom business solutions. We are eager to see what web developers use WebVR for next.
So WebVR content's arriving, and WebVR browser support is already here. In Chrome for Android, we've released WebVR support as an origin trial for Daydream View and Google Cardboard. Our friends at Mozilla, Microsoft, Oculus, and Samsung have all released or announced coming support for WebVR, bringing it to Samsung Gear VR, Windows Mixed Reality, Oculus Rift, and HTC Vive. In Chrome, we're continuing to improve and extend our WebVR support. In our latest release of Chrome for Android, currently in beta, we've significantly improved performance, making it more consistent and stable overall, and making it easier to reach target frame rates by adjusting rendering settings. We've also released WebVR support for Chrome Custom Tabs, enabling you to enhance your native Android app with your WebVR content too. Looking forward, we have support for desktop headsets in development. And we're bringing great WebVR content right to the Daydream home screen. Stay tuned for more on this soon. We've talked about progressive enhancement of VR content and how this is a superpower unique to WebVR. Let's dig a little deeper into what that actually means and how others have solved it. Weather.com recently released an interactive WebVR experience called The Birth of a Tornado. They applied progressive enhancement and responsive design principles to ensure their experience can be used on any device by optimizing the interaction model for the device being used. On desktop, you drag with your mouse to change the viewpoint and click to interact. On tablet, you change the viewpoint by dragging with your finger and tap to interact. On mobile, the phone's accelerometer is used to provide a magic window into the VR experience. For Google Cardboard, head movement is used to gaze and target, and tapping the button selects. And Daydream View uses the controller for interaction. Birth of a Tornado also works with Samsung Gear VR and the HTC Vive.
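That device-by-device ladder of input models boils down to a capability check. Here's an illustrative sketch of the idea, not Weather.com's actual code; the capability flags and mode names are invented for the example:

```javascript
// Hypothetical progressive-enhancement input selection. The `caps` flags and
// mode names are illustrative assumptions, not taken from Birth of a Tornado.
function pickInteractionMode(caps) {
  if (caps.hasController) return 'controller';    // Daydream View, Vive, etc.
  if (caps.isPresenting) return 'gaze';           // Cardboard: gaze + button tap
  if (caps.hasOrientation) return 'magic-window'; // phone accelerometer
  if (caps.hasTouch) return 'touch-drag';         // tablet
  return 'mouse-drag';                            // desktop fallback
}
```

The ordering matters: the richest available input wins, and every device still gets a working mode.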
The model used in Birth of a Tornado can be used with many WebVR experiences. And we have a library to help make this easy that Brandon will show you a bit later. This is just one way to support cross-device experiences. Another example is Dance Tonight. Some of you may have experienced the Dance Tonight project at I/O yesterday evening, also built in WebVR. Dance Tonight is an ever-changing VR experience made by LCD Soundsystem and their fans. It's made entirely from VR motion capture recordings of fans dancing to a new song by the band. Another special thing about the project is that it works across devices while playing to their individual strengths. On desktop and mobile, you get to be in the audience. On Daydream, you're on stage. And in room-scale VR, you're a performer. If you didn't catch this in person, it'll be available online this summer. Your input choices and supported devices may differ for your WebVR project, though we recommend starting with a goal of universal access as a best practice and just see how far you can go. While WebVR content is still best experienced immersively in a VR headset, most people have still never tried immersive VR at all. WebVR content will be their very first hint of what they're missing. The great WebVR content that you create will be the reason a new user decides to pick up their very first VR headset. I hope you're as excited about WebVR's potential as I am. Now I'd like to introduce Brandon Jones. Brandon started WebVR as his 20% project several years ago and co-authored the spec. Today, he's gonna build a cross-device WebVR app for you live on stage. Please welcome Brandon. Thank you, Megan. So Megan talked about the principles of progressive enhancement. That is, making pages that can be used on desktop and mobile devices as well as across multiple VR devices. That can seem very intimidating, but the right tools can make it fairly simple. It all starts with creating some great WebGL content.
WebGL is an API for rendering 3D graphics in the browser and is supported across all platforms today. There are many great WebGL tools and frameworks out there to help you bring your ideas to life. From there, turning your WebGL page into an immersive WebVR experience can be as easy as adding a few lines of code. To show you what we mean, we're going to build a quick WebVR experience on stage today. The app that we're going to be building will be a 360-degree photo viewer, which is a great fit for WebVR. These types of photos are easy to create, with many cameras available that capture them, and they provide a fun experience that you don't get from traditional photos. Best of all, they can easily be viewed in 2D in the browser with click-and-drag or magic window controls, while VR can optionally be used to provide an enhanced viewing experience for users with the right hardware. 360-degree photos also represent a class of content that's difficult to get users to install a native app for. Because of the content's simplicity, the overhead of an install is probably enough to discourage most users, given that they likely only expect to spend a few seconds looking at each image. It's very likely that most users would never get past the app store link. Ideally, they could fluidly step into VR and out of VR with very little overhead, view the images quickly, securely, and then move on without having to uninstall anything afterwards. This sort of ephemeral experience is what WebVR excels at. So now we're going to switch over to the code and actually build the experience. Now, we're starting out here. Can we get the laptop up on screen? Thank you. We're starting out here with some boilerplate code. We're using three.js. It's not the only framework that you could use for creating WebVR and WebGL experiences, but it is a fairly common one. There are also frameworks available that are expressly for WebVR content, such as A-Frame or React VR.
But because three.js has fairly wide developer acceptance already, we're using that today as our example. Now, the boilerplate that we're starting here with is fairly simple, so I'm not going to cover it in detail. This is the type of thing that you would see in step zero of a three.js tutorial. And what it produces for us is a black screen. Well, that's okay, that's a great starting point. Thank you, that was awesome. So, because we're creating a 360-photo gallery viewer, we need images. Now, normally these would come from, of course, a database, a CMS of some sort, but we're just going to hard-code them in for the sake of example. And then we need a way to view them. Now, because it's a 360 image, the method that we're going to use is to create a gigantic sphere, in this case about 500 meters in radius. Invert it, and that's what the viewer's scale here is doing, is inverting on the x-axis so that all of the faces point inward, and then keep the camera for the scene at the very center of that. That way, when you look around, you're seeing this sphere all around you that's practically at infinity. Next, we need to put our image on the inside of that sphere, using three.js's MeshBasicMaterial. This loads up what's known as a texture in WebGL, and we'll apply it to the inside of that sphere, and it just so happens that three.js's default coordinate system works out really well for equirectangular images, which is the default that most 360 cameras spit out. Finally, we need to combine the geometry and the material together into a three.js Mesh, which is the basic primitive that it uses to render, and add it to the scene. So once we've done that, we can switch back to the browser and see that we now have a photo. Unfortunately, there's no interaction, so we can't tell that it's a 360 photo. We'll fix that by pulling in what's known as the WebVR polyfill. The WebVR polyfill is a JavaScript implementation of the WebVR API.
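To make the inverted-sphere trick concrete: an equirectangular image is just longitude mapped to the horizontal texture coordinate and latitude mapped to the vertical one, which is why the sphere's default texture coordinates line up so nicely. This small function, an illustration rather than part of the demo's code, converts a unit view direction into equirectangular texture coordinates, assuming the three.js convention that negative z points away from the viewer:

```javascript
// Map a unit view direction to (u, v) on an equirectangular image.
// Looking straight ahead (0, 0, -1) lands at the image center (0.5, 0.5).
function equirectUV(dir) {
  const u = 0.5 + Math.atan2(dir.x, -dir.z) / (2 * Math.PI); // longitude -> 0..1
  const v = 0.5 - Math.asin(dir.y) / Math.PI;                // latitude  -> 0..1
  return { u, v };
}
```

Looking straight up maps to the top edge of the image, straight down to the bottom, so the whole photo wraps the sphere with no seams except at the poles.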
It's targeted primarily at mobile devices. It uses their accelerometers to provide the basic head tracking used for a Cardboard-style experience. It also happens to provide us with a basic emulated click-and-drag mode on desktop that we can use to get basic functionality on this desktop computer. Now, in order to make our application responsive to the WebVR polyfill, we have to add a three.js extension called VRControls. We'll attach this to the camera, and this makes it so that any head motion that happens on your headset is automatically applied to the camera itself. In order to make sure that it keeps updating with the head motion, we also need to add an update function to the animation loop. Once those two elements are in place, we now have basic click-and-drag functionality. And we can see that we actually now have a complete photo viewer for a single photo. But that's not terribly interesting. We want a gallery. So the next thing that we're going to do is provide a 2D version of the thumbnail gallery that allows us to switch between images. We'll loop through the image gallery that we had loaded up previously, load a texture for each, and then pass them to this addToGallery2D function that we're about to define. Here, we're going to use a little bit of basic HTML manipulation to create a container div and add a simple class to it. I'll leave the CSS as an exercise for the viewer. We append the image that's associated with our texture to that container element, and then add a click handler. When we click on this thumbnail, we're going to swap out the texture on our gigantic viewer sphere with the texture that's associated with the thumbnail. And this will give us the basics of iterating through the gallery. Now, I should note that this is actually a terrible practice, because normally you would want to use smaller images for the thumbnails so as not to impact loading time.
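The addToGallery2D step just described amounts to a few lines of DOM work. Here's a hedged reconstruction, not the exact on-stage code: the document object and an applyTexture callback (which would swap the texture on the viewer sphere's material) are passed in explicitly so the helper's logic stands on its own:

```javascript
// Illustrative reconstruction of the 2D gallery helper; names are assumptions.
// `doc` is the DOM document; `applyTexture` swaps the viewer sphere's texture.
function addToGallery2D(doc, galleryEl, img, texture, applyTexture) {
  const container = doc.createElement('div');
  container.className = 'gallery-thumb'; // CSS left as an exercise
  container.appendChild(img);
  // Clicking a thumbnail swaps the big sphere's texture for this one.
  container.addEventListener('click', () => applyTexture(texture));
  galleryEl.appendChild(container);
  return container;
}
```

In a real app, the thumbnail `img` would point at a downscaled copy of the photo rather than the full-resolution original.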
But because we're doing everything locally here, and for the sake of time, I'm skipping over that. But you can see here that as we click on each of the images, we can cycle through the various items in the gallery, and they're all viewable. At this point, the 2D site is done. We've done everything that we need to work both on desktop and mobile. But we're here for WebVR, so let's figure out how we allow people to dive into VR from here. The next thing that we're going to pull in is a utility called WebVR UI. This is a library created by the Google Creative Lab that provides a button that advertises WebVR support to your users. It will also communicate to them if they don't have WebVR support. In order to add that to our application, we need to go in, create an instance of the WebVR UI button, append it to the DOM, and then in this case, we're going to ask it what VR device it's associated with and cache that off for later use. If we switch over here now, you can see that we now have the button that normally would tell us that we can go into VR, but because we're on a desktop site, we can't actually go in yet. That is, we don't have the hardware connected, so we can't go into VR here. We would be able to go on mobile, and I'll switch over there in just a second. Now, even if we could go in because we have the correct hardware, we haven't actually wired up any of the VR rendering yet. We'll do that with another three.js utility called VREffect. This makes it so that the content that would normally go through the renderer and show up on screen will actually be rendered twice for a stereo view, using the correct parameters that it queries from the WebVR API. We also need to update the animation loop to make sure that we can properly handle when we're in VR mode versus non-VR mode. We do this by asking the WebVR UI button if the user has clicked it and if it's presenting, and if so, we'll render the scene using the VREffect.
Otherwise, we'll render using the standard three.js renderer. And then the last thing that we have to do is make sure that our standard requestAnimationFrame is actually using a VR-specific variant if it's available. This makes sure that if we're on a desktop device where the VR headset runs at a higher frame rate than your average monitor, like 75 or 90 hertz, we're running at the headset's frame rate. Otherwise, the user will experience a lot of stuttering in VR and come away possibly sick. So, we will update that. And at this point, we'll need to switch over to our Android device to see the rest of the experience. All right, great. So you can see on Android, we now have that nice magic window interaction mode where we're able to spin around and see the 360 photo without going into VR at all. This is great if you just want to showcase, one, that there's 360 content here, because the user's natural hand motion will give them a hint that there's more to see. And two, it just gives people a preview if they're, say, sitting on a bus and maybe don't necessarily want to blindfold themselves. However, once we hit the Enter VR button, we can now switch into VR mode. And at the moment, we're configured to use a Cardboard device. And you can see that we would also get the nice stereo view. Now, these images aren't stereo, but you could render a stereo view of the scene, and it would work correctly in any Cardboard device. So we've now created a basic WebVR-enabled 360 image viewer. But once again, on the VR side, we've only done it for a single image. And that's not great. Especially when you're using mobile VR, you don't want to force the user to put the phone into a headset and take it out repeatedly in order to navigate between different elements in the scene. So ideally, we'd like to take the 2D gallery that we've created here and pull it into 3D to allow the users to select between the different thumbnails there.
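Before moving on, the frame-loop swap described a moment ago can be sketched in a couple of lines. This is a simplified illustration of the WebVR 1.x-era pattern, with the fallback passed in explicitly (in a browser it would be window.requestAnimationFrame):

```javascript
// Prefer the VRDisplay's requestAnimationFrame when one is available, so the
// render loop runs at the headset's refresh rate (e.g. 75 or 90 Hz) instead
// of the monitor's. `fallbackRAF` stands in for window.requestAnimationFrame.
function chooseRAF(vrDisplay, fallbackRAF) {
  if (vrDisplay && typeof vrDisplay.requestAnimationFrame === 'function') {
    return vrDisplay.requestAnimationFrame.bind(vrDisplay);
  }
  return fallbackRAF;
}
```

Scheduling frames through the display is what keeps the headset fed at its native rate and avoids the stutter that makes users sick.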
Now, this gets a little bit more complicated than the DOM elements, because we are dealing with 3D, so a lot more math is going to be involved. But the basics are still pretty simple. We're going to be using spheres, once again, just like the larger viewer, to represent the individual thumbnails. They're just going to be much, much smaller this time. And then down in, when we're looping through our gallery, we're just going to add items to the 3D gallery as well as the 2D. That looks like this, which now we start to get into a lot of math, because it is 3D. Trig is kind of par for the course. But skipping over the exact details of what we're doing here, once again, we're creating the texture to go along with the image. We're associating it with each of our thumbnail spheres. And then the positioning code here just puts them in a semicircle around the user's waist, somewhere that's kind of non-intrusive but easy to reach. So if we switch back over to the Android, we should now see, yeah, we have this nice semicircle of the same thumbnails, once again. Now, this is cool, but that's probably not the experience that you want to leave your users with most of the time, because we're doubling up on the thumbnails in the non-VR version. So over in the code, we'll just add one more line to say, if we're not in VR mode, let's just hide the gallery. Makes things a little bit cleaner. So next up. Now, we've got the thumbnails, but we don't have a way to interact with them. We're going to add that by using a library called Ray Input. Ray Input is the library that was used by the weather.com example that Megan was talking about earlier. And what it does is provide a single unified input model for Cardboard, Daydream, or higher-end desktop experiences with six-degree-of-freedom controllers. In all cases, it gives you a cursor and a ray that are based off of whatever the user's capabilities are and uses that to cast into the scene, find an object, and allow you to click on it.
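The semicircle placement mentioned above is simple enough to show in full. This is an illustrative sketch, not the on-stage code; the radius and height values are invented, and it assumes the three.js convention that negative z points away from the viewer:

```javascript
// Lay `count` thumbnails out evenly on a semicircle in front of the user,
// slightly below eye level. Radius and height are made-up example values.
function semicirclePositions(count, radius = 1.5, height = -0.5) {
  const positions = [];
  for (let i = 0; i < count; i++) {
    // Spread over 180 degrees; the 0.5 offset centers the arc in front.
    const angle = (Math.PI * (i + 0.5)) / count;
    positions.push({
      x: radius * Math.cos(angle),
      y: height,
      z: -radius * Math.sin(angle), // -z is "forward" in three.js
    });
  }
  return positions;
}
```

Each position would be assigned to one thumbnail sphere, keeping the gallery within easy reach but out of the center of the view.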
So to start off with, oops, went a little too far. To start off with, we have to instantiate the Ray Input. We provide it the camera so that it knows where we're looking. We set its size. That's just a little bit of bookkeeping. And then we add the meshes that are associated with it into the scene. This is so that we can get the cursor and the ray that are associated with it. Like VRControls, we also have to update this every animation frame to make sure that it stays in sync with the controller movement, in this case. We also want to make sure that we know when we're actually selecting each of the thumbnails. So we're going to modify their opacity whenever the cursor is over the top of them. We'll start them out with a lower opacity, and then we'll have some event handling here that makes their opacity higher as the cursor hovers over them. One bit that I skipped here: we do need to loop through all of the thumbnails in our gallery and let Ray Input know that they are selectable. Otherwise, it will also try to select the larger sphere in the back, and that doesn't do us any good. And then finally, the last piece is that we say, when we have clicked with whatever the primary input is for our control mechanism, we're going to do the same thing that we did for the 2D gallery, which is swap out the texture of that thumbnail for the larger sphere. Now let's see how that looks on mobile. Well, let's save it first. Then we'll see how it looks on mobile. OK, so you can see, because I skipped over this, we no longer have the image spheres in 2D. But if we jump into VR mode, and there we go, we now have a nice cursor that can swivel around and select the different spheres based on our gaze. This is, once again, Cardboard mode. And if we click on one, it switches the thumbnails. So now we have a fully functioning gallery that the user does not have to leave VR in order to switch through images.
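Under the hood, deciding which thumbnail sphere the ray is pointing at comes down to a classic ray/sphere intersection test. Here's an illustrative version of that core geometry, not Ray Input's actual source, assuming a normalized ray direction:

```javascript
// Does a ray (unit direction `dir` from `origin`) hit a sphere at `center`
// with radius `radius`? This is the geometric core of gaze/pointer selection.
function raySphereHit(origin, dir, center, radius) {
  // Vector from the ray origin to the sphere center.
  const ox = center.x - origin.x, oy = center.y - origin.y, oz = center.z - origin.z;
  // Project it onto the ray to find the closest approach parameter t.
  const t = ox * dir.x + oy * dir.y + oz * dir.z;
  if (t < 0) return false; // sphere is behind the ray
  // Squared distance from the sphere center to the closest point on the ray.
  const cx = origin.x + dir.x * t - center.x;
  const cy = origin.y + dir.y * t - center.y;
  const cz = origin.z + dir.z * t - center.z;
  return cx * cx + cy * cy + cz * cz <= radius * radius;
}
```

A library like Ray Input runs a test like this against every object you register as selectable, which is why the big viewer sphere has to be excluded.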
Now, to demonstrate that this works correctly with more complex input mechanisms as well, we're going to come out here. Well, that's not what I wanted. Let's sync this up with a Daydream device. The normal part of the Daydream entry flow is that we have to sync up the controller, which takes just a moment. And then we should be back in our same experience, where, now, you can see a basic ray-based selection cursor that does not depend on the movement of my head but can still be used to do basic selection from the gallery. So that's it. In about 150 lines of code, we've created an experience that works on desktop, mobile, and VR, both Cardboard VR and Daydream VR. And if we were to put that on some of the larger desktop systems, it would even work with an Oculus Rift or a Vive. So let's switch back to the slides. To summarize some of the recommendations that we covered during the development: at this early stage, we should be focusing on apps that can be used in 2D, with VR as an enhancement, while the VR ecosystem is still growing. We should also strive to allow users to stay in VR for as long as possible, as frequently switching between the 2D and VR modes can get a little tiring. Finally, there's a variety of input methods across all VR devices, and using a library like Ray Input helps normalize that into a single interaction model that's common between all modes. So now I'm going to turn it back over to Megan, who's going to tell you a little bit more about the future of Chrome and VR. Thank you, Brandon. Everything up to this point has been about bringing VR to the web. Now I want to talk about bringing the web to VR. Today, when you encounter a WebVR link, you drop your phone in the Daydream View, and then when you're done, you take the headset back off. Soon, you won't have to take it back off to continue your browsing.
As we announced in the Daydream keynote this morning, we're bringing the full Chrome browser and the entire web to VR, for Daydream View first. You can use the Daydream controller to navigate regular web pages and follow links. And for WebVR experiences, you get transported into fully immersive worlds. You'll be able to watch videos in a large-screen, theater-like experience. Plus, Chrome in VR is the same app that you use for browsing in 2D. It shares all of your tabs, bookmarks, and history. You don't have to re-log-in to websites in VR. Things just work. VR browsing is coming to Chrome for Android later this year. So what's next? Take a look at our WebVR developer portal, with some great tutorials and case studies and the helper libraries that Brandon showed us earlier. Check out the full set of WebVR Experiments online and consider submitting your own. You can also try out some of the WebVR experiments here in person at I/O in the experiments area. Thank you so much for joining us today. And I can't wait to see what you come up with.