So I am Chris Wilson. I'm here with my colleague John Pallett, who will be out in a minute, and we're here to talk about the next great platform: the immersive web. This is actually not running off my laptop, so hopefully I won't have any problems; at least they won't be mine. Now, we use this word "immersive" a lot, and I wanted to define what I mean by the immersive web. At this point I think pretty much everyone has heard of virtual reality, at least. Hands up, who saw Ready Player One? That's actually a really small percentage. You should go see it; it's a good movie, it's good entertainment. So it's totally just like that. Well, not really. Virtual reality is all about immersing yourself in a completely alternate reality, putting what I refer to as the reality blinders on: completely replacing everything you can see, and usually hear, and immersing yourself in a totally different world. That world may be a game, it can be a visualization of a data set, it can be a virtual workspace. My kids like to play this game where I put on a VR headset and they see how close to me they can dance before I notice they're there, which is usually pretty close. But certainly when I'm at my desk at work, this is my favorite place to go into VR, because it masks off everything around me; the sea of cubicles isn't there anymore. You usually experience virtual reality through a tethered VR headset like a Vive or a Rift, a standalone device like the Oculus Go, or a smartphone VR system like Daydream View, Gear VR, or Google Cardboard, my personal favorite. Any of these devices ends up using a combination of head tracking, screen display, optics, and controllers to make you feel like you're present in a totally different world.
Now, at Google, we've been working on exposing this to the web for really quite a long time, on all of these devices, from high-end desktop headsets on Windows to a polyfill that supports WebVR on any smartphone in Cardboard, yes, even Safari on iPhones. And in fact, you may not even have a VR headset; you might not even have this 29 cents' worth of Cardboard. WebXR and the XR polyfill can actually present VR worlds on a plain mobile device using the accelerometer and the orientation API, so you can look around a 3D scene. This lets users look around your 3D world even if they don't want to drop their phone into a headset. Now, in addition to bringing VR to the web, my team also works on bringing the web into VR, at least on Daydream devices. Starting with Chrome 67, you can actually launch a VR version of Chrome from the Daydream home screen, and we put a lot of work into making browsing the traditional 2D web a really great experience. But of course, the really cool part is when the browser in VR can be used to browse immersive worlds: you can hop back and forth between the 2D web and VR content that's hosted directly on the web. It gives you a really easy and actually totally immersive experience; it's like you're just navigating inside that world. Now, having a browser inside the VR world turns out to be really useful: 83% of Daydream users also regularly use the browser in VR. This was something we added after the fact, since Daydream shipped without a browser, and now it's a regular occurrence for most of those users. That shows how important content from the web is, even when you're living inside a VR world. But enough about virtual reality for a second. I want to talk about when you don't want those reality blinders on. I actually like to interact with my kids; I want to be able to see them, and not have them dance in front of me.
And the most exciting extension of the computing platform, to me, is the concept of augmented reality. Not just overlaying AR stickers and animated characters, or being able to drop virtual objects into your reality. The key to understanding AR's potential is that it's really about your computer getting to see: getting to see the world around you, interpret parts of it, find surfaces, in the future recognize objects, and then augment that reality with virtual bits of user experience. Instead of trying to totally replace your reality, we really want computers to just convincingly blend virtual and real experiences. Now, for AR there are some headsets, like the HoloLens and the Magic Leap One, and there are projection systems that display onto real-world surfaces. But most users will probably first experience AR like I did: using a camera pass-through experience on a mobile device, showing things like AR stickers. And the cool thing is, think of what the web is really, really good at: the long tail of software, content, and products, the experiences users will happily click on but wouldn't necessarily install on their devices. The massive success of the web as a commerce platform is a huge, huge benefit. You can start to see how enabling developers to build immersive experiences delivered in this really ephemeral fashion is a fantastic idea. You don't have to install an app to see how that couch is going to look in your living room. You don't have to install an app to view an immersive video trailer. The ephemerality of the web makes it a fantastic match for these immersive experiences. And our mission, John's and mine, is really to enable web developers to break that plane, to break out of the flat design world we've been living in for so long and enable these truly immersive experiences.
And to enable that, we really need to start with the baseline: being able to connect immersive displays and render to them. That's where the WebXR Device API comes in. It replaces the old WebVR API and evolves those concepts to expose not just VR but also AR functionality. And in true extensible-web-platform layering, this is really only the underpinning: it lets us connect to those devices, render to their displays, understand which way they're pointed, interact with controllers, that kind of stuff. This is a really broad, multi-year effort by a bunch of different companies: Google, Mozilla, Microsoft, Samsung, Amazon, Oculus, and a whole bunch of others have been working on this for a while. And this has all been developed, by the way, in the W3C. In fact, we have a brand-new Immersive Web Working Group. I personally co-chair it with my colleague Ada Rose Cannon from Samsung, who's sitting right down here in the front. I just had to make her blush. We're tasked with taking this spec to an actual final status. And this is why we created a working group: because we feel it's super important to actually land this now, and not just keep talking about how cool it could be in the future. It really shows the maturing of this API, as it moves closer and closer to becoming a final standard. And of course, we also continue to incubate new ideas for the immersive web in a community group. Now, if you want to experiment with the WebXR API today, you can enable it in Chrome's about:flags. If you want to try out AR scenarios, you have to enable a second flag; that should be going away soon, because we've done some new mode work in the spec to cover that, too. We also have a currently running origin trial if you want to deploy this to normal users, provided, of course, you're willing to accept the responsibility of making changes as the spec and our implementation change.
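To make the "connect to a device" baseline a bit more concrete, here is a minimal, hedged sketch of how a page might feature-detect the draft API, which hangs off `navigator.xr`. The `nav` parameter is an assumption introduced for illustration: it stands in for the browser's `navigator` object so the helper can be exercised outside a browser, and the exact method names track the draft spec and may change as it evolves.

```javascript
// Minimal feature-detection sketch for the draft WebXR Device API.
// `nav` stands in for the browser's `navigator` object so this helper
// can run outside a browser for illustration purposes.
function hasWebXR(nav) {
  return Boolean(nav && nav.xr && typeof nav.xr.requestSession === 'function');
}

// In a real page, you would then ask for a session, roughly:
//
//   if (hasWebXR(navigator)) {
//     const session = await navigator.xr.requestSession('immersive-vr');
//     // ...set up a WebGL layer and start the render loop...
//   }
```

Gating on the feature check like this is also what lets content degrade gracefully in browsers where the flag is off or the API is absent.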
Now, finally, I've mentioned the WebXR polyfill a couple of times, and this is something I wanted to give a little more detail about. It's a polyfill JavaScript library maintained by the community group, and it helps developers in a couple of different ways. First, it offers a JavaScript-only implementation that works for VR scenarios in any mobile browser, using orientation events. So even in mobile Safari, with Cardboard devices or flat displays, you can actually get a WebXR implementation purely through JavaScript. Second, if a browser implements the older WebVR API, as Firefox and Microsoft Edge did, the polyfill can build XR on top of that, and you get the hardware speed-up of the browser's WebVR implementation. So you can instantly make your WebXR content accessible to a much wider range of users with just one script. Now, with that, I want to bring out my colleague John, who's going to drill down into the augmented reality possibilities in a bit more detail. Thanks, John. Thanks, Chris. So let's talk a little bit more about augmented reality, or AR. As Chris mentioned, augmented reality is largely about being able to overlay information on top of the real world. If you've tried out AR stickers, or put masks on your face on a smartphone, you've already seen augmented reality. There are hundreds of millions of phones and tablets out there right now that support augmented reality, and the number is growing. And most of those devices have web browsers, which means there is a big opportunity for web developers here. The lowest-hanging fruit, really, is the ability to add a new experience to an existing 2D web page. It doesn't require an entirely new site; you can add AR capability to an existing 2D website. A number of partners have been experimenting with this, using the WebXR Device API and turning on the hit-test flag that Chris mentioned.
They're doing this in Chrome Dev and in Canary. One example is Plattar, an augmented reality platform that lets businesses put virtual objects in the real world. They've done a couple of demos, and there are some interesting ideas here. On the left, you can see that users can learn about a product by getting information in context, on the product itself, rather than having to go through data sheets. It can actually save shipping demo units to businesses that are thinking about buying machinery or heavy equipment. On the right, you can see that looking at objects from different angles has a lot of value, in this case for fashion, and generally for shopping, but also for education, where students can explore the object or artifact they're learning about. Now, this doesn't all have to be in augmented reality. You could do both of these using a 3D model on your website, but if you can put the object into the real world, you get a better sense of context as well as scale. What's interesting, though, is that from a user-experience perspective there is a lot to learn about augmented reality, particularly how it fits on the web, and that's a good reason to start experimenting with it now if you're thinking about adding it. By way of example, West Elm, who sell home decor and furniture, did some in-store testing. They went to one of their stores, picked four shoppers at random, and showed them a prototype shopping website that incorporated augmented reality. Now, this isn't a huge study, it's four shoppers, but they had some interesting findings, and they gave us permission to share them. The first thing they learned is that for these customers, the terms "AR" and "augmented reality" aren't really common vocabulary. Basic terminology, like "view in your room," is a better way of telling users what to do.
But even then, without a visual showing what's going to happen, that text can really get lost among everything else that's visual on the site. So what they're looking at now is adding both an icon and text, so that the user has a call to action and knows what to do. One approach might be a rotating 3D model, so the user understands, hey, this is more than just another image on the page. The next thing they learned is that users get confused without clear directions. You can see here that there's a delay while the user tries to figure out what to do. Do I tap? Do I move my phone around? What is this circle on the floor? I've never seen this before. You have to actually guide users to the path of success here. West Elm is researching clear directions, as well as things like progress bars and loading indicators, so users don't get lost and can see how to get to the point of actually placing furniture, or any object, into the real world. After that, once participants had successfully placed an object, the most common request was the ability to move it, spin it, or even remove it from the scene. The original placement wasn't always where they wanted it to be, and it wasn't always clear to users how they could change it. Another thing they heard from their test subjects was that validation that the size of the virtual object matched the real world would also be helpful, because if you're shopping for furniture, you want to make sure it fits into the space you have. From that perspective, showing some real-world dimensions with the model could help. And finally, there was some feedback on how real the assets look. This study was particularly unique because West Elm was doing it in store, so they had the opportunity to put the virtual object right next to the physical object and get real shopper feedback. Generally speaking, the feedback on realism here was only about six out of ten.
And you can see clearly that there are differences between the two. So West Elm is looking at how to handle typical lighting scenarios, and at making the shadows underneath the object a little more pronounced and detailed. The key message here is that there's a whole lot you can learn, and a lot of streamlining still to do. Fundamentally, augmented reality on the web has some differences from augmented reality in an installed app, so users discover it in a completely different way. Despite those challenges, a very telling finding from this study, I think, was that three of the four participants said they would absolutely, ten out of ten, use AR to do furniture shopping, once they knew what the term meant. Now, if you're like me and you grew up with commercials about three out of four dentists recommending a toothpaste, you've probably wondered: what does that fourth dentist think? Does it rot teeth? And the answer here is actually "maybe." So, three people said absolutely, one person said maybe. I personally visualize it like this: these are people shopping for home decor, and when asked, "Would you use this?", one of them said, "Maybe; I'm here to buy a couch." I don't actually think that's how it went. But the point here is that three people, picked at random, said absolutely. That's a strong signal, and it's also consistent with what we've heard from partners and users who see value in being able to visualize 3D objects, ideally in the real world. So if you're thinking about experimenting with this and adding it to your website, let's talk a little bit about how you can build things. Chris mentioned the flags you can enable a little earlier in the presentation, but let's talk about how you actually add the immersive experience. You could write WebGL code directly.
And if you're doing augmented reality, that would be on top of the camera feed. That's one way to do it. Generally, though, we recommend using a library to help. For example, three.js is a helper library for 3D graphics on the web. It takes care of a lot of the heavy lifting for 3D geometry management and rendering, so you don't have to work with WebGL directly. So let's look at an example of how three.js and WebXR can work together. We're not going to cover the whole process of creating a WebXR session and all of that; instead we're going to look at how a virtual object can be placed into a real-world scene, with a narrow use case: putting a reticle onto a real-world surface. For those who haven't heard that term before, a reticle is an indicator, in this case one you can see as it moves around, and it's typically a user-interface construct that tells the user what they can do. You saw some of this in the West Elm screenshots earlier. So the first thing we're going to do is take the mesh for the reticle, which in our case is a flat circle, and add it to the three.js scene. We'd like the reticle to be about half a meter wide, and it's worth noting that the WebXR Device API locks the coordinate system of the real world to the virtual world. What that means is that one unit in WebGL, or in three.js, is one meter in the real world; ten units is ten meters. So in this case, if our mesh is half a unit wide, it will appear half a meter wide in the real world. Next, we want to actually render the reticle over real-world geometry as seen through the camera. But to do that, we need to know where the real-world geometry is. The WebXR Device API includes the capability to do a hit test: firing a ray into the real world and getting intersection points with real-world surfaces.
So for example, take a ray from my eye down to the stage: I get the intersection point where I'm looking on the stage, and also the normal facing up, so I know where the surface is and which direction it's facing. Now, in order to do that, you need a ray, and a good example of what three.js can help you with is its Raycaster, which I don't show here, but it will give you the origin and the direction of the ray you're going to fire into the scene from the camera. So we've already got that here, and what we're going to do is pass it into the WebXR hit-test API. Then we take the return value, which may actually be a list, because there might be more than one surface behind the first hit. We take the first one, the closest, convert the position we got into a three.js matrix, and use that to set the position of the reticle. And at that point, you're done: the reticle is now positioned so that it renders directly over the real-world surface that was detected along that ray. It's worth reiterating that this works because the virtual coordinate system matches the real-world one. Obviously I skipped over a lot of steps here, but the point really is that you can combine frameworks like three.js with WebXR, and it's fairly straightforward if you know 3D programming basics. But some of you might not know 3D programming basics. And the fact is, if you've tried to add a 3D model to your site, you probably already know it's not super easy. 3D models can be pretty complex, both to read and to display. Even in the West Elm example, we saw that there are user-experience considerations: if you're starting from scratch, you have to think about how you want to rotate objects, let people move them around, and so forth. And then there's also responsive design.
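The last couple of steps described above, taking the closest hit result and turning it into a reticle position, can be sketched in plain JavaScript. This is a hedged sketch, not the talk's actual demo code: the assumption, based on the draft Chrome implementation behind the flag, is that each hit result carries a 16-element column-major `hitMatrix`. With three.js you would typically consume it via `Matrix4.fromArray` and `Vector3.setFromMatrixPosition`; here the translation is read out of the matrix directly so the sketch stays dependency-free.

```javascript
// Sketch: take the list of hit results from a WebXR hit test (closest
// first, per the draft) and extract a world-space position for the reticle.
// Each hit is assumed to carry a 16-element column-major `hitMatrix`.
function reticlePositionFromHits(hits) {
  if (!hits || hits.length === 0) {
    return null; // the ray didn't intersect any real-world surface
  }
  const m = hits[0].hitMatrix; // first result is the closest surface
  // In a column-major 4x4 matrix, the translation lives at indices 12..14.
  return { x: m[12], y: m[13], z: m[14] };
}

// Usage sketch with hypothetical values: a surface detected half a meter
// in front of the origin at floor level.
const exampleHits = [{
  hitMatrix: [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, -0.5, 1],
}];
const pos = reticlePositionFromHits(exampleHits);
// pos is { x: 0, y: 0, z: -0.5 }; with three.js you would then call
// reticle.position.set(pos.x, pos.y, pos.z).
```

Because one unit equals one meter, the extracted position can be fed straight into the scene graph with no further scaling.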
If you want this to work on mobile and desktop, even if you're just doing a simple model in a turntable view, you have to know how to handle resizing. Do you need to display a poster image on mobile to avoid downloading the 3D geometry until the user actually wants it? For new technology like augmented reality, ideally you'd be able to progressively enhance, taking advantage of capabilities on different platforms even when they're not available in all browsers. And as WebXR starts moving toward stable, it's one more thing to learn, one more thing to add. So just know that if you've experimented with this before and found it a little bit tricky, you're not alone. To that end, the team has been looking at this problem, and we recently made public an early version of a 3D model viewer web component. This is really, really early, but it already does some things to make life a little bit easier, and the reason we released it early was to get your feedback. To give you some context for this web component, there are three things we're trying to do. First, we want you to be able to add 3D models to your site without having to learn 3D programming or write 3D code. Second, we want it to work well and responsively across browsers and different device form factors, with progressive enhancement to take advantage of capabilities where they're available. And third, as new APIs ship, such as the WebXR Device API, we want the component to start taking advantage of them, so you don't have to keep up with all of the changes coming out. So like I said, it is super early, but we've made some progress, and I want to give you a sense of what the component can do today. Let's run through a few examples. This first one is the simplest use case: a static glTF model.
For those who haven't heard of glTF before, it's a 3D file format, and it's the format this model viewer requires, because it's a format that works across all of the different browsers. Here you can see that adding a couple of attributes brings the model to life: in this case, we set the background color and turn on auto-rotate. With the controls attribute, we can also let the user spin the model around, move it, and look at it from the back or the front. We've also added poster-image capability, so you can delay loading the model and avoid consuming data on mobile, if that's what you want to do. The attributes are also dynamic, so with a little script that switches the poster image back and forth, you can animate it a bit to give the user a sense that it's not just another image, it's actually a 3D model they can click on. It works in a way that's similar to an image tag. The component also handles some forms of responsive design: you can see here that it scales up for desktop and down for mobile, and it manages the staging, lighting, and rendering of the model properly. It can also manage multiple instances on the same page, taking care of the WebGL contexts, and it uses Intersection Observer to make sure it's not burning battery and GPU when you can't actually see the model. Finally, the team is experimenting right now with some of the more progressive-enhancement capabilities; in this case, they're experimenting with incorporating the WebXR API, so you can add more attributes to turn on AR across different devices. Again, it is really early. The team is still working on more user-interface and responsive-design features, to make it as easy as possible to add a 3D model to a web page.
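Putting those attributes together, a usage sketch of the early component might look like the markup below. This is an illustrative assumption, not documentation: the attribute names reflect the early release described here and may have changed since, and the file names are placeholders.

```html
<!-- Usage sketch of the early 3D model viewer web component.
     Attribute names follow the early release described in the talk and
     may have changed since; src and poster file names are placeholders. -->
<model-viewer src="chair.glb"
              poster="chair-poster.png"
              background-color="#70BCD1"
              auto-rotate
              controls>
</model-viewer>
```

The poster image displays until the model loads (or until the user opts in), while auto-rotate and controls provide the "this is 3D, not a flat image" affordances discussed above.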
Looking forward, there's a lot we can do on realism, augmented-reality use cases, and interactivity. The whole reason we made this public is that we'd love you to try it out and give us your feedback on GitHub. So if this is something you're interested in, please do go to the GitHub repository, take a look, try it out, and let us know what you think. And with that, we're done. Today we covered a little bit about the WebXR Device API, we talked about three.js, and we touched briefly on this new early release of the model viewer web component. If you're interested in more, this is the slide to take a picture of; the links are all on the screen. If you're watching this at home, you can rewind, and I imagine you can check it out on YouTube later as well. And with that, thank you very much. We thank you for your time.