It's pretty rare, I think, that a tweet encapsulates a chunk of web history so nicely. But this one does. I found this while I was prepping for this presentation. This is Vladimir Vukićević, who helped to create WebGL at Mozilla, saying, on April 6, 2014, I've been hacking with this new Oculus virtual reality thingy. And then Brandon Jones of Google says, OK, interesting, maybe I can get that working in Chrome, too. And that tweet triggered three and a half years now of what is called WebVR, virtual reality on the web. And over those three and a half years, we've seen this explosion of developers using a new breed of tools to create experiences that don't look like the classic web, but they are the web. All built on WebGL, on frameworks such as three.js, running in the browser with all the things that we love about the web: loading instantly, forkable from a GitHub repo, something anyone can publish. And we've seen in the last three and a half years all these new tools emerge: tools like A-Frame, three.js, which we already mentioned, Babylon.js adding WebVR support, and then WYSIWYG tools such as Vizor, PlayCanvas, and Hologram. There's a whole new generation of tools making it fun to create virtual reality content on the web. And we now stand on the cusp of 2018, and as I was thinking about what to talk about today, I really wanted to hammer home this message: it's been three and a half years of gestation, but this is the year that WebVR goes mainstream. And this is what I mean. Virtual reality is a pretty fragmented landscape, but the one constant across all of it will be WebVR. Running from Cardboard to Daydream to Gear VR, Oculus, Vive, and Windows Mixed Reality, WebVR will run everywhere, and in some cases on multiple browsers.
And not just desktop and mobile browsers, but a new generation of browsers designed to be used inside of virtual reality, so you can put on a headset and view any website ever made, including new 360 experiences, without taking your headset off. So what we want to do today is tell you a little bit about what's coming next in 2018. First we're going to hear from Steve, who's going to talk to us about what it's like to make WebVR content today and to build a business around that. And then from Brandon Jones, who's working to define and implement the next version of the WebVR API. So first up, Steve.

Hi, everyone. I'm Steve Thompson. I'm the founder and CEO of Powster. We're a creative studio based in Los Angeles and London. We work with the movie studios creating ticketing destinations and showtimes, and we also have a labs component which has lots of experimental stuff going on. WebVR started out in labs, and we're really excited to turn it into a viable product that we actually sell to the industry. Cool. So here's an example, a video of what we're up to. We create these immersive experiences designed for the web, where you can watch the trailer in 3D, which is a unique case for WebVR. You can also see the showtimes and tickets directly in front of you in 3D space. And the cinemas actually geolocate and space themselves out really nicely based on how far away they are. So it's quite a compelling experience, quite fun, and it's quite immersive. So we build these with the studios, and the reason why is because we get amazing data back, we get amazing information on the consumer. But more importantly, these actually perform more successfully than our general ticketing pages. So users are more engaged. They're checking out more cinemas. They're looking at more showtimes. And that makes a really great business case. I think wherever you can find a really unique thing that only WebVR can do, that's where you can build a business off of it.
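The distance-based layout Steve describes, where cinemas space themselves out around the viewer based on how far away they really are, boils down to simple math. This is an illustrative sketch only, not Powster's actual code; the function name, input shape, and scaling constants are all assumptions for demonstration.

```javascript
// Illustrative sketch (not Powster's actual code): arrange cinema panels
// around the viewer in 3D space, pushing real-world-farther cinemas
// farther away, clamped to a comfortable viewing range.
function placeCinemas(cinemas, minRadius = 2, maxRadius = 10) {
  const maxDistance = Math.max(...cinemas.map(c => c.distanceKm));
  return cinemas.map((cinema, i) => {
    // Spread panels evenly in a circle around the viewer...
    const angle = (i / cinemas.length) * 2 * Math.PI;
    // ...mapping each cinema's real distance onto a virtual radius.
    const radius = minRadius +
      (cinema.distanceKm / maxDistance) * (maxRadius - minRadius);
    return {
      name: cinema.name,
      x: radius * Math.sin(angle),
      y: 0,
      z: -radius * Math.cos(angle), // -z is "in front" in WebGL conventions
    };
  });
}
```

The positions could then be fed straight into three.js objects representing each cinema panel.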
So we've identified that you can watch the trailer in 3D, which normally you'd have to go to the cinema to do, and only when you're watching a 3D movie. So that's like you've already converted at that point. So it will help the 3D upsell, but it also allows the studio to really sell the movie in an immersive environment, which is pretty nice. And so how did we do this? It's obviously a little bit different to your usual prototyping and building phases, where you're designing on 2D paper, then 2D wireframes, to go onto a 2D website. So we initially had a look at different methods of prototyping WebVR. And sketches and wireframes are actually still really important, and it's amazing for the UX and UI designer to dive into them. Our UX designer, Rob, sketches it all out and does awesome things. But then when he wants to communicate that to other people, how do you jump from those wireframes, those 2D illustrations, and actually convey VR? It's definitely a challenge. And this was working on our website, trailer.com, which displays all the movies and cinemas around you. One of the great ways to show your colleagues how it all works and how it looks is to jump into VR. So actually drawing inside Tilt Brush is an amazing way to communicate it: just jump in, sketch it out, put the headset on, and you can see it in 3D space. It was a bit of an aha moment for us, where we discovered how to actually do it. And it's something that you can add to your toolkit: just jumping in, trying it in real VR, and then taking it to your production afterwards. Nothing beats previewing VR like doing VR, so that's why we've done it. So yeah, that's a really quick intro to what Powster has been up to in WebVR. Hopefully you can identify a unique business case for it in your vertical, and then be able to present it and pitch it to your clients and make business out of it.
And now I'm going to hand over to Brandon, who's going to take you through some coding stuff.

So I'm Brandon Jones, and I'm the spec editor and one of the primary engineers for the WebVR API on Chrome. And I wanted to catch everybody up on where that API is at, where we hope to go, and then talk through some best practices that tie into the type of content that Steve and his team are building. So first off, we've been running WebVR in an origin trial for a little while. And this has proven to be a massively helpful thing for us as the developers of the API, because we've gotten incredible feedback from the community, both individual developers and larger studios, about what's working and what's not. We also get some feedback on the side about where developers are doing things that we'd rather not have them be doing, and we can then design ways to discourage that kind of behavior. And so we've been able to collect all of this information together, and we've been using it to build what we refer to internally as the WebVR 2.0 API. Now eventually, when this does come out, it's not going to be called 2.0, it will be just WebVR, but that's how we disambiguate it internally. And the big focus here is that we're going to try to enshrine some of the best practices that we've found through all of the origin trials that we've been running, bake them into the API, and make it, wherever possible, impossible to do the wrong thing. We're hoping to start the origin trials for that version of the API in early 2018. And while there will be a little bit of overlap to allow developers to transition their titles, or their pages, from one version to the other, we'll be retiring the current version of the API sometime after that. If you want to see what the new API looks like, it's available as an explainer today, and we have a spec in progress.
And the nice thing is that every browser vendor who has shipped a WebVR 1.1 implementation, the current version, has committed to transitioning to the new version at some point. Now, when I talked about enshrining some of the best practices, I wanted to go through a couple of examples of what we mean by that, the first of which being what we call Magic Window. Now, Magic Window is kind of the staple of content like Steve's team was doing, where it allows people who don't have the VR hardware, or simply don't want to use it right now, to still peek into that virtual world and see what kind of content they could see if they bothered to get their headset out or had the hardware available. But in the first version of the API, this was kind of just a side effect. It wasn't really something that we designed for specifically, and as a result, it was kind of easy to get wrong. So in the new version of the API, we're making this a first-class citizen, allowing it to use the same render loop as the normal VR content, and really trying to drive home the fact that you can build the content once and it will work in VR and in Magic Window. And we really want to encourage people to do this. Tying into that, in order to allow us to use a single render loop for both VR content and that Magic Window style content, we're switching over to what we call a prescriptive rendering model. Whereas in the old API you would generally draw a WebGL frame and then tell the browser how you drew it, in the new version of the API the browser tells you what's required for drawing the frame, and then you do it. This sounds more restricting, but in the end it will actually allow the browser to make much more intelligent choices about how your content is shown and automatically increase the number of devices that you're included on.
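The prescriptive model described here can be sketched roughly like this: each frame, the browser hands you a list of views to draw (two for a stereo headset, one for Magic Window), and the same loop covers both. This is an illustrative mock based on the talk, not the actual WebVR 2.0 API surface; every name in it is an assumption.

```javascript
// Sketch of a "prescriptive" render loop: the browser (mocked here as a
// plain `frame` object) prescribes which views to draw, and the app draws
// each one. The same code path serves stereo VR and mono Magic Window.
// Real API names may differ; this is illustrative only.
function drawFrame(frame, drawView) {
  const drawn = [];
  for (const view of frame.views) {
    // Each view carries its own projection/view matrices and viewport,
    // so the app never needs to know whether it is in VR or Magic Window.
    drawView(view.projectionMatrix, view.viewMatrix, view.viewport);
    drawn.push(view.eye);
  }
  return drawn;
}
```

With a real device, a stereo frame would contain a left and a right view; in Magic Window, a single mono view, and the loop above handles both without branching.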
If you have a stereo display that comes out in the future that's not necessarily VR but could still show 3D imagery, we have the potential under this model to automatically upgrade your content to support it. And then finally, one of the things that we heard a lot of feedback about was input. We were previously exposing everything through the Gamepad API, and we found that a lot of developers were reinventing the same wheels around that API to try to make the input a little bit more palatable. One of the most common things that we would see is people doing what we refer to as ray-based input, where, depending on whether you had an actual controller or gaze-based input or something like that, everybody was building a system where you would have a pick ray and then some basic button to actually select the thing at the end of it. So we're taking models like that, which we saw everybody doing, and actually enshrining them into the API, making it something that just works by default. Here's a quick example of how that particular item would look: we have a single select event that comes in, and you'll be able to get some information about the ray that is actually projecting off into the scene and use that to pick geometry out of your scene. And this would work across both high-end and low-end VR hardware, and Magic Window. Now, anytime we change APIs, it's a difficult thing to get people to migrate. So we're doing whatever we can to make that migration easier. There's going to be a period of overlap, as I said before, between the two APIs, so that you can test both versions at the same time and make sure that you're getting content parity as you go across. We're working with Mozilla to develop a polyfill that will allow content to work on devices with no native support for the API; it will use your phone's accelerometer to create a kind of best-effort implementation on devices like iOS.
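The pick-ray handling that a select event would hand you can be sketched as follows: given a ray origin and direction from a controller or gaze, find the nearest object the ray hits. This is an illustrative sketch of the app-side picking logic, not the WebVR API itself; objects are modeled as bounding spheres for simplicity, and all names are assumptions.

```javascript
// Illustrative app-side picking for ray-based input: test a pick ray
// (origin + normalized direction) against bounding spheres and return
// the nearest hit, or null. Not actual WebVR API code.
function pickNearest(origin, direction, objects) {
  let best = null;
  for (const obj of objects) {
    // Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t.
    const oc = obj.center.map((c, i) => origin[i] - c);
    const b = 2 * dot(oc, direction);
    const c = dot(oc, oc) - obj.radius * obj.radius;
    const disc = b * b - 4 * c; // a = 1 since direction is normalized
    if (disc < 0) continue;     // ray misses this sphere
    const t = (-b - Math.sqrt(disc)) / 2;
    if (t >= 0 && (best === null || t < best.t)) best = { t, object: obj };
  }
  return best;
}

function dot(a, b) { return a.reduce((sum, x, i) => sum + x * b[i], 0); }
```

Because the same pick ray comes from a 6DoF controller, a gaze cursor, or a Magic Window touch, this one routine works across high-end and low-end hardware, which is exactly the pattern the new API is enshrining.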
But it will also help map from the old API to the new and vice versa, so you can use it as a method of easily porting over content. And then we're going to be working directly with libraries such as A-Frame and three.js to ensure that they have support right out of the box. Now, there's a couple of best practices that we found that we can't really encode into the API itself, but we do want to encourage developers to keep using. One of those is splash screens. Now, this is something that Chrome is actually going to sort of force people to do, because what we don't want to have happen is for people to say, hey, I want to view this VR content, put on the headset, and then just stare at a black void for multiple seconds. So we're going to have a requirement that says, once you go into VR, you have to show the user something, anything, within about five seconds. And the best way to do that is to have a static splash screen, as shown in this animation here. Now, the splash screen itself doesn't have to be anything fancy. It can just be a single image that you draw one time and then let the system automatically reproject. If you draw the frame once, the head tracking within most VR systems will allow it to stay stable in space as the user looks around. And this gives you a little bit of leeway where the user can actually see something while you're doing heavier-weight loading in the background. Next, we really, really want to encourage people to design your experiences around the limitations of the mobile platform. Certainly, with WebVR, we're going to have access to both the highest of the high end, the Oculus Rifts and HTC Vives of the world, and the low end, with Cardboard and everything else in between.
But if you want that write-once, run-everywhere magic that the web promises, you really do need to focus on developing a great experience for mobile first, and then ensure that you can spiffy it up a little bit for the high end. This seems kind of obvious, but we do see developers who build something on desktop and then despair when they've only left a week at the end to actually optimize for mobile. So we want to encourage people to get away from that. And then finally, we want to really encourage people to think of VR not as the main course for any given web page, but as kind of the dessert: the thing that you really want people to go the extra mile to see, even though most people won't. The fact is that there may be a few million VR devices in the world, but billions of people can still reach your page through standard web browsers, and we don't want to hang a sign on the door that says no VR, no service. So by relying on the Magic Window modality and allowing people to see and interact with these virtual scenes through their normal screen, you get to invite that much larger audience in. And then for the lucky few who actually have the appropriate hardware, you can give them the best experience possible. A few other updates for WebVR as far as the ecosystem goes. One thing that I know a lot of people are going to be excited about is that we are bringing desktop support to Chrome: not the little experimental versions that have been floating around on the web for a while, but actual stable Chrome. That'll be coming in origin-trial form in the early part of 2018, probably roughly coinciding with the 2.0 API being available. And then the other thing is that we're making WebVR experiences accessible on the Daydream home screen through what are called discovery tiles. They just transitioned off screen there, but those are the large tiles, if you're familiar with Daydream.
You can swipe through them, and they highlight new and interesting experiences that are available on the platform. We're enabling developers of WebVR content to highlight their experiences there and link directly into them. A couple of other things I want to highlight really quickly: we also have some great support libraries from Google, such as Songbird, which is a high-quality spatialized audio library. It boils down your spatialized audio into an ambisonic audio stream that takes into account room modeling and all sorts of other cool audio buzzwords. And Draco, which is a 3D geometry compression library that does a spectacular job of shrinking down the amount of data that you actually need to send over to the user's system. I've heard the number 20x smaller meshes being thrown around in relation to that, so it's definitely something worth checking out. Now, VR is great, but we also don't want to ignore the upcoming trends in augmented reality. And so to talk to you about that, I'm going to pass it over to Josh Carpenter again. Thank you.

Because when one thing hits the mainstream, you get bored and start working on the next thing. We've talked mostly about virtual reality, and virtual reality is about transporting you to a different world, feeling like you're 10,000 feet up and getting sweaty palms. So then what's augmented reality about? Well, augmented reality is about bringing the digital into the real world, so you have a seamless blend of the two. And right now, if you've been paying attention to anything in technology, you know that you can talk to anybody and get a different opinion on when we're going to get to walking down the street with a pair of glasses and the inability to distinguish the real from the virtual. But I think it's safe to say a couple of things. As you're walking down that street, sometime in the future, you're going to be seeing a sea of information all around you.
Tons and tons and tons of information, billions of nodes of content. And as you walk down that street, that content is going to come from a ton of sources, tons of very different origins. As you walk by the subway, you want to get the information for the subway just in time, not by pre-installing an application. You just want to get it serendipitously and instantly. And with all those different origins, you want that to be safe and secure, and you want your privacy to be maintained. If you're driving down the street, you don't want someone to put a billboard in the middle of the freeway in front of you and cause an accident, for example. Sounds silly, but I don't think it will be. And lastly, not everyone's going to have these fancy glasses. And this information is important. If it's the subway or the airport or the hospital, that information has to be accessible by anybody who walks through the front doors, whether they have fancy glasses or an old iPhone. Now call me crazy, but this sounds a lot to me like the web. And if you're sitting in this room, I'm going to assume that you probably wish the web had been a little bit stronger on mobile. I mean, we've done a lot of work as a community, all the different browser vendors, everyone who builds tools, since 2007 and before, to make the web amazing on mobile. But having been there, it felt like trench warfare at times. You know, wouldn't it be nice to get ahead of the next big thing? We believe that that is our imperative, if we care about the web and if we think the web is going to be an amazing fit, maybe the fit, for the future of augmented reality computing. So we have to start now. And in August, over the summer, smartphone AR had a moment. We've been waiting for glasses, and it turns out the beachhead was in our pockets. And by this time next year, it seems like we're going to have hundreds of millions of devices in our pockets, small supercomputers, that can do entry-level augmented reality.
So motion tracking, compositing the virtual into the real world, exposing basic features of the world, such as horizontal planes. And when we saw that, we wanted to make sure the web was there from day one. And so in August, Google announced ARCore. ARCore is designed to bring smartphone augmented reality to Android, all those Android devices out there. And the team that I work with inside of Daydream worked to create prototype browsers, built on experimental pre-standards APIs, that expose the power of ARCore, and not just ARCore but also ARKit on iOS, to web developers. So we can begin to explore the use cases and prototypes of this new computing platform, to inform future standards and future APIs. And we released these open source, and you can get them at the URL at the bottom there. We released them with a three.js library, so a lot of the experimental bits are abstracted away from you, and hopefully you can continue to use that three.js library in the future once real standards emerge. And they expose all that stuff I mentioned before: six degrees of freedom tracking of the device in space, positional and rotational; rendering the camera; exposing horizontal planes. And what's really cool, I think, about web AR (I'm a UX engineer by background, so I'm not a software engineer by any means; I kind of hack my way through three.js, but I've done a lot of WebVR now) is that I was able to pick up the web AR tools made by my colleagues and begin creating content the same day, because web AR is built in large part on WebVR, including a lot of the same tools. So these are some of the things we've been creating in-house. The hello world of AR right now seems to be 6DoF graffiti: playing around with 2D-to-VR-to-AR controls, playing around with seamless transitions between states, and also CSS 3D transforms. You don't need to use WebGL, which I'm really excited about.
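Placing virtual content on one of those detected horizontal planes comes down to a ray-plane intersection: cast a ray from the camera and find where it crosses the plane's height. This is an illustrative sketch of the underlying math, not the hit-test API of those prototype browsers; the function name and conventions are assumptions.

```javascript
// Illustrative sketch: intersect a camera ray with a detected horizontal
// plane at height `planeY` (e.g. the floor plane ARCore/ARKit reports),
// returning the 3D point where AR content could be placed. Not the
// actual WebAR hit-test API.
function hitHorizontalPlane(origin, direction, planeY) {
  // Solve origin.y + t * direction.y = planeY for t.
  if (Math.abs(direction[1]) < 1e-9) return null; // ray parallel to plane
  const t = (planeY - origin[1]) / direction[1];
  if (t < 0) return null; // plane is behind the camera
  return [
    origin[0] + t * direction[0],
    planeY,
    origin[2] + t * direction[2],
  ];
}
```

For example, a ray from eye height angled down toward the floor yields the spot where a piece of 6DoF graffiti or a furniture model would be anchored.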
We've also seen the developer community respond to this. Again, taking tools like React VR and putting them into these AR browsers, or our friends at Archilogic, who you may have seen demoing today, exploring what it looks like to purchase furniture by drawing a space, then telling it what style you want and having that space fill up with that furniture. There's just one rub right now, which is that these are experimental browsers, WebARonARCore and WebARonARKit (which is a great name), and they require you to build from source using Xcode. We're not going to see a lot of uptake so long as that's true, so we need to get to real standards. The next step for that is that we're going to publish explainers soon, we being not just Google but Google and the other key browser vendors, talking about what AR is, what the intersection of AR and the web is, and how we can move forward together. I want to call out some really amazing collaboration between the browser vendors to make this happen. It's really, really exciting to me. Having been there for the duration of WebVR, the degree to which web AR is getting traction from the get-go is pretty mind-boggling. It's very exciting. And then second, and this is I think where we all come in: more experiments. Because to make those eventual standards better, we need to build things, try things, and learn from each other. So I would encourage you to start creating and experimenting with the tools that are out there. Hit us up on Twitter. Brandon and I are very available, we love to talk, we love to see what you're making. Let's build the next generation of the web, starting now. So thank you very much.