My name is Andres Cuervo, and the long-winded title of my talk is "JS in the Virtual and Augmented Reality Ecosystem." I named it that because I'm going to be talking about a lot of stuff today. But before I get into it, a little about who I am: I'm an artist, an engineer, a software developer. Right now I'm an AR engineer at a marketing technology company called Movable Ink, where we're building a web AR platform. I won't be talking about that specifically today, but you can come find me afterward if you want to know more. Before I talk about the place that WebXR, VR, and AR have, I want to talk a little bit about the web platform as a whole. How many of you have heard the phrase "the web platform" before? Great, that's about half. It's a pretty amorphous term. The most reliable source on the internet, Wikipedia, says the web platform is a collection of technologies developed as open standards by the W3C and other standards bodies like the OpenJS Foundation, Ecma, and so on. That's one way of looking at it. Another way: if you go to platform.html5.org, you see a giant, rather overwhelming checklist of all the various specs that can possibly fit into a browser. But the important thing to remember about this concept of the web as a platform is that all those checkboxes are meant to enable users and developers to create new things or have new experiences. So when we talk about introducing WebXR to the web platform, the low-level implementation is this thing called the WebXR Device API, but what we actually want to enable are things like what's on the slide right here. If that doesn't quite make sense to you yet, I'll go through more examples for the rest of this talk.
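To make that concrete: the WebXR Device API surfaces in the browser as navigator.xr. Here's a minimal sketch of requesting a session, assuming a browser that implements the API. The mode strings ('immersive-ar', 'immersive-vr', 'inline') are the ones the spec defines; the preference order in pickSessionMode is just an illustrative choice, not anything the API mandates.

```javascript
// Pure helper: given which modes the device supports, prefer AR, then VR,
// then fall back to rendering inline on the normal 2D page.
function pickSessionMode(supported) {
  if (supported.includes('immersive-ar')) return 'immersive-ar';
  if (supported.includes('immersive-vr')) return 'immersive-vr';
  return 'inline';
}

async function startXR() {
  // Feature-detect the WebXR Device API before touching it.
  if (typeof navigator === 'undefined' || !navigator.xr) {
    console.log('WebXR Device API not available in this browser');
    return null;
  }
  const supported = [];
  for (const mode of ['immersive-ar', 'immersive-vr']) {
    if (await navigator.xr.isSessionSupported(mode)) supported.push(mode);
  }
  // Note: requestSession must be called from a user gesture (e.g. a click
  // handler), which is why sites show an "Enter VR" / "Enter AR" button.
  return navigator.xr.requestSession(pickSessionMode(supported));
}
```

From the returned XRSession you'd then set up a render loop, but this is the whole entry point the rest of the tools in this talk build on.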
But in general, the breadth of XR is meant to cover everything from augmented reality to virtual reality to the normal 2D web we've been programming for decades now. At its most basic level, something you can already do today with the Canvas API or Three.js is embed a 3D scene in a page. When we talk about WebXR in an AR context, we mean something like grabbing the camera, telling you, the developer, where the floor is, handing you back a mesh of geometry, and letting you place things (here a green cube, but really anything) into the real world. When we talk about VR, it's basically all of that minus the camera and the real world. And while those look like really disparate use cases, under the hood they can be unified, and that's what the WebXR API is trying to do. The WebXR API is relatively recent, so there are a bunch of tools to get started with this work. Let me go over what the state of the art has traditionally been. The highest-fidelity experience, at least for prototyping with web AR over the last couple of years, has been using custom browsers. Because the WebXR Device API hasn't shipped in most browsers, and it wasn't even standardized until February of this year, the easiest way to start hacking around with those APIs was actually to use a few different custom browsers. One pair from Google is called WebARonARKit and WebARonARCore; the names don't really roll off the tongue. Basically, all they're doing is taking the proprietary AR APIs, ARKit on iOS and ARCore on Android, and fusing them to an instance of a web page. So this isn't Safari, and it isn't Chrome, even though what's on screen here looks like Safari on iOS.
But the point is that this isn't a shipped browser: the link goes to a GitHub page, and you would have to download it, manually build a custom app in Xcode, and then get that onto your phone in order to get this running. It's really interesting, though, because you're able to actually use WebGL and JavaScript, and it has access to things like plane detection and highly accurate device orientation. The other option, which is a little easier to get started with, is the WebXR Viewer, which is basically the same thing but provided by Mozilla. They actually put it on the App Store, because they realized that not everyone who has an iPhone also has a Mac, and not everyone is comfortable building a custom iOS app randomly off of GitHub. So they provide a similar thing. Those are two custom browsers. For a long time VR was the same way: you would have to download a custom fork of Google Chrome or Firefox in order to run WebVR before it was standardized. But now we have a bunch more options, and the rest of the tools here can actually run in regular browsers. The first, and in my mind the most common, is Three.js. For a long time Three.js was the default way to do any 3D inside of a normal 2D web page. All of these examples are actually the WebGL context running on a VR device: you would be in your VR headset, navigate to the Three.js examples website, click a button that says "Enter VR," and that would automatically move you into one of these experiences. That's where we're at right now. You can do really basic things, but you can do a lot with those basics, like position and hand orientation and so on. The other option, which is actually built on top of Three.js, is a library called A-Frame.
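That "Enter VR" flow comes down to a few lines of Three.js. This is a sketch, not the exact code from the Three.js examples site: renderer.xr and the VRButton helper are real parts of Three.js, while setUpVRScene and aspectFor are just names I'm using here.

```javascript
// Opt a Three.js app into WebXR: enable the renderer's XR support, add the
// stock "Enter VR" button, and hand Three the render loop.
async function setUpVRScene(canvas) {
  const THREE = await import('three');
  const { VRButton } = await import('three/examples/jsm/webxr/VRButton.js');

  const renderer = new THREE.WebGLRenderer({ canvas, antialias: true });
  renderer.xr.enabled = true; // let the renderer drive an XR session
  document.body.appendChild(VRButton.createButton(renderer)); // "Enter VR"

  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(70, aspectFor(canvas), 0.1, 100);

  // In XR you use setAnimationLoop instead of calling requestAnimationFrame
  // yourself, so the headset can control frame timing.
  renderer.setAnimationLoop(() => renderer.render(scene, camera));
  return { renderer, scene, camera };
}

// Pure helper: aspect ratio with a sane fallback for a zero-height canvas.
function aspectFor(canvas) {
  return canvas.height > 0 ? canvas.width / canvas.height : 1;
}
```

The nice part is that the same scene renders on a flat page until the user clicks the button, at which point the renderer swaps over to the headset.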
And whereas Three.js itself is a proper JavaScript library, A-Frame has a completely different goal. A-Frame was designed explicitly for people who only have familiarity with HTML and CSS, and who are possibly intimidated by, or just don't want to deal with, WebGL or disparate 3D contexts. To understand A-Frame a little, this slide shows all of the code for the scene that runs on the next slide. Here you can see a normal body tag, and everything inside of it is A-Frame code. It all looks like HTML. In fact, this library precedes the Web Components spec, but it was modeled on Web Components, so everything has a dash in it and everything starts with a-. a-scene is the top level, and that gives you your 3D environment. Anything you put inside of it that A-Frame recognizes, it will create for you in a full 3D world. To see what that actually looks like: this is A-Frame, and right now I'm dragging around with my cursor, and I can use either my arrow keys or my WASD keys to move really naturally in here. If I were on a mobile device, the button at the bottom right would let me enter a Google Cardboard view. If I were hooked up to a VR machine, it would give me the VR experience we saw earlier. And just to prove that this is actually what's running, you can even edit the scene in real time from the inspector; these are just normal HTML elements that are bound to the WebGL context. So that's a really easy way to get started, and it handles almost all the complexity for you. One other popular library that's been around for a very long time is AR.js. When I showed you WebARonARCore and WebARonARKit, those were taking advantage of what we call markerless AR, basically true AR: you can point it at anything and it will give you back a depth estimation. This is marker-based AR.
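Since A-Frame elements are just DOM nodes, you can also build that kind of scene from JavaScript, and AR.js plugs into the same markup with an a-marker element. A sketch, assuming a page that has already loaded the A-Frame and AR.js scripts; buildMarkerScene and toVec3String are just illustrative helper names.

```javascript
// Pure helper: A-Frame position/rotation attributes are "x y z" strings.
function toVec3String({ x = 0, y = 0, z = 0 }) {
  return `${x} ${y} ${z}`;
}

function buildMarkerScene() {
  const scene = document.createElement('a-scene');
  scene.setAttribute('embedded', '');
  scene.setAttribute('arjs', ''); // tell AR.js to drive this scene

  // Whatever goes inside <a-marker> is rendered on top of the marker.
  const marker = document.createElement('a-marker');
  marker.setAttribute('preset', 'hiro'); // AR.js's default "Hiro" marker

  const box = document.createElement('a-box');
  box.setAttribute('color', '#4CC3D9');
  box.setAttribute('position', toVec3String({ y: 0.5 }));
  marker.appendChild(box);
  scene.appendChild(marker);

  const camera = document.createElement('a-entity');
  camera.setAttribute('camera', '');
  scene.appendChild(camera);

  document.body.appendChild(scene); // A-Frame upgrades these into a 3D world
  return scene;
}
```

In a plain A-Frame page you'd drop the arjs and a-marker parts and just append boxes, spheres, and an a-sky directly to the scene.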
So the way this works is that you basically have a computer vision system looking for a specific marker. In this case, it's looking for the word "Hiro" inside of a big black hollow square. It then uses Three.js to superimpose a scene on top of the marker, and because it can tell how the marker is distorted, it distorts the Three.js scene above it to match. This is a really easy way to prototype image-detection interactions and more product-focused, vision-based AR experiences, and it runs in a normal browser. One other tool is model-viewer, which is itself a proper web component that Google provides. It's relatively new; I think it came out, or at least was open sourced, at the end of last year. Again, it's just HTML with a script tag import. On the right here is what you see in a desktop view. But if you're on an Android phone that provides an API called Scene Viewer, or on an iPhone, which provides a thing called Quick Look, model-viewer knows about it and knows how to call out to it. So it's similar to A-Frame, but for a very specific use case: if you pass it a 3D model, it will let you preview it in AR, which is what you see over on the very right; that's a direct screen grab from my phone. The second-to-last tool is TensorFlow.js. There were a few TensorFlow.js people at this conference; they gave a talk earlier today. There's a lot of low-level stuff you can do with machine learning on the web, but one thing they provide is a higher-level, ready-made model trained on a bunch of different samples of bodies. What it actually gives you back is not just a skeleton but, as you can see over here on the right, a way to map out the body: the purple is the head, the light green is the torso, and so on.
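A sketch of that body-segmentation flow, assuming TensorFlow.js's @tensorflow-models/body-pix package, whose documented API returns one body-part id per pixel (with -1 meaning background); segmentVideoFrame and countParts are just illustrative helper names.

```javascript
// Pure helper: tally how many pixels belong to each body-part id.
function countParts(partIds) {
  const counts = new Map();
  for (const id of partIds) {
    counts.set(id, (counts.get(id) || 0) + 1);
  }
  return counts;
}

async function segmentVideoFrame(videoEl) {
  const bodyPix = await import('@tensorflow-models/body-pix');
  const net = await bodyPix.load(); // downloads the pretrained model
  const segmentation = await net.segmentPersonParts(videoEl);
  // segmentation.data is one part id per pixel; from here you could drive a
  // WebGL mask over the person, attach 3D models to body parts, and so on.
  return countParts(segmentation.data);
}
```

The per-pixel output is exactly what makes the "mask over a person" style of AR possible in a plain browser tab.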
You could theoretically build a whole mask over these people in WebGL, add 3D models onto them, and do a bunch of interesting things that all begin to sound exactly like AR, the kind of industrial AR that five years ago would have been very hard to do. And speaking of industrial AR, there's one other tool set that is web adjacent. There's an iOS prototyping tool called Torch, which is designed to get you started with AR very quickly; it's a no-code tool. But just recently, like last month, it introduced an export option through a partnership with a company called 8th Wall, which is a web AR company. Under the hood, they take all of your iOS prototyping work, transfer it over to 8th Wall, and then you can view it in a normal browser; you can see here it's running in iPad Safari. So yes, that was a very long laundry list of tools. These slides are already up, so if you're curious about digging into any of them, I've linked to all the relevant resources. The last thing I want to talk about is the future that all of this work is heading toward. Just on Tuesday, a couple of things happened. Magic Leap, one of the biggest AR headset companies, produced a series of web tutorials and released them both as an open source blog post and as projects on a platform called Glitch. Glitch is a really easy way to share and remix code; it's similar to CodePen, if you've ever used that, but it gives you a bit more: a full Node.js environment. Anyway, seeing big companies like Magic Leap, or Microsoft with HoloLens, release web-based tooling is really interesting, and it means there is industry support for all of this.
Something that is very nascent is a website called immersiveweb.dev, which is maintained by a few folks in the W3C Immersive Web group. It gives you a nice starting place for figuring out what all the requirements are: how do you do VR and AR on phones, how do you do VR and AR on desktops, et cetera. At the very bottom of the page, it provides examples with Babylon.js, A-Frame, Three.js, a bunch of the tools I mentioned today. And the final really exciting thing that happened on Tuesday was that Chrome became the first browser to ship the WebXR API not behind a feature flag. You don't have to turn anything on; as of Chrome 79, I'm pretty sure, it just works. And because a lot of VR browsers are actually built on top of Chromium, this means that the next time they update, they are going to get these features for free. This is just the first step, of course; next we need every browser to implement it. But everyone is on board, and the standard has actually been drafted. I won't say this work is nearing completion, but it's a very big milestone. And the last thing is: I know I threw a lot of random resources at you, and there's a lot of overhead here. But because there is so much work to be done, and so much work actively being done, the last thing I want to say about the future of XR is that it probably starts with all of you. It's a small space right now, but that means there's a lot of low-hanging fruit for open source contributors, for people to do the work of getting JavaScript ready for this immersive future with all the tools I talked about. This QR code goes to the link for these slides. So yeah, that's all I have.