Hi, everyone. I'm going to start with a little bit about web mixed reality, who I am, and why this session is needed — why you might want to hang around. Then I'll go into the technology: what makes up web mixed reality and how you actually build an application, so a little bit of live-coding, nothing too much, just enough to get you going if you want to build something right now, within this conference. Then we'll get to the considerations we need to make to test these applications: what the present challenges are, and why the present methodologies — almost everything we have been talking about these last two days — don't really scale for this kind of application. And then we'll go to the still-open research problems.

There is one consideration I want to make before the session. I'm mostly going to talk about web mixed reality applications, so everything I discuss will be running in your browser: augmented and virtual reality applications you can build in the browser, and how you can test them. Some of the challenges I'm going to talk about can be, and are, applicable even to native VR or AR applications, but the specific testing methodology and tools I cover today might not scale to the same extent for them.

So with that, let me share my slides. Today we're going to talk about mixed reality applications. My name is Rabimba Karanjai. I have worked with the Mozilla mixed reality team, I'm a Mozilla Tech Speaker, and I'm a Google Developer Expert for web technologies. You can tweet me any queries related to these slides, or any query you might have about mixed reality — not only testing, anything — or you can email me later.

So who am I? This is me in a nutshell. I'm a PhD student at Rice University. I have worked with Mozilla as part of my research, as a volunteer contributor, and as a research assistant — mostly with the mixed reality team, but in the past also with other teams like the Deep Speech team and the Firefox OS team. I have also worked with IBM Research, both at the T.J. Watson Research Center and at Almaden, on different problems.

Starting with a little bit of history: what are augmented and virtual reality, and how did we get to the point of building them? The concept is really not new. The picture you see here is Ivan Sutherland in the 1960s, in his lab, where he first envisioned what VR could look like — a whole-room contraption where you can walk around and it tries to mimic the features of the room in VR.

Fast-forward to today, and we have two components in mixed reality applications. It's not only what you see; it's also the inputs you get. So this is the input component: outside-world inputs. Those can be wireless input devices, your GPS antenna — anything that gives you awareness of the world around you, anything that can map the world around you. And not only mapping: any kind of sensory input from around you can, and probably will, somehow end up implemented inside a VR or AR scene, so you need to consider that as well.

Now comes the other part of VR and AR: consumption. How do you consume this content? There is no single answer. In this picture you see a few different modes of what people have envisioned VR or AR might be.
We have something like Half-Life or Second Life — what people thought VR would be, animated models. On the upper right you have mobile devices, which for now are the most prevalent and popular platform for experiencing augmented reality. On the bottom right you have Microsoft HoloLens — what they thought the HoloLens would be and how it would interact with the real world. And on the left you have what virtual reality social interaction might look like. All of these interactions have their own limitations and considerations we need to take into account when we want to test them.

That brings us to the devices where we consume it. This slide is a nice illustration of how difficult and varied the whole scenario is. At the upper left you have something like Google Cardboard, which is literally about $3 — around 180 rupees, give or take — you can just download a cardboard cutout from Google for free and make it at home. At the bottom right you have something like the HTC Vive, which costs around $600 or more. And in between you have a lot of other devices: Google Daydream, Samsung Gear VR, Oculus Quest, PlayStation VR. These are all at different price points, but more importantly, they give you very different experiences. The experience you can expect from them differs, and the applications built for them have taken that into account, so they differ too. So when you're building an application, or somebody has an application you want to test, you need to consider the devices it will run on and build your benchmarking suite based on that: how you want to test it and which parameters you want to test it on.

So, a little bit about WebVR and why the web. Before I go to this slide — how many of you have tried some kind of virtual reality experience before, maybe at home or in the office, on any of these six devices, maybe Google Cardboard or something? I think a thumbs-up will be a nice indicator if you have. So out of 44, almost 35 of you — that's a very good number. That's awesome.

That brings me back to this slide. When you tried it, you probably tried a game or something like it, which you needed to install on the device. If you used one of the previous-generation Oculus headsets, it had to be tethered to a very high-end desktop which ran the VR experience, and then you consumed it through the Oculus. Same with the Vive and room-scale virtual reality — that is what you have right now, unless you own one of the new Oculus Quests, which are probably still hard to come by here.

In all of those, the single point of friction is that you have to download the game or the experience. Whenever I want to show you something, you go to that platform's store, download it, wait for the download, install it, and then watch it. So first, it has to be platform-specific — it has to run on Windows or Mac or Linux. Right now most Oculus devices are Windows-specific, so your app is probably published in the Steam store or somewhere like that. So there is platform dependency. Then you have to download it, and it's not instant. This is where WebVR comes in.
When I want to build an app, I don't need to go to Unity or anywhere like that. I'm just a web developer; I want to build something quickly, maybe for a client. I'll show here that with just a few lines of code I can put a VR experience — even a mixed reality experience — right in the browser, and it will run on any device with a compatible browser, which includes all of your mobile phones. So every demo you see today, you can experience right now, today. You can write it in any online editor — VS Code, or even CodePen — and experience it immediately. It's instant and it's open: you can just toss the link to somebody, they can experience it, and they don't have to wait. It's instantly connected. That is the prime USP of using WebVR and WebXR technology.

Now, where will this work — on which devices and in which browsers? As you can see from this slide, it works in almost all of them: Firefox, Edge, Chromium, Chrome for Android, the Oculus browser, Samsung Internet — it works everywhere. One glaring omission is Safari, including Safari on iOS. Apple has been slow to pick up most leading-edge technologies, and the same goes for WebVR. The good news is that Apple has committed to implementing it in Safari, so we should see it there in the near future. Until then, you can use the WebVR polyfill to get much the same experience in Safari. There will be a performance impact, but it will still work.

With that, a little bit about the frameworks people normally use to build these applications. There are a lot of options, but I'm going to talk about the easiest one: A-Frame. A-Frame is a JavaScript framework that Mozilla built on top of Three.js. Three.js is a WebGL library some of you have probably used. How many of you have used Three.js or A-Frame in the past? Yeah, not many.

So let me show you the hello world of WebVR applications. These are literally the five or six lines of code you need to build a VR scene. Let me switch away from the slides for a moment and show you in real time; we'll come back. This is a sample CodePen project, and if you look at the code, there is no CSS and no JavaScript — it is literally what I showed you in the slide. You just declare a scene, called a-scene, and you have five components here. Those five components build this scene, and this is a live VR scene. If you have a VR-compatible browser, you can just click on this icon and you will be inside it. Right now, if I plug an Oculus or HTC Vive into my laptop, this icon will show that there is support — or if you open this link on mobile, it will have support and you can be inside the VR scene. This is a live scene; I can walk around it and it works.

Let me show how easy it is to change something. If I change the radius here to something like this, you see the sphere got bigger, and if I change the cylinder's height to something much longer — as you can see, this updates instantly. This is just an HTML page with JavaScript. The browser APIs expose the devices to your VR application, so you can write this without worrying about anything else, it will run as a VR application in the browser, and people will be able to experience it. Today we're going to talk about how you can test these in browsers, and against which parameters.
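For reference, the demo scene is essentially the canonical A-Frame hello world. Pin whichever A-Frame release you are targeting in the script URL; the radius and height attributes are the ones I edited live:

```html
<html>
  <head>
    <!-- A-Frame from its CDN; pin whichever release you target -->
    <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-cylinder position="1 0.75 -3" radius="0.5" height="1.5" color="#FFC65D"></a-cylinder>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Each tag is an entity the framework turns into a 3D object, which is why editing an attribute like radius re-renders the scene immediately.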
Coming back to the slides: that is the hello world of building a VR application in your browser, and it works right out of the box. But that was a very simple example. Let's see what else people are building, so we know what kinds of experiences we need to test for.

Since this is a web application, it works very well with all the web libraries out there. You can use D3, React, Angular, or your favorite library with it. You can even have TensorFlow.js running in the background and get machine learning goodies thrown in — we'll see one example, and the testing challenges that come with it.

Some examples that have already been built. This is a 3D painting application: you go inside a VR scene and paint something in three dimensions. Once you're done, you can send the URL to somebody and they can go in and see it in real time, in three dimensions. What we needed to test here is responsiveness. One of the primary things to be aware of when building a VR application is that it has to sustain at least 90 frames per second; otherwise your audience will start feeling nauseous. Any VR app you test has to incorporate that element. Beyond that, you have to test responsiveness — and how do you quantify responsiveness? We'll talk about that.

The other thing people have been building is 360-degree video inside a VR scene. This is a scene built by Amnesty International: they took a 360-degree video of a bomb exploding and a 360-degree panoramic image of the aftermath. The effect is that your audience is actually inside the scene and gets to experience all of it. How do you ensure the image is shown exactly the way you want, with no portions cut off or wrongly stitched, in your VR application? (A sketch of how such a scene is declared follows after this section.)

Then comes visualization. Since I showed D3: you can build something like this to showcase a lot of data in three dimensions, where people can go in and play with it. This lets you present far more data than you could fit in a 2D scene. These are all things you need to test for.

Mixed reality extends this support beyond VR. The other things that come in with mixed reality are computer vision and geospatial awareness. And one thing you should keep is the webby approach — by which I mean you have to be privacy-aware. Sometimes you'll have the requirement that the app runs in a corporate environment, and even though it's a web application, the data it collects must not leave the premises. You have to test for all of that — is it privacy-aware? — not only the UI and UX portions.

And this is a particularly hard example to test; we'll talk about it. What it does is guess your hand movement: whatever you draw in the VR scene, it tries to guess what object it is, queries a database — in this case Google Poly — and brings the relevant models into your scene.
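Back to the 360-degree case for a moment: here is a minimal sketch of how such a scene is usually declared in A-Frame. The asset file names are placeholders, not the actual Amnesty International assets:

```html
<a-scene>
  <a-assets>
    <!-- Placeholder media paths; swap in your own equirectangular assets -->
    <img id="aftermath" src="aftermath-panorama.jpg">
    <video id="explosion" src="explosion-360.mp4" loop muted></video>
  </a-assets>

  <!-- A 360-degree still image wrapped around the viewer -->
  <a-sky src="#aftermath"></a-sky>

  <!-- Or, for the video portion, a videosphere instead:
       <a-videosphere src="#explosion"></a-videosphere> -->
</a-scene>
```

The stitching question then becomes: does the equirectangular texture map onto the sphere without seams or cropped regions on every device? That is exactly the kind of check that pixel-comparing rendered frames against a reference image — which we'll see later — can automate.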
For these kinds of applications, you need to test whether what is being drawn is actually what gets recognized. You need visibility into the back end — what is going on there — and into the scene itself, to check that what is drawn is actually realized there.

So, what do we need to test? We need some automated way to execute any application deterministically, without modification. Every time I test an application, I need to be sure my testing framework isn't modifying anything, and that I can reproduce the result perfectly every run.

I need to compare performance and rendered frames. When I first enter a scene, it has a small number of objects — by objects I mean 3D models. Next, maybe you walk around a castle, and that castle is a very big 3D model, and your performance will vary with it. We need a way to compare performance: how much the rendered frame rate is affected, and by which objects.

It has to be multi-platform. Testing only on our desktop won't cut it, because your audience is not going to experience this on a desktop. We need a way to connect multiple headsets and test on them — preferably simultaneously; that makes our lives easier. Not only that, we need cross-browser testing. This breaks the model of what we used to do with Selenium or Puppeteer — spawning different instances and testing in them — because now it's not just testing in a browser on a VM or a machine; it has to be on those specific devices. So we need a way to support that, and ideally a command-line tool that does it, which we can plug into our existing, stable frameworks and automate. And of course, a nice way to present the results.

With that, what we have is something like this. This is the tool we came up with. You load the WebXR samples and it shows you the frames and the other metrics. If I want to check devices, I can see how many are actually connected to my machine — in this example I have a Quest, a Pico, and one other device — and which browsers each one has. Here it shows which browser is running on which device; right now none of the devices is running a browser. Then I tell it to run this test on every browser we have, and we get this output: every device connected to my laptop runs the same test simultaneously, and the tool logs the output.

When I want the summary, I can group the results by device and see how they performed: how much they differ in frame rate, in stutter events, in render time, in loading time — there are a bunch of different metrics we can test. If you want to visualize it, the tool produces a nice HTML report that shows, among other things, the frames-per-second rate. As you can see, some of these devices — for example, the Quest — had a pretty bad frame rate, while the Pico's is good enough, but not always. And you can compare across devices, and so on.
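To make "frame rate" and "stutter events" concrete, here is a toy in-page version of the kind of metric such a harness collects. This is only an illustration of the idea, not the tool's actual instrumentation; the 90 fps budget comes from the comfort threshold mentioned earlier:

```html
<script>
  // Sample frame timing for five seconds, then report the average fps
  // and the number of "stutter" frames that blew the ~11 ms budget
  // a 90 fps target allows.
  const BUDGET_MS = 1000 / 90;
  const start = performance.now();
  let frames = 0, stutters = 0, last = start;

  function tick(now) {
    frames++;
    if (now - last > BUDGET_MS * 1.5) stutters++; // frame ran long
    last = now;
    if (now - start < 5000) {
      requestAnimationFrame(tick);
    } else {
      const fps = frames / ((now - start) / 1000);
      console.log(`avg fps: ${fps.toFixed(1)}, stutter frames: ${stutters}`);
    }
  }
  requestAnimationFrame(tick);
</script>
```

A real harness records the full per-frame timing series rather than a single average, which is what lets you group and compare devices afterwards.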
Another capability: we can record how a session interacts and replay that recorded session later, so you can record every session, save reference images, and play with them. Once you're done, you can test whether two renders literally match each other — this uses Three.js to do the comparison — and it tells you whether it passes, whether the frames are exactly identical or not. These things are very hard, almost impossible, to do predictably by eye, but with automated tools they are easy.

Keystrokes are something else to keep in mind. You need a way to verify that your keystrokes are actually being passed through to your VR scene, and to catch them predictably, confirming the scene receives the keystrokes you meant to send. You also need to cover how people interact with the mouse and how mouse visibility works. For example, this first demo just recorded the mouse, and when I run it, it automatically plays that back. We can define the frame rate at which to replay it. In the mouse-movement demo, the real mouse is on the left and the replay of the movement is on the right. (A rough sketch of the record-and-replay idea follows below.)

One thing you come to realize is that we need to include specific parameters when launching browser activities using ADB, and we need to disable gestures. A major issue we had is that depending on the placement of your controllers when running a demo, they could be very close to the camera. That affects the rendering, and also the raycasting of objects, which in turn affects overall performance. To fix this, the tool hooks into the WebXR API to return a specific controller pose. What this scene shows is that even if I move the real controller, the tool pins it to a specific position and keeps it fixed for your testing purposes. You can programmatically make the controllers disappear, move them around programmatically, and also record their movements.

A similar challenge comes with the headset, because the headset is what feeds the scene your head movement and where you're looking. Here the tool implements a custom API and injects it, so that instead of a real headset the application gets an emulated response you can control programmatically. That gives you the ability to write your own custom testing tools without somebody having to physically wear the headset and go through all those motions. Because this injects at the browser API level, it should be agnostic to most of the devices out there.

Another thing the tool can do is output all this information for later use: you have a kind of database where the results land, so you can slice and dice the data afterwards if you want. That is roughly what this tool is capable of; I'll talk about how you can get access to it. You can send the results to your team's server — for this one they go to the Mixed Reality server, where everybody has access. There you can select a specific instance, which gives you control over what to run. Here it just loads the example, and we run the instance with a sample test.
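Coming back to record-and-replay: here is a rough sketch of the idea for mouse input, assuming the scene itself is deterministic. Note that events re-dispatched from script are flagged untrusted by the browser, which is precisely why the real tool injects input at the browser and ADB level instead:

```html
<script>
  // Record mousemove samples with their timing relative to the first event.
  const recording = [];
  let t0 = null;

  document.addEventListener('mousemove', (e) => {
    if (t0 === null) t0 = performance.now();
    recording.push({ t: performance.now() - t0, x: e.clientX, y: e.clientY });
  });

  // Replay the samples against a target element with the same relative timing.
  function replay(target) {
    for (const sample of recording) {
      setTimeout(() => {
        target.dispatchEvent(new MouseEvent('mousemove', {
          clientX: sample.x,
          clientY: sample.y,
          bubbles: true,
        }));
      }, sample.t);
    }
  }
  // e.g. replay(document.querySelector('a-scene'));
</script>
```

The same record-a-timestamped-stream, replay-it-later pattern extends to keystrokes and, with the injected WebXR hooks, to controller and headset poses.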
From the database, the tool takes a recorded test and runs the exact same replay on different devices. You go to the database, pick a recorded test, and run it on all of those instances. Now you can see how much time it took and compare it with previous results. The test results are stored in the database linked to the specific devices, if you have multiple devices to test.

So what did we accomplish with this? This is a very early look at the kind of testing tools you might need. When I started working as a research assistant back in 2017, we didn't even have a good way to benchmark our own applications; mostly it was done with internal extension hacks and things like that. Eventually, this is the kind of tool we will probably all need.

This tool is not conceptually new. The concept comes from a much older tool — almost three years older — by the Mozilla Games team, which doesn't exist anymore. That tool is emunittest; if you search for Mozilla Games and emunittest, you'll find its code. Our tool takes a lot of concepts from it and adapts them to the VR domain. The Mozilla Games tool is more generic — it applies to any kind of 3D application, not only VR.

What this ensures is that the harness starts the application as a process and injects our custom code into the application you want to run. That makes us device-agnostic and also platform-agnostic: you don't need to worry about which operating system, platform, or device you're testing on. So, theoretically, this should test predictably on any device — even ones yet to ship — that supports the standard WebXR API.

We ensure determinism. The code is injected into every test hook using API calls, so you get requestAnimationFrame, performance, Math.random, Date.now, and most of the WebXR API goodies baked in. (A toy sketch of this idea follows below.) For example, you saw the headset position being locked and the pose being controlled programmatically. This lets you take cases or scenarios from your client, your developers, or your team — the things you want to test for performance and responsiveness — script them inside the application, and automate those scenarios.

Using the database, you can also run this periodically and integrate it into your continuous integration server, so you have a deterministic benchmark after every push: you can see whether you're hitting the same performance, or whether some change broke something and performance is regressing. You can build that pipeline directly on top of this. The input record-and-replay ability lets you deterministically record your scenes — and the controllers too — so you can record something a real user might do and replay it.

The WebGL and Web Audio APIs can be faked as well, so that in mixed reality applications — not only VR — where you might have a scene with somebody talking, with speech-to-text running inside and clever things built around that (I have demos that do this), you can automate those parts too.
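First, that toy sketch of the determinism idea — emphatically not the tool's actual code. A harness can swap the non-deterministic browser APIs for seeded, fixed-step versions before the application boots, so two runs of the same recording produce the same sequence of frames:

```html
<script>
  // Replace the entropy sources a WebXR app typically touches with
  // deterministic stand-ins: a seeded PRNG and a virtual clock that
  // only advances when the harness ticks a frame.
  (function makeDeterministic(seed) {
    let s = seed >>> 0;
    Math.random = function () {          // mulberry32 seeded PRNG
      s = (s + 0x6D2B79F5) >>> 0;
      let t = Math.imul(s ^ (s >>> 15), s | 1);
      t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
      return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
    };

    let now = 0;                         // virtual clock, in ms
    performance.now = () => now;
    Date.now = () => 1500000000000 + Math.floor(now);

    const queue = [];
    window.requestAnimationFrame = (cb) => queue.push(cb);

    // The harness, not the display, decides when a frame happens:
    window.__tick = function () {
      now += 1000 / 90;                  // one 90 fps frame step
      queue.splice(0).forEach((cb) => cb(now));
    };
  })(42);
</script>
```

With the clock and scheduler under harness control, replaying the same recorded inputs yields identical frame sequences that can be compared pixel for pixel.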
Back to the faked WebGL and Web Audio inputs: you can have harnesses that drive those demos automatically and check — is it really doing the transcriptions? Is it working as it's supposed to? You can fake those inputs without breaking your application, and measure the impact of your code without the overhead of those APIs.

So I would argue that the web is the ideal platform for making augmented and virtual reality open and accessible to everyone, and if we develop the tools and technology to support it and let everyone use it, this will flourish. We have already seen Lenskart, Nike, even Google experimenting with it. Already, when you search for certain things in Chrome, it can show you augmented reality animals right in your room, from the browser. These experiences are coming, and we need to be ready to support them and to test their performance.

This is more important for VR and MR because if a website doesn't open within a certain time, the worst effect is attrition: people leave your site sooner, and you get fewer visits. For VR, a badly performing app has the potential to make your user nauseous — potentially ill, at least temporarily. That not only makes users averse to your application; in certain jurisdictions it also opens an avenue to actually get sued, if it is genuinely making your clients unwell. So it is very important to test your applications.

Most of what I've covered here is very much applicable to native applications too, but how we test native applications is a completely different talk. For some of the things I've discussed, including the applications and how world sensing works, these are links you can visit to learn more and use them. Almost everything we do is open source, so you will be able to use it — just keep the license in mind.

With that, I would like to thank everybody. You can contact me by email — my Gmail — or just tweet at me on Twitter; I'm pretty responsive on both. The WebXR specification and samples are available at those two links. This talk will be available at this link later today, and I will also send a PDF and an interactive version of the talk to the conference organizers, so it should be available to everybody.

And I would like to give thanks and a shout-out to the Mozilla mixed reality team who worked on WebXR: Blair MacIntyre, a principal research scientist and also a professor at Georgia Tech; Trevor, who built a lot of these things; Fernando, who was my mentor; the platform manager on Firefox OS who got me involved in this; Sam, a creative technologist who built one of the demos I showed; Ada Rose Cannon from the Samsung VR team, who builds a lot of these web APIs into the Samsung browser so we can use it predictably on Samsung devices; and Kevin, who was on the initial team that built A-Frame.

With that, I think I have just about five minutes left. Thank you for listening to my talk — I know it's early in the morning to be sitting in front of a laptop. I'm open to any questions or discussions you might have. Thank you, thanks a lot.

So, the first question I see is: what is the automation tool used?
This is a homegrown command-line tool we built, gfx-test. It's available on GitHub, and one of the links on the resources page will give you an idea of the tool's capabilities and how to use it. What I want to emphasize is that before this talk we used it mostly for internal purposes; it is still very much a work-in-progress tool, and there is a lot you can build on top of it. Since it's an open source project, you can use it. It was built by Fernando — one of the people I gave a shout-out to, who was my mentor — and you are free to use the tool and build on top of it if you want. It is not part of any test suite, so we don't have integrations with anything, but like I said, it's a command-line tool and you can integrate it into your pipeline. The links are in the resources slide. Thank you, thank you, everybody.