So, for our next speaker: sometimes you have to fill a dev room, you don't really know what to do, and then there is one guy who's like, oh, I like to talk. So I'll present myself; the talk is from me. We'll still be super strict about the timing. I also have a demo that might work; it worked yesterday, but this morning only kind of, so I'll just ask you to cross your fingers, but anyway, we'll try. So, 25 minutes: high-end augmented reality using JavaScript, here in the JavaScript dev room. High-end augmented reality means, in my opinion, using the best commercially available device. You can take a photo of this slide, but the very last slide of the presentation is a big QR code that gives you all the slides, all the links, everything. So, what are we going to see? What's AR? What are the limitations of native AR? What's AR in the browser? The specifications, because it's such an exciting topic. The hardware. And a live experiment that might kind of work, but we'll see; I have a video backup just in case. Who am I? I'm Fabien. What do I do? I work for the European Parliament. I can't really say what I do there, so that's just me smiling at a conference. I also work for UNICEF. Who knows UNICEF? Keep your hands up if you know UNICEF has an Innovation Fund. Okay, more than zero, I'm pretty happy. For those who don't know, I'll put a link at the end. But yes, you can also do tech for good. I'm also a Mozilla Tech Speaker and volunteer, so I presented some of my VR and AR work at the Mozilla All Hands last year. So, to get everybody excited: no, we're not going to debate Vi versus Emacs, that's okay, it's not that hot a topic, but what makes code beautiful? Is that beautiful code? I don't know if you can see in the back. Is that beautiful code? Is that beautiful code? That's beautiful. So, we have different... it's too colorful.
So, the idea is that this is kind of an excuse for me to show you, afterwards, the shittiest code you've ever seen, but also to argue a little bit why, in my opinion, it is beautiful code in terms of exploration. That first one is a presentation from Bret Victor on how, when you have a problem to solve, if you have a way to visualize what's happening in the background, so that you don't have to process it yourself but let the computer do it for you, then you're going somewhere. So that's beautiful if you're an artist, or if you have to explore, let's say, a new API. The second is beautiful code if you're in the demoscene: if you have to do a 4K demo, if you have to be really compact, that's the kind of code you want. And the last is beautiful code if you're Linus Torvalds, if maintenance and clarity are what matter to you. So, that's up to you. But yes, expect the shittiest code you've ever seen. What's augmented reality? Who has tried a HoloLens? Who has tried a little app on your phone to see how to paint your nails or how to change shoes or whatnot? Who has coded anything for AR? Who has coded anything for AR on the web? So, somebody knows it's possible. I think AR is really cool, that's why I work in AR, but I also think there is a lot of hype. At the very beginning, when you want to have AR, you're pretty much ready to do anything, including downloading and installing an app, which takes, I don't know, five minutes, and then using the app for about one minute. That's not really worth it: you spend more time installing the app than running it. So, I think even if you're one of those big guys, nobody actually cares about your app. No offense, but that's what the numbers say. So, yeah, just don't make an app for a two-minute experience if it takes longer than that to download it. Most of AR now is bite-sized experiences: just a little taste, just a little something, but you're not going to spend three hours on it.
So, AR, the definition, is pretty simple. It's augmented reality: that teapot doesn't actually exist, and it has a URL. That's the one key thing to bring home. Just for historical color, it started at Georgia Tech. Of course, if you do the history of tech it always goes back earlier, but in my opinion that's where it really started, with Blair MacIntyre and the Argon browser. Specs, I like specs. The promise of WebAR, of course, is that it runs on every machine possible, which means all the major browser vendors and all the major hardware need to have it running. The idea is to have a debate, a discussion, on which specific device is the most exciting, but still get together on what makes a proper API. Basically, what you need for AR, like for virtual reality, is the position of the headset in space. Let's say I start here, I'm at (0, 0, 0). I step forward, I'm at (0, 1)... no, (0, 0, 1). Well, it depends on your coordinate system, but basically you need to keep track of your position in space. Same for the controllers. That's, let's say, the basics. Here is an emulator, a plugin for Chrome if I remember correctly, that emulates the WebXR API, so you can virtually have a virtual headset in your browser and move it around. The specs are changing all the time, and hardware support grows. Basically, hardware support goes from the HoloLens at the bottom, to the Google Tango, to your normal iPad, to any kind of random phone with a camera, as long as you have a marker. This kind of device is about 3,000 bucks; the HoloLens is about the same price, so it's pretty expensive. I don't have it at home just for fun, I don't think it's worth it yet, but still, when you have an expensive device like this, you have a depth camera, you have a bunch of extra sensors, and all the processing is done on the device.
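The "track your position in space" idea above can be sketched with a toy pose model. This is not any real device or WebXR API call, just an illustration of the convention the speaker stumbles over: in WebXR-style coordinates (right-handed, Y up), the viewer faces down -Z, so starting at the origin and stepping forward one meter lands you at (0, 0, -1), not (0, 0, 1).

```javascript
// Toy pose: a position plus a yaw angle. yaw = 0 means facing -Z
// (the WebXR convention); positive yaw turns left (right-handed, Y up).
function stepForward(pose, distance) {
  const { x, y, z, yaw } = pose;
  return {
    x: x - Math.sin(yaw) * distance, // sideways component of the facing direction
    y,
    z: z - Math.cos(yaw) * distance, // forward component: facing -Z at yaw = 0
    yaw,
  };
}

const start = { x: 0, y: 0, z: 0, yaw: 0 };
const oneStep = stepForward(start, 1);
console.log(oneStep); // { x: 0, y: 0, z: -1, yaw: 0 }
```

A headset does the inverse of this, estimating the pose from sensor data every frame, but the bookkeeping it hands to your code is exactly a position and orientation like this one.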
So, in my opinion it's not worth it yet for an individual, but as a professional, it's the best way to explore. And then there is a new contender: Magic Leap just made its device public for creators, in July last year, I think. So, the concept of how it actually works: you have motion tracking, so the device can locate itself in space thanks to either the depth camera or the RGB camera, doing a fusion of those sensors. You're able to tell when you go from one point to the next; there are different techniques for it. And out of this, both your movement and a scan of the room or wherever you are, you gain room knowledge. For example, as you have probably seen before: where are the flat surfaces? That's usually where you will put objects. Or the largest flat surface that is vertical. And you can keep building on those abstractions, up to detecting more complex objects. One of the key difficulties is relocalization. For example, if I use my AR device here, and then I go away, when I come back, if I had placed elements in space, I need to know that they are still there, that they are back when I'm back. That's actually a pretty tricky problem. I was trying to think of a way to make you experience it, but it's a bit tricky. It's a bit like closing your eyes while a friend brings you somewhere else; you open your eyes and you're like, okay, where am I, actually? You need to find similar markers or points of interest that allow you to compare with where you were before. It's a hard problem. Yeah, it's a hard problem. In terms of browsers and implementations of the specifications: if you just use the normal RGB camera, with a marker for example, it works on pretty much anything. If you use more experimental hardware, well, you'll have to go to devices like the HoloLens for now.
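The relocalization problem described above can be conveyed with a deliberately naive sketch: you remembered some landmark positions, you "open your eyes" and see them again from somewhere else, and you estimate how far you moved. Real systems match visual features and solve for a full 6-degree-of-freedom pose; this one-to-one translational version, invented purely for illustration, only shows the core idea of comparing points of interest.

```javascript
// Estimate the device's displacement by averaging per-landmark offsets.
// Assumes landmarks are observed in the same order they were remembered,
// which is precisely the hard part that real relocalization has to solve.
function relocalize(remembered, observed) {
  let dx = 0, dz = 0;
  for (let i = 0; i < remembered.length; i++) {
    dx += observed[i].x - remembered[i].x;
    dz += observed[i].z - remembered[i].z;
  }
  return { dx: dx / remembered.length, dz: dz / remembered.length };
}

// Room corners as saved earlier, then the same corners seen after moving 2 m on x.
const saved = [{ x: 0, z: 0 }, { x: 1, z: 0 }, { x: 0, z: 1 }];
const seen  = [{ x: 2, z: 0 }, { x: 3, z: 0 }, { x: 2, z: 1 }];
console.log(relocalize(saved, seen)); // { dx: 2, dz: 0 }
```

Knowing the displacement is what lets the device put virtual elements back where you left them when you return.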
Yeah, the different browsers: you have Argon, like I said, from Georgia Tech; Chromium, from Google, with WebAR; and the Mozilla WebXR Viewer, which you can use on your iPhone already now. And a bunch of different libraries; I'm not going to explore those. Just a word on the Magic Leap clip: that's a cool video, I don't have time to show it to you, but it's one vision of AR that, in my opinion, is quite dystopic, where you have so much information and a lot of manipulation. That's why I'm excited to present this to you today at FOSDEM. Again, I think AR is pretty exciting, it's pretty cool, but in the end, what matters is what you can do with it, which means what you guys can build with it. So, that's also the newest part. It's a pretty straightforward architecture, so I'll just keep that slide; you'll see it after, it will make more sense, don't worry about it. License: MIT. I'll do a little example. So that's... oh yeah, we'll see, if the network allows. That's the result. Okay, no, we don't want to do this. So that's a point cloud, all the little dots you see, thanks to the depth camera. And then you can put elements in space. Those, for example, are my nodes that I just put in space, and then I can move around, interact with them, make them smaller or bigger. And thanks to the different markers it makes in space, if I go somewhere else and come back, it will relocalize and put the nodes back in the right position. So, that's the moment. Let's see. If you have questions, that might also be the time, but let's first try and see if it works. I'm not religious, but if I were, this would be the moment I would pray. Okay, let's try. I can see you guys, just in case you're wondering. You will see. Hopefully, everything works. You'll see what I see through the headset. Maybe. So I need to make sure it's connected. Yes, it's connected. So I need to remember now, which... and I'll explain a bit more. Yes, yes. Okay, so...
Well, I hope I have the same IP address, otherwise I need to poke around a bit more. Yay! So, okay, I'll explain a bit. You see what I'm seeing in the headset, from the browser. On the headset, I think I can actually... let me see if I can move. That's a bit abstract, but I'll explain. So... what do you think the blue box is? Do you see a blue box? Yes. What do you think it is? That's my eye: I get the position of where each of my eyes is looking. And then, what is the gray area? Okay, easier one: what's the red area? It's a flat surface. And the gray area is basically... what is the gray area? It's a 3D mesh of the room that's being constructed. Okay. Oh, that worked. One of the biggest difficulties, to be honest, and that's why I insisted on making this one demo work, is that AR is really hard to convey. It's a personal, individual experience, and it's still a bit out there, so you need the video running so that you can see it. Well, that wasn't working yesterday, so that was a good surprise. So, a little bit more... oh, that was the backup, but I guess we can just keep that. Let's see. That was another simple example, with a little 3D scene added on top. So how did it actually work? Getting into the cables. Some of the basic functions, like I said before, or actually like you guys said before: the blue cube is basically eye tracking, so you have the position in space of where the eye is looking. Some of the basic functions are to get the mesh of the room, to get the red planes, then hand tracking or eye tracking. Eye tracking also allows you to do more fun stuff. That was yesterday at the JavaScript stand, and that was yesterday afternoon, so everything is super fresh. My first proper component for ExoKit, which is totally useless, but that was actually from a bit earlier this morning. So I start the code that I made, and then it loads ExoKit, which is kind of a browser, but not quite; I'll explain in a while.
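The "get the red planes" function above, finding flat surfaces in the scanned room, can be sketched very naively: bucket the scanned points by height, and treat any bucket with enough points as a candidate horizontal surface. Real devices (HoloLens, Magic Leap) do far more robust plane fitting in firmware; every name and threshold here is invented for illustration.

```javascript
// Naive horizontal-plane detection over a scanned point cloud:
// group points whose y (height) is within the same bucket, and keep
// buckets that are dense enough to count as a surface.
function findHorizontalPlanes(points, { bucketSize = 0.05, minPoints = 3 } = {}) {
  const buckets = new Map();
  for (const p of points) {
    const key = Math.round(p.y / bucketSize);
    if (!buckets.has(key)) buckets.set(key, []);
    buckets.get(key).push(p);
  }
  const planes = [];
  for (const [key, pts] of buckets) {
    if (pts.length >= minPoints) {
      planes.push({ y: key * bucketSize, pointCount: pts.length });
    }
  }
  return planes;
}

// A toy "scan": a table top at y ≈ 0.8 m plus two stray points.
const scan = [
  { x: 0, y: 0.80, z: 0 }, { x: 1, y: 0.81, z: 0 },
  { x: 0, y: 0.79, z: 1 }, { x: 1, y: 0.80, z: 1 },
  { x: 0, y: 1.50, z: 0 }, { x: 2, y: 0.10, z: 2 },
];
const planes = findHorizontalPlanes(scan);
console.log(planes); // one plane near y = 0.8 with 4 supporting points
```

Stacking abstractions like this, from points to planes to "the largest vertical surface", is exactly the room knowledge the talk describes.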
So that loading moment is always a bit of suspense. So, what's happening? I have a 3D scene, super basic, just primitives, and it blinks, or rather changes color, when I blink. What's the point of this? I'll explain a little more at the end; basically it's another way to interact, and something you basically cannot do with any other device. So how is it done, more precisely? I warned you before about the ugly code; I think it's now. Yes, it's now. It's ugly, but it's working, because I just work on prototypes and proofs of concept, so it doesn't have to be code that's beautiful and maintainable. It kind of works. And I put console logs everywhere, sometimes two in a row, just because I don't know what's happening. I don't know those APIs, so that's the kind of exploration I do. So basically, can you actually see this? Who is familiar with A-Frame, just out of curiosity? Okay, same guy who tried before. A-Frame is basically how to do 3D on the web, either for VR, for AR, or just for 3D. It's using web components, so it's basically some kind of HTML. Super basic: to have a box, you write a box; to have a sphere, you write a sphere; I mean, you get it by now. So what I do, once I have those entities: I have a little component. I register this component, which I call link. Then I get the entity it is attached to. I store the very magic function, so that's the Magic Leap specific API. Then I check whether I actually have data on eye tracking, and then, on every tick, about every few milliseconds, if I do have data on my eye and it is actually blinking, I change the color. If it were presented a little bit better, you would see it's really, really clear, really simple. And yeah, it runs straight from the browser. I'm using Glitch, I don't know if you use Glitch, but it's a way to host a web page for free, and then you can run it everywhere. I think I'm pretty good on time. So now, that actually works.
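The component just described, register a component called `link`, poll the eye data on every tick, change the color on a blink, can be sketched as below. The color logic is split into a plain function so it can be shown on its own; `getEyeState` is a hypothetical stand-in for whatever device-specific accessor the headset exposes (on stage it came through ExoKit), not a real API. `AFRAME.registerComponent` with `init` and `tick` is A-Frame's actual component interface.

```javascript
// Pure decision logic: which color should the entity be, given eye data?
function blinkColor(eyeState, openColor = '#4CC3D9', closedColor = '#EF2D5E') {
  // No data yet (tracking lost, or the headset still starting up): keep the open color.
  if (!eyeState) return openColor;
  return eyeState.blink ? closedColor : openColor;
}

// A-Frame wrapper, only registered when A-Frame is actually loaded in a page.
// With it, <a-box link></a-box> changes color while the user blinks.
if (typeof AFRAME !== 'undefined') {
  AFRAME.registerComponent('link', {
    init: function () {
      // Hypothetical accessor for the device's eye-tracking data.
      this.getEyeState = window.getEyeState || (() => null);
    },
    tick: function () {
      // tick runs every frame, i.e. every few milliseconds.
      this.el.setAttribute('color', blinkColor(this.getEyeState()));
    },
  });
}

console.log(blinkColor({ blink: true }));  // '#EF2D5E'
console.log(blinkColor({ blink: false })); // '#4CC3D9'
```

Keeping the "magic" device call isolated in `init` is also what makes this kind of exploratory code easy to rip apart later.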
The trickiest part, let's say, is not about the technology; it's a lot more about, okay, you've got all those new sensors, and we heard about IoT before, but this has consequences. I'm pretty excited by these kinds of devices, but that's the house of a colleague, after he used a HoloLens there. And then you have, it's a bit tricky to see, but here a semi-transparent plane, here a little bit of a plane. So you're outside his house, but thanks to the model built by the device, you still have this kind of information. I don't know if you've heard about the augmented reality cloud, the AR Cloud: it's very practical, because before I go to my friend's place I can already see what it's like. But if my friend is not my friend anymore, because I've done some silly presentation and that's enough, I need to stop seeing him, well, I still have access to this data. Or worse, if we were never friends at all. I mean, you know about security, you know it's not perfect; if anybody else, an adversary, can get access to that information, that's a lot of problems. So these kinds of devices are exciting because you can do so much with them, but that comes with a lot of constraints on what information you can actually share and how to do it. So I recommend those blog posts on the Mozilla blog. And then, how to do it more precisely? Well, you need a bit of information, like how the device actually works, so you need workshops or presentations to help users actually understand the consequences of their actions. And well, you know the guy in the t-shirt, right? That was Mark, in case you don't. To give a little bit of perspective: I was in Brussels earlier this week at the CPDP conference on data protection and democracy.
Maybe one of the most important non-technical parts of the talk: there is a great book, which started as an article, maybe some of you have read it, on the age of surveillance capitalism. So, I'm excited by this technology, that's cool, I can do a lot of stuff, but it's also sold by companies; some of the stack is open, some is not, and those companies have an agenda. Does it fit precisely with your lifestyle or your needs? Maybe, maybe not. And if it doesn't, well, if it happens once again with Facebook, with Google, with others, if that pattern is there, then maybe it's not by luck. If you don't have time for this kind of book, she also has a shorter article on the topic, but there is a logic behind it. Which, of course, leads me to you. Somebody asked me yesterday, while I was looking all dorky next to the canteen, trying to finish the presentation, thinking, oh shit, it's never going to work, and then somebody came, looked a little bit at me, like, does he see me or not, and then that person asked, okay, so I showed him what it was, but what is it for? I have a bunch of ideas, I don't know if they're good or bad, but it doesn't really matter: AR is for what you want it to be. That's also why I'm presenting it at FOSDEM, that's also why it's open source and free software: because you have the capacity to make it happen, you have the capacity to actually implement it really well. So it means: I don't know what it is for. I have a bunch of ideas, and I'll show you one little one, but the point is, it's for you to implement.
So, ExoKit is under the MIT license, A-Frame is under the Mozilla license, and those are just examples. This little video is from a friend who was at the Magic Leap hackathon at MIT two or three weeks ago. They did an interface, a binary interface, for navigating in space using just a yes/no option. If you cannot see it, I'll zoom a bit more if I can: it's a person in a wheelchair, because he or she cannot move anything except the eyes. That's the kind of usage you can think of: you can use augmented reality with eye tracking for somebody who would otherwise not be able to use VR and AR for these kinds of experiences. Also, like I said, I work for the UNICEF Innovation Fund, and that's the kind of application you can use it for. So, little bits out there that you can use to explore, a few references, some of the docs, and the slides. I don't know, is it high quality enough to scan? You'll see, anyway, it's on the website. So I have three minutes for questions, or you got everything and you'll just use it, no need for questions. Is 3,000 too much? Like I said: to play with at home, maybe not worth it; if you need to work with it, it's okay. So for fun it's not worth it; for exploration, to work on, it's worth it. And if you do any kind of software project, you charge, I don't know, minimum 10,000 for a project that makes any kind of sense, so 3,000 out of 10,000 minimum is still not negligible, but it's not that much, really, in my opinion. I see a bunch of applications, but the beauty of the Innovation Fund is this: we actually tried before to say, hey, you have a problem, here is your solution. I was in Ghana and Nigeria last week, no, two weeks ago, and we tried before to say, hey, you in Ghana, your problem is this, your solution is that. That didn't work. So now what we do is search for entrepreneurs, for-profit companies, so that the wealth and the knowledge afterwards spread in the local ecosystem to actually implement the solution. Because we don't know: we have ideas, we have technology, we focus on, like, whatever: IoT, blockchain,
decentralization, machine learning, AI, VR, AR. So we have the technology, we identify some places, but what the technical solution is, that's precisely not up to us. We evaluate the different startups that applied to the fund, but after that, it's really up to them. 30 seconds, last question? You got everything. I'll be outside then, and if you have more questions because you're a little bit shy, don't worry, I'm there for like 25 minutes. That's it, thank you. Not that I'm a little bit shy, it's not the microphone.
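The binary yes/no eye-navigation interface from the hackathon demo mentioned above can be sketched as a scanning menu: options are highlighted one after another on a timer, and a single signal (an eye blink, the "yes") selects whichever option is currently highlighted. Everything here is invented for illustration; the demo's actual code is not shown in the talk.

```javascript
// A scanning selector: the one-switch interaction pattern behind
// yes/no navigation for users who can only blink.
function createScanner(options) {
  let index = 0;
  return {
    // The option currently highlighted for the user.
    current: () => options[index],
    // Called on a timer: move the highlight to the next option, wrapping around.
    advance: () => { index = (index + 1) % options.length; },
    // Called on a blink: commit to the highlighted option.
    select: () => options[index],
  };
}

const menu = createScanner(['forward', 'left', 'right', 'stop']);
menu.advance(); // highlight moves to 'left'
menu.advance(); // highlight moves to 'right'
console.log(menu.select()); // 'right'
```

With eye tracking as the input and AR as the display, this single-switch pattern is enough to navigate in space, which is exactly the accessibility angle the talk closes on.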