Hi, I'm Philippe Coval and today I will talk about the immersive web of twins.

So who am I? I'm a French engineer from Brittany and I've been involved in open-source software for a long time. I'm currently part of the Mozilla Reps program. I've worked in industry in the past, mostly on embedded Linux, and I'm currently available, so you can reach me at purl.org/rzr, where you can see previous presentations and demonstrations. I'm open to cooperation or creative jobs.

So where are we coming from? I want to start with some history. In the fifties, somebody prototyped a personal theater; this is the ancestor of virtual reality. In the eighties, the sci-fi cyberpunk writer William Gibson wrote about cyberspace, where people could interact outside reality. One game changer was Doom, a first-person shooter that could be installed on a PC and provided a very intense experience. A couple of years later, 3D went online with the Virtual Reality Modeling Language, which evolved into X3D over the years. But the integration wasn't that smooth, because you needed to install an extra plug-in in your browser, so there was no major commercial success. Second Life is a game which was native, not on the web, and it was quite interesting because people were able to create content and share it through the game. Then the WebGL API allowed JavaScript to get accelerated 3D in the browser, so the installation issue was gone. And there are many devices on the market, like mobile phones, which ship an orientation sensor, so you can put one into a DIY cardboard VR headset: as you move the phone, the orientation sensor updates the camera view. From this you can also create augmented reality applications like Pokemon Go, which are just 3D content over the 2D stream from the camera.

So let's talk about the openness of immersive reality. There are two major projects I want to mention. One is OpenXR, a standard from the Khronos Group that wants to solve the fragmentation between augmented reality and virtual reality drivers and the top-level layers, by defining a middleware for applications. There is an open implementation, Monado, for Linux, from Collabora. Similarly, on the web we have the same kind of high-level API to provide an abstraction between devices and applications: sensors with six degrees of freedom indicate the position of the user, and controllers let you pick and interact with objects. So in the end you can create applications that run on different devices and in different browsers. Firefox Reality is one of the browsers implementing WebXR, which is exposed to applications through the navigator.xr object.
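For example, detecting WebXR support takes only a few lines; here is a minimal sketch (the fallback messages are only illustrative, but isSessionSupported is part of the WebXR specification):

    // Check whether the browser exposes WebXR at all,
    // then ask whether an immersive VR session is available.
    if (navigator.xr) {
      navigator.xr.isSessionSupported('immersive-vr').then((supported) => {
        if (supported) {
          console.log('Immersive VR available: show an "Enter VR" button');
        } else {
          console.log('WebXR present, but no VR device: fall back to flat 3D');
        }
      });
    } else {
      console.log('No WebXR: render the scene on a regular 2D page');
    }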
So how do you get into a WebXR application? If you have a VR headset, it probably supports a web browser; if it doesn't, you can install Firefox Reality. Even on a mobile phone you can use a cardboard VR headset: thanks to the orientation sensor you can view 3D content, and it moves at the same speed you move your head. If you use this on a regular browser, you can still see the 3D world, but without the immersive feature; you can, however, emulate the sensors a VR headset has by using an emulator extension. That can also be useful for developers.

To create a WebXR application you can use a high-level framework. The lowest level is WebGL, which brings OpenGL to JavaScript. Then you have a scene graph with Three.js, which is built on WebGL. And A-Frame is based on the Three.js scene graph and provides custom web components to create a 3D scene, as if you were writing HTML code using tags; the a-box tag, for instance, is just a cube.
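As a rough illustration (a minimal sketch of my own, not from the talk; version 1.0.4 was current around that time, any recent release should do), a whole A-Frame scene is plain HTML:

    <!-- A minimal A-Frame scene: one cube declared with an HTML tag -->
    <html>
      <head>
        <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
      </head>
      <body>
        <a-scene>
          <a-box position="0 1 -3" color="#4CC3D9"></a-box>
          <a-sky color="#ECECEC"></a-sky>
        </a-scene>
      </body>
    </html>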
Another framework is Babylon.js, from Microsoft, also based on WebGL, so you get decent performance there too. And I want to mention the glTF format, which is also specified by Khronos. It compresses assets to make them easily publishable on the web; it's like JPEG for 3D. It uses a JSON structure with compressed geometry, and Blender supports glTF export.

So the web as a platform, as we know, is not only flat; it's not only for 2D documents. It can be 3D, and there is an immersive feature, WebXR, which is superseding WebVR. It's dynamic, because you can create scripted applications using JavaScript, and it's all interoperable with web services. The web is also traversable: you can jump from one world to another, people from different cultures can interact, and you can interact with connected devices too.

So let's talk about the Internet of Things. What we are talking about here is connected devices that can be used over the Internet; in other words, an interface for accessing a sensor value or changing an actuator value. The Web of Things is a specification from a W3C working group which provides commodities to describe things and make them accessible on the World Wide Web. Mozilla made an implementation called WebThings.

Now let me mention the digital twin concept. It can be defined as a live replica of a physical entity. This means we have a model which moves at the same time as the real object. The web of twins experiment is something I made to try to bind the Web of Things to the immersive web. I've been using the WebThings Web of Things API and the A-Frame framework for the rendering. I made this robot from a couple of motors, so I can control each motor individually. On this dashboard I'm changing the angle of several motors, and it updates in the real world and also in the 3D world. The robot is made of different parts, so if I set different angles, the robot moves the claw here, and I can change the orientation of each part of the robot. At the same time I have this 3D model, which is also updated in the background. Both work at the same time because they're connected to the same WebThings gateway. It's not that smooth, because this just sets the angle of each motor; there is no smooth transition. I also have it running on my phone, where you can see different orientations, and you can use a VR headset too: with the embedded browser you can look at the model as if you were moving it on a flat 2D desktop, then switch to the immersive web, where you can look around the object and see if there is any collision or if it's moving as desired.

So let me talk about the WebThings platform. It's smart home software: you can use it at home to control all your devices. All the devices are connected to a gateway and you have total control; there is no third-party cloud involved. Everything is made with privacy by design, and you control everything from a UI dashboard, where it is very simple to create basic automations. It's extensible, so you can support new devices or new protocols, and there are around a hundred community contributions. All of this was possible because it was built on a simplified version of the W3C Web of Things Thing Description.

Here is another demonstration using a web thing and VR: I'm playing with some sensors while looking at different shapes, and the 3D model is updated at the same time. You can use different headsets and get an augmented reality view, here using the Exokit browser on the Magic Leap. I also made another application, which is like another view of the gateway. What you see on the left is the dashboard of the Mozilla gateway, where I have different switches controlling different things: on my Raspberry Pi I have some sensors and so on, I have this switch which controls this fan, and I made another switch here as well. And this is the same dashboard, but in a 3D world. I have this MQTT smart outlet, and if I press on it, it toggles my plasma lamp. Everything is connected, just shown from a different perspective. This is simply an immersive dashboard: if you have a VR headset you can jump into it and get access to different controls and some monitoring. I have this dome where I can look around at all my switches and decide to interact with some of them. Another one uses the camera stream as the background of the 3D dome.

Let's move on: how is this possible? It's pretty simple, actually. The sensor exposes a web API using the Web Thing API, which provides real-time updates over WebSocket as well as plain HTTP verbs. I connect this thing to a gateway, and then I can share my device through the internet in a secure way using a web token. Another application can then listen for updates of the actual thing and update the 3D model accordingly. It works in both directions, and you can use this component as an example.
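As a sketch of that flow (my own example with placeholder names for the gateway host, thing id and token; the propertyStatus message type comes from the WebThings WebSocket API):

    // Listen to a thing on the gateway and mirror its state into the 3D twin.
    const token = '<JSON Web Token issued by the gateway>';
    const ws = new WebSocket('wss://gateway.local/things/lamp?jwt=' + token);

    function updateTwin(name, value) {
      // Here you would update the A-Frame entity: recolor the bulb, rotate a motor...
      console.log('property', name, 'is now', value);
    }

    ws.onmessage = (event) => {
      const message = JSON.parse(event.data);
      // The gateway pushes property changes as 'propertyStatus' messages.
      if (message.messageType === 'propertyStatus') {
        for (const [name, value] of Object.entries(message.data)) {
          updateTwin(name, value);
        }
      }
    };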
Here is another simple example. I have the gateway running on a Raspberry Pi, and on top of it an extra board with some extra sensors, like temperature. I also have a lamp, which is an LED matrix, where I can change some properties; the message property, for example, updates this scrolling text: "Welcome to Libre Graphics Meeting". My next task: can I connect this smart light bulb? It doesn't work out of the box, because it uses Bluetooth Mesh, so I had to write an adapter and connect the adapter to the dashboard. It scans for devices, I can add the new device, and then it appears as another thing. If I press this shortcut it toggles on, and I can change specific properties, like the brightness of the lamp, which is updated at the same time; I can change the color too.

So I made a 3D application which is basically a digital twin of my lamp: if I change one of its properties, it is updated at the same time. To view this you can use a VR headset; if you don't have one, you can get this super cheap cardboard, put your mobile in it, and the mobile's sensors will display the right camera view. If you switch to VR mode, you get two different views, one for each eye. And if you have the emulator extension in the browser, you can also simulate the sensor position: if I get access to my virtual headset here, I can change its orientation, and the display is updated as if I were moving around. If you have a device with a controller, you can also trigger events on the controller's buttons, which is useful for creating interactive applications.

Now let me show how it works. I have HTML5 here. Looking at the source code, it imports the A-Frame library, which means I can use custom web components and describe a scene composed of a sphere, which is the light bulb, and a cylinder at the bottom, the screw of the light bulb. Here I have the color property, which is what I'm changing, and these two keywords, webthing and properties: webthing is the web component, and properties is a binding function that adapts the web thing's updates to the XR view. In the update function, for the different properties of the thing, I can decide what to do: when the "on" property changes, I set the color to white if it's on and to gray if it's off. That's fairly easy. Binding the color property is super easy, because the schemas are aligned: if the color value changes on the model, it changes in the view. For the brightness I'm using the roughness, which is quite similar. This component is open source, so you can have a look at it if you want.
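In spirit, the binding looks something like this (a simplified sketch, not the actual source, which lives in the repository; the component and property names are illustrative):

    // Illustrative A-Frame component mapping web thing properties to the entity.
    AFRAME.registerComponent('webthing-properties', {
      schema: { on: { default: false }, color: { default: '#ffffff' } },

      // A-Frame calls update() whenever one of the schema properties changes.
      update: function () {
        if (this.data.on) {
          this.el.setAttribute('color', this.data.color); // lit: the thing's color
        } else {
          this.el.setAttribute('color', '#808080');       // off: gray the bulb out
        }
      }
    });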
Here is one last demonstration, using Mozilla Hubs. Hubs is a meeting platform where people can join a virtual world and have live chat using WebRTC, so you can talk together using voice. Each user is represented by an avatar and can move around the world. The audio is spatialized, and both users here, one on mobile and one on desktop, are sharing the same environment; it can scale to many more users. You can add your own custom assets, so I made a model of this small toy house. On this house I have some sensors, and when I move the house to a different orientation, it updates the model's position in the virtual world at the same time. To do this, on the roof of the real house I have a Raspberry Pi which ships this Sense HAT board with an accelerometer, a gyroscope and a magnetometer, so I can know the direction, the angle to the north. If I move it to a different orientation, the angle slowly changes and converges to the actual position relative to north.

If you want more details about this experiment, you can check this link: purl.org/rzr, the web of twins page. Your feedback is really valuable to me, because these are just small experiments I'm doing over time, and I want to make them easy to use, so your feedback really helps. You can check previous demonstrations and also the source code; most of it is open source. If you have any questions, I am in the LGM meeting room, so you can ask me on Matrix now, or later online. Thanks for watching.

Okay, so I'm listening for questions. While people are preparing their questions, I really want to thank all the team in Rennes, ActivDesign, for setting up this event online. I wish it had been in Rennes, but maybe next year, who knows. I hope everyone enjoyed the video.

Okay, no questions here for now, but you can contact me any time later on IRC, Mastodon, Twitter and so on. I try to share many sample projects, so if you want to get started, the easiest way I can suggest is to emulate the device first: if you don't have the device, you can start creating the application with a fake device, like a mock sensor. This is the easiest way to get started, and once you get your VR application running, you can try to substitute the simulator with an actual device. It's quite the same: it's the same API, so the web code will be the same. So try to separate the problems and start from one end, then substitute. But there is no real order; you can also start with the IoT part and then go to the full picture.

So, I have a question here. It says: this is very interesting; compared to Processing, what programming level does an artist have to reach? Okay, first, I know about Processing, but I never used it. I believe it targets different people, with a different mindset, because I'm not an artist; I've enjoyed doing some demos on the Amiga and so on, but I'm mostly a software programmer. As for the programming level, I think there is something interesting in doing things on the web, because JavaScript is a really popular language. Many people use it through frameworks nowadays, but the language itself is not that difficult, and I think it's interesting to start by sticking to the basics of the language. In the browser you have a lot of APIs, so I believe it's quite easy. When I learned JavaScript I wasn't really into low-level programming, and I got into it quite easily; compared to Python, it's the same kind of level to get into.

Okay, somebody, Luna, is saying that it's always fun to watch what other Mozillians are doing. Well, I'm not actually a Mozillian; I'm just contributing to the project, and now I'm part of the Reps program, but I haven't been involved in the design of the project. And they asked about the Swedish translation. Yes, I think it was maybe not the latest release but the one before where some localization was introduced. I was asked to do the French translation, and somebody did it before I started, so I hope it's the same for Swedish. I don't speak Swedish, but you can double check, and if there is none, you'd need to provide those translation strings. It should not be that difficult, but it wouldn't surprise me if it's already there.

Another question: what is a minimal stack to work with, like a mock sensor in pure in-browser JavaScript? Would you recommend a minimal stack without hardware, except for the PC? Yes, that's what I was referring to just before the question. In my case I'm using a Node.js application, which is just a server with a REST API talking JSON. You can also run it on a different runtime like IoT.js, which is very similar to Node.js but targets very constrained devices.
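A minimal sketch of such a mock device, assuming nothing but Node.js (the endpoint and the fake value are made up):

    // mock-sensor.js: a tiny REST endpoint serving a fake temperature reading,
    // so the WebXR client can be developed without any hardware.
    const http = require('http');

    http.createServer((req, res) => {
      if (req.url === '/properties/temperature') {
        res.setHeader('Content-Type', 'application/json');
        res.end(JSON.stringify({ temperature: 20 + Math.random() * 5 }));
      } else {
        res.statusCode = 404;
        res.end();
      }
    }).listen(8888, () => console.log('Mock sensor on http://localhost:8888'));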
If you are working on a PC, that minimal stack with no hardware except the PC is fine. Yeah, good question. I've also been working with microcontrollers; let me take this board here. This is an STM32 which has an Ethernet port, which can be really convenient if you want access to the network, not the full internet but the local network, so you can connect it to the Mozilla gateway running on the Raspberry Pi. If you don't have one, you can run the gateway software on a regular PC too: there is a Docker container, a Debian package and an RPM as well. I know other devices like the ESP8266 and ESP32 are quite popular, so you can get started with those without a big investment. You can also buy a smart outlet and replace its firmware with a custom one that supports the Web Thing REST API; then you get a very nice IoT device speaking Web Thing natively without too much effort. That's something I want to publish later, maybe; otherwise you can use an existing community firmware like Tasmota or others.
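Once a device speaks the Web Thing REST API, toggling it is a single HTTP call; a hedged sketch (gateway address, thing id and token are placeholders):

    // Turn the outlet on by writing its 'on' property through the gateway.
    fetch('http://gateway.local/things/outlet/properties/on', {
      method: 'PUT',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer <token>'
      },
      body: JSON.stringify({ on: true })
    }).then((res) => res.json())
      .then((state) => console.log('outlet is now', state.on));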
Another question: is the Web of Things API specification practical? Are Chrome and Firefox up to the spec, are the implementations diverging much, is it practical to work with in a real application? Well, the Web of Things specification is mostly a protocol for things to interoperate. It's not something a user agent like a web browser will use directly; it's not made for the browser. It's mostly made to enable smooth integration, using web technologies, between different kinds of devices. In my presentation I spoke about the gateway, which connects all the things together and then provides a web API to reach them. But as I said, for the user there is not much IoT visible; it's only a graphical user interface. The Web of Things is mostly a language in which things can talk to each other, not directly to a user; users go through a graphical UI rather than accessing the thing directly, unless you want to write a quick script and access a resource of the thing yourself. But it's not something made for the user's browser. I hope I answered the question; I can take more questions, I think I have a couple of minutes left.

If you want to get started, I suggest you first join the Discourse forum from Mozilla about this IoT project; it's called Mozilla IoT. I can share the link in the chat room. We are also chatting about the project in a Matrix room; it used to be on IRC on the Mozilla server, but it has been moved to Matrix recently, which can be convenient because you can read the log of what happened while you were offline.

Does somebody have one more question? We still have 31 minutes until the last talk of the day; the last talk, or the next talk, or maybe there is none. Most of, maybe all of, what I shared has been published on GitHub. There are different projects, so try to identify the right one, and you can file issues there; that would be helpful, because I have no idea how usable this is. Ah okay, it's the last talk of the day, because just after this there is a workshop; so no next talk, there is a workshop and a lightning talk. Okay.

So, thanks everyone. I just hope the streaming went fine. I'm really happy to have had this event online; I hope next time will be different, but that's the way the open source community is reacting to all kinds of hazards. I hope everyone is fine; just take care of what you are doing, and keep it free. Thanks, everyone. Okay, I'm telling you goodbye now; you can find me in the chat room later today, I will stay here. Thanks, and see you next time. Bye.