The virtual machine is the ROS one; the username is ros and the password is ros. [Some setup back-and-forth with the audience: there is a meta package that contains a bunch of stuff, and if you install the desktop-full variant you get even more; people sorting out copies of the image and how long installation will take.]

Okay, so for the workshop. I know some of you are still setting up, sorry about that. I still haven't managed... I did try, I promise I did try to do this workshop on Colab, the online version of Jupyter notebooks, but I still didn't manage because it makes a lot of use of the terminal. After my chat with Johan yesterday, maybe I'll do it next time, hopefully. I know we don't have much time, so I'm going to introduce very quickly a bit about ROS, and talk a bit about Gazebo, not too much because we don't have time. Hopefully what you can get out of today is mostly an idea of what ROS looks like.

Anyone here using Ubuntu 22.10? If you are, I'd just like to point out that when we install the ROS packages, you want to modify the sources list to use the Jammy Jellyfish (22.04) repositories rather than anything newer. [Some discussion about whether to use the virtual machine or a container, and which image people have.]

Okay, guys, back to the introduction. I'm sorry for the problems; if you have extra problems you can come talk to me and I can help you out. I'm happy to get everyone on board with ROS, and hopefully what you can get out of this workshop is an idea of how we work in the robotics world, because it's a bit different from normal software engineering, I would say, or not the same as other software, and a bit of what you can do with ROS and how you would use it. We're not going to build any autonomous cars or killer robots, but we're going to get an idea of how to tinker with this software.

To give you a bit of background, for those of you that didn't see the presentation on the first day: the Open Robotics foundation owns a few projects, and these are the main projects they're working on. There's a bunch of people working on this from many different backgrounds. The problem in robotics is huge, right? Nobody knows everything in robotics, so everyone picks a very tiny part of the robotics problem and just works on it. That's why, in my presentation on the first day, I was trying to explain that having open source take over robotics is a really important point for making robotics move forward, because no one can really build the entire thing in robotics alone.
The problem is too big not only for a single person; not even companies can solve the entire thing by themselves. Because we have open source, that is what is allowing us to overcome these boundaries. So the main software the foundation owns is ROS, and it started back more than 10 years ago, and at that time the robotics field was mainly focused on research. So the main focus of this software was researchers, right? It was very easy to set up, it was something they could work on, and there were no extra things you had to worry about. But then there was a slight change in the robotics world, where a bunch of startups started coming up and taking over this work, and then companies also started using robotics, and a new set of requirements came in. People wanted more stability, people wanted more security. So there was a rewrite of the software, and it's what we now call ROS 2, right? That's why I'm trying to get everyone on board with ROS 2, so we can learn the latest software. ROS 1 is still running, and I think I have a slide with the releases so you can get an idea. The latest release of ROS 1 is Noetic Ninjemys (we always use turtles for the names), and it will be supported until 2025, but it's already the last release that is going to be done on ROS 1. There's some community work for continuing ROS 1, but that is outside the foundation and it's not going to be supported from an official standpoint, right?

Then we have Gazebo, which is the simulation environment that the OSRF offers. It doesn't have to be used with ROS, but of course it has a lot of packages that make it very easy to use with ROS. There's Ignition, which was a rewrite of Gazebo, in a similar way to what happened with ROS 2: there were a few requirements people wanted to meet, and Ignition was a decoupled rewrite of Gazebo, but then there was a naming conflict, so now it's a bit complicated. I wanted to put the name here so that if you see Ignition around these days, you know it's basically Gazebo. You won't see the name much anymore because it was renamed after a while: Ignition is now named Gazebo, and the original Gazebo is now Gazebo Classic, known as the old Gazebo, right? So just in case you see this name around, you can think of it as Gazebo. There are two versions, and the one that is now called Gazebo is the one that was initially called Ignition. A bit complicated, but yeah, I had to label it.

And then we have Open-RMF, which is the software we are developing here in Singapore, in collaboration with many local institutes and companies. Of course we use Gazebo for simulation and ROS for the communication part, but it's mainly for infrastructure: to allow different vendors' robots to communicate with each other and also with building infrastructure. It has task management, traffic management, and all these capabilities.

Okay, so a bit about ROS. There's a video here, but in the interest of time I'm going to skip it. It's a very nice video if you have time: the Red Hat folks made a documentary series about robotics, I think it's called How to Start a Robot Revolution, and it's got four parts, each about 10 minutes. It's much nicer than the introduction I can give you here, so if you have time, have a look at it.

So, what is ROS? This is how we always present ROS to people. Basically you've got four key features in ROS.
One of them is the plumbing. This is the basic thing that ROS gives you. In robotics we use this component-oriented programming, so you basically have different components that talk to each other, right? If you have a camera, you would have one component capturing images, and this component communicates with another one, and those images get processed in that other component, which might, I don't know, detect a person or detect a hand or do some processing on the image. And there will be different components talking to each other. So this is the plumbing problem, and this is mostly what we're going to try out today: we're going to try this plumbing and see how these components communicate with each other, mostly through the terminal. We're probably not going to be writing any code.

Then we're going to maybe play around a bit with the tools. There are certain tools built on top of this, created by the community and by Open Robotics itself, like RViz and rqt, that allow you to introspect what is happening. They also let you visualize frames; you can see an entire kinematics tree, depending on what is going on. In robotics there's a lot of debugging and a lot of problem solving, and you really need these tools to introspect what's going on, right?

Then you've got the capabilities. As I said, a lot of people are doing different things in robotics. You get the folks doing navigation, that's the Nav2 stack, an entire group of people doing that thing, and they're not necessarily related to the folks doing control, or the folks doing, for example, the stack for moving arms, kinematics (MoveIt). That one, for example, has an entire company, PickNik, working on it, at least in part; there's always a community behind it too. So these are the capabilities you get. And then of course there's the ecosystem that the entire open source community is offering: you've got the courses, you've got the conferences, this entire ecosystem where you can just reach people and say, hey, I have this problem, I'm trying to install this, it's not working on my computer, can I use this thing? There are working groups you can join. So it complements the entire set of capabilities you can get with ROS, right?

So, a little bit of the ROS 1 versus ROS 2 comparison. I think one of the main things everyone notices when moving from ROS 1 to ROS 2 is in these communications between components that I was talking about. In ROS 1 there used to be a process called the ROS core (roscore), and you had to set it up all the time. It was the guy in charge of telling which components are where and how to communicate with each other, right? So you say, hey, camera, give me the image; you need someone to tell you where the camera is, which process is running the camera, and then it's like, I can call this function, get image, and then I can get it, right? That roscore had to always be running. It was a single point of failure and everyone was complaining, right? If this roscore goes down, my entire robot goes down; it's a system that is not very robust. So one of the biggest changes in ROS 2 is that we use DDS for these communication channels, and DDS has a distributed way of doing discovery.
So with this distributed discovery, you don't need a roscore; you don't have a single point of failure. It's not perfect, and it also gives you some problems when you cannot broadcast on the network and things like that, but it's way more robust. And the other thing the new design comes with is that it is agnostic to the DDS implementation. There are different vendors that provide this DDS layer, and there is basically a ROS middleware layer (RMW) that gives you an abstraction, so you can use ROS 2 with different DDS vendors. You can see here that you use the same standard ways of communicating, and there are different implementations underneath: Fast DDS is one implementation of DDS and Cyclone DDS is another one. Sometimes, depending on how the vendor implements DDS, they might have better or worse capabilities for your application, and some people decide to go with one, some with the other, depending on metrics or whatever capabilities they need from the vendor. Sometimes a vendor doesn't even have the security plugin implemented; if you need security you might go with Fast DDS, since I believe Cyclone didn't have it, or added it only later. So that is one good example of why you'd pick one over the other.

Then, focusing a bit more on ROS 2: basically you have your hardware, and you run your operating system. One thing that always confuses people: ROS stands for Robot Operating System, but no, it's not an operating system; it basically runs on top of a real operating system (I know, you have to be careful when you name things, right?). It's basically a framework you can use for robotics. Then you have the DDS layer, which is the one that makes the talking to each other possible; you have the middleware abstraction layer (RMW), which is what lets you swap that out; and then you have the different language client libraries, so you've got rclcpp, rclpy, and these libraries allow you to write components in different languages. So you can have one component in Python that is talking to another node written in C++, and you can have this entire ecosystem that is totally heterogeneous. This is one of the things that actually allows ROS to be so distributed: you have the folks doing navigation with their own components, whatever design and languages they decided on, and then you have other folks doing control with a different design, and you can integrate all of this thanks to this model, right?
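As a small aside, a minimal way to see this vendor abstraction in practice from the terminal is to set the RMW_IMPLEMENTATION environment variable before starting your nodes. This is a sketch; it assumes the alternative RMW package (for example ros-humble-rmw-cyclonedds-cpp) is installed alongside the default Fast DDS one.

```bash
# Pick the DDS vendor through the ROS middleware abstraction (RMW).
# Assumes ros-humble-rmw-cyclonedds-cpp is installed next to the default Fast DDS.
export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
ros2 doctor --report | grep -i middleware   # confirm which RMW is active

# Switch back to the default vendor (Fast DDS) for new nodes:
export RMW_IMPLEMENTATION=rmw_fastrtps_cpp
```

Nodes only read this when they start, and in practice it's simplest to keep every node in a system on the same vendor.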
So again, a little bit about the releases I mentioned before. ROS 1 has Melodic, which only has a moment of support left and then it will die; and then there's Noetic, the latest ROS 1 release, and it's the last one. This one is actually supported for 5 years, and after this there will be no more official support for ROS 1. Okay, and these are the releases for ROS 2, so we've got Foxy here and so on. You can see the Ubuntu versions; as I mentioned before when we were doing the setup, they usually match an Ubuntu version. You can always install from source, but the compiled binaries are created for a certain Ubuntu version, usually the latest one around when we are creating the release, and I believe the life of these releases matches the life of that Ubuntu version: if the Ubuntu lasts this long, then the release will last this long. And now we have this rolling release. The rolling release is basically where we push the latest changes, and every time we are going to do a stable release we just branch out of rolling, taking all the packages that are the latest; so when Galactic comes, we create a branch out of rolling.

Now, how do these components actually talk to each other? The basic mechanism is topics. Say you have a camera node that just grabs images all the time and publishes them on a topic. Topic names look like a slash and then a name, so for example the topic would be /camera, and that would be the topic where the images are being published. From the node that is subscribing I can say, hey, subscribe to /camera and get the information here, and I will get the data in whatever data structure we are using; we will be getting all these images that the camera is capturing. That is how to subscribe. And it doesn't have to be one-to-one: the camera can be publishing these images, but you can have different nodes doing different things with them, right? You might have one node detecting humans and another node detecting tables. You can distribute this as much as you want; each one just needs to subscribe to the camera topic.

Then we have services, which, for those of you that have used RPCs, are basically like calling a function that lives in the other node. In the camera example, instead of having the camera publish the image all the time, you can use a service and say, hey, give me an image now, and you get a response. It's a different pattern of communication. And at the same time you can have different nodes requesting the same thing, and the service provider will answer them all in the same way.

And then we have something that is a bit in between, called actions. Actions involve an action client and an action server, but you can see them as a mix of the other two. You make a request for an action, which is meant for long-term goals. Let's say I have a robot and I want it to get to the table: I say "go to table", I make this request, and I get a response to that request, like the service call we did before; the robot says, okay, going to the table, so I know the robot started going to the table. But then there's also a feedback topic, something like /status, and maybe in this status the robot says it's moving.
Or the status could be stopped; maybe someone cancels it in the middle, and the robot says stopped, or it's still going to the table but its status has changed. The feedback topic allows you to keep getting information about what is going on, and at the end there will be a result, basically another response that says, hey, I finished that thing, and the action is finished. So this is meant for long-running processes, a bit more than just one piece of information.

So let's get a bit into the ROS thing. Let me check how much time we have. Okay, for those of you that got the VirtualBox image, you should have something like this, right? And if you go to the GitHub page, which is the one we're going to follow, there are a few tabs; you can get the install instructions there too, there's a link to them. One thing that we usually do: in ROS it's very common to have many versions of ROS installed, but we don't recommend using them at the same time. We don't recommend having a robot talking Humble, or some parts of the robot in Humble and some other parts in Foxy; we don't recommend that at all, because compatibility is not guaranteed. You can try, but usually you just mess things up mixing several versions, right? Because of this, they're all installed under /opt/ros, and you will have the code name of the release in there. So what we do is that before starting, you have to source this setup file, which sets up your terminal environment to access all the binaries that ROS is offering and all that. That is the first thing you have to do. If you know you're going to be using a certain distro for a very long time, you can put it in your .bashrc and then it's always there, right? But personally I mess around with a lot of them, so most of the time I just make an alias for the source command; "humble" is my alias and then I just run it.

So let's do this on the virtual machine. We open a terminal, and if I try to type something like ros2, there's nothing there, right? So the first thing we have to do, every time, is source /opt/ros/humble/setup.bash (there are a bunch of setup files in there). Once you do this you have the ROS environment ready, right? Now you can see that there's a ros2 binary, and if you run it you can see the options you get: you can do actions, you can do bags. Bags are basically a way to record what is happening on these communication channels. I said we do a lot of testing in robotics, right? So one thing you can do is record all these topics, and once you have recorded them you can replay them. Let's say you're working in the lab during the day, then the lab closes and you have to go home; you don't have access to the robot anymore, but you have a recording of all these topics, and if you re-run all the topics it's basically like working with the robot again. It also allows you to reproduce the exact same thing that happened, which is very important in robotics, because problems are very stochastic, right? You can't really reproduce what happened at that moment, because maybe someone passed by, the lighting was not the same, a lot of things can happen.
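To make that concrete, a typical first terminal session in this workshop looks something like the sketch below (paths assume the Humble binaries installed under /opt/ros; the bag name is made up):

```bash
# Set up the environment for this terminal (or add the line to ~/.bashrc,
# or wrap it in an alias such as: alias humble='source /opt/ros/humble/setup.bash').
source /opt/ros/humble/setup.bash

ros2 --help            # list the available subcommands (action, bag, node, topic, ...)
ros2 topic list        # see which topics are currently being published

# Record everything that is going on, then replay it later as if the robot were still there.
ros2 bag record -a -o lab_session
ros2 bag info lab_session
ros2 bag play lab_session
```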
A ros bag allows you to reproduce that exact same moment, so you can see what error actually caused whatever broke at that time, right? Corner cases are very, very important in robotics. Then there's the daemon command, doctor, and others, but the most important ones are: ros2 node, which allows you to mess around with nodes; ros2 run, which basically runs stuff, the binaries that are the ROS nodes; and ros2 launch. Launch files used to be XML; there's a new option to write them in Python now, but originally they were XML, and basically they are a way to set up a whole bunch of ROS nodes that you want to run, with all their parameters, right? Because usually you end up with very complex setups. Let's say you have a big robot: this guy can run maybe 20 or 30 ROS nodes with all their parameters, and you don't want to keep typing all of that every time. So you have these launch files, which are descriptions of "I want to run this ROS node with these parameters, and I want to run this other one"; you do ros2 launch, pass it a launch file, and it sets up the entire thing in one go, right? So that is a very important one. And then you have topics, services and actions over there. For example, if you do ros2 topic, which is a very typical one, you can see that you can list the topics; you can also publish and subscribe directly from the terminal, and you can echo whatever is happening on a certain topic, right?

We don't have much time, but I want to show you how to run a node. If you go to the first exercise: there's a package called turtlesim, which is the one you will find if you go to the ROS tutorials. It's a very simple example that you can play around with. One thing you will also notice is that we use a lot of terminals when running ROS; every time you run a node you might have to open another terminal and all that, unless you use launch files, but you will see these robotics folks opening a bunch of terminals. And then I have to source again, remember, and then I can do list. If I do ros2 topic list you can see that I have some turtle topics here, right? These turtle topics will actually allow me to move this turtle around. If you start playing around with these topics, you can send different messages that will make the turtle move. It also offers services, so you can play around with the services too, that's another option, and I believe it also offers an action: that one will do a rotation and keep giving you feedback until it's done.

I think we don't have much more time, so I'll leave it there. Again, sorry for the delay, but it takes a long time to get everything running; that's why I wanted to have this GitHub page. Whoever has ROS 2 running can go and play around with the turtle; that would be a very good way to start. And I also want to mention that at the bottom there's a small part where you can play around with code: you can create your own workspace and then start working in your own workspace.
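For reference, the kind of turtle-poking that exercise walks through looks roughly like this from two sourced terminals (the topic, service and action names are the ones turtlesim itself advertises):

```bash
# Terminal 1: start the demo node.
ros2 run turtlesim turtlesim_node

# Terminal 2: talk to it purely from the command line.
ros2 topic list                               # /turtle1/cmd_vel, /turtle1/pose, ...
ros2 topic echo /turtle1/pose                 # watch the turtle's position stream by
ros2 topic pub --once /turtle1/cmd_vel geometry_msgs/msg/Twist \
  "{linear: {x: 2.0, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 1.8}}"   # the turtle moves in an arc

ros2 service call /spawn turtlesim/srv/Spawn \
  "{x: 2.0, y: 2.0, theta: 0.2, name: 'turtle2'}"   # request/response: spawn a second turtle

ros2 action send_goal /turtle1/rotate_absolute turtlesim/action/RotateAbsolute \
  "{theta: 1.57}" --feedback                  # long-running goal that streams feedback until done
```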
So basically, hopefully you got an idea of how the communication works, and then it's just a matter of you playing around with the rest of the commands. Whoever has trouble installing, running the virtual machine, or anything related to ROS, let me know; I'm going to be here in the robotics track until it ends, and I'm going to be at FOSSASIA until the end too. I think we're going to have a space later, after the conference, but feel free to chat with me in the coffee breaks or anything, I'll be happy to help. Thank you, guys.

Okay, a very good morning to everyone. My name is Yadunund, I go by Yadu for short, and today I'm excited to talk about a personal research project of mine. It's something I'm curious about and it's still at a very experimental stage, and the idea is: can we map in digital twins for autonomous navigation in the real world? A bit about myself before I begin: as Marco mentioned, I'm a software engineer on the Open Robotics team at Intrinsic, and I help develop, maintain and release several core packages in ROS 2 as well as Open-RMF. It's really my privilege to be here today to talk about my interests. Before I start, can I get a show of hands: how many of us here have worked with mobile robots before, or even better, done any kind of mapping or autonomous navigation? Oh wow, that's a lot of people. Okay, so I'll try not to smoke as much and try to be factual. Let's get into it.

Today I'll be talking about a few things. We'll start with a high-level overview of what autonomous navigation is and the role of mapping in this process, some of the practical challenges with mapping in the real world, and this concept of digital twins: how do we generate them, how do we build them. Then we'll get into mapping and navigation, and we'll talk about some pitfalls I've observed along the way.

It sounds like most people here already know what autonomous navigation is, but for the sake of completeness: in robotics or self-driving, autonomous navigation is basically the process of getting an agent to move from one point to another, where your only input is telling it where you need it to go or arrive at, and the agent is able to figure out how to get there. Along the way it's avoiding obstacles, it's finding the most optimal path, and it's respecting certain constraints you give it, like how fast it can go, and so on. Several components go into autonomous navigation. The first is you need an agent, and typically this agent needs to be able to sense the world around it. Let's talk about mobile robots for today: sensors come in a variety of forms, you have lidars, cameras, stereo cameras, etc., and these help the agent develop an understanding of the world around it. We have our eyes, stereo vision that helps us perceive the world and infer depth; similarly, in robotics we have sensors that can do the same. The second is we need a map. This is generally true for indoor navigation: we need some sort of map to help the agent know where it is in space and how it can move around the space. A map is basically a representation of the world the agent is moving around in. The next important component is localization: once we have this map, this representation of the world, an important part of navigation is for the agent to figure out where it currently is in this representation; this process is called localization, the ability to infer where it is right now in this representation of the world.
And finally there's a lot of planning and control that goes into navigation: once you give it a goal, it needs to plan a collision-free path to the destination, and the wheels have to move to make it follow this path, and that's where the control comes in. So I would say that's a very high-level overview of the basics of autonomous navigation.

So what's mapping here? Mapping, or what's commonly practiced, simultaneous localization and mapping (SLAM), is really solving the problem where we need to construct this digital representation of the world for the autonomous agent, and we need to do this while the space is initially unknown. We just throw a robot into the space, it doesn't know anything about it, and it needs to construct some representation that it can then use to firstly localize, then plan, and then navigate, right? This typically relies on a combination of different sensors to perceive the world. Lidars are quite popular in robotics and self-driving cars; these are basically laser-based sensors that shoot out a beam of a certain frequency, a light wave, which reflects off a surface and comes back, and that can tell you how far away the hit was. Sometimes we can also infer the intensity of the light that came back, so we can tell whether it's a solid surface, a glass surface, shiny, matte, etc. Lidars are getting really advanced these days; they can do 360-degree scans, so it's not just a single point that goes out, it shoots around in 360 degrees, and there's a field of view in both the vertical and horizontal directions. We have IMUs, inertial measurement units, that keep track of the current kinematic state of the robot: how fast it's moving, how fast it's accelerating, etc. And there are cameras, a whole range of them, from monochromatic cameras to stereo vision to depth sensors.

Typically the output of this mapping process is this representation of the world. It sounds pretty abstract, but from a practical standpoint the output is typically generated in one of two formats. For 2D representations of the world we use something called an occupancy grid, and that's what you see on the right-hand side here. You can think of it as a PNG image, just grayscale: if a pixel is black it means there's an obstacle, if it's white it means it's obstacle-free, and gray areas usually mean unexplored or uncertain. It's a very simple representation, but you can think of it as generating a grid of the world: imagine this 3D space, and we project it down at a certain height. I choose a level, maybe one meter above the ground, imagine cutting a plane through this room right now, and if you project that down, that's what this image looks like. That's basically how it's generated. Another common storage format is dense 3D point clouds: if you have a sensor setup that can generate a 3D description of the environment, with 3D lidars or other kinds of sensors, you'd probably store it in a point cloud format. It's just a whole bunch of 3D points, and it can go up to millions or maybe tens of millions of points. There are other forms I haven't mentioned here; you can even do visual-based SLAM, and in that case you're storing keyframes: you're looking at this image right now and you want to store this corner of the table, certain key things you see in this image, and this will typically be stored as a bag of words or something similar, but those are details we can ignore for today.
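For the 2D case mentioned above, the occupancy grid usually ends up on disk as exactly that grayscale image plus a small YAML file describing its scale and origin. Here's a sketch of the conventional map_server format (file names and values are made up for illustration):

```bash
# The map is just an image plus metadata; the metadata file looks like this.
cat > office_map.yaml <<'EOF'
image: office_map.pgm        # the grayscale grid itself (black = occupied, white = free)
resolution: 0.05             # metres per pixel
origin: [-10.0, -10.0, 0.0]  # x, y, yaw of the lower-left pixel in the map frame
occupied_thresh: 0.65        # darkness above this is treated as an obstacle
free_thresh: 0.196           # darkness below this is treated as free space
negate: 0
EOF
```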
As a robotics engineer I've spent a lot of time working with mobile robots, and typically when we want to deploy a robot somewhere, mapping the space is the first thing you do. Personally I've been very frustrated by this process, because it requires you to manually drive the robot around, either tele-operating it or, if it's a big enough robot, sometimes riding the robot to drive it around the space, and generate this representation of the world. It takes time, especially in really large environments: something like this room can be done in a few minutes, but if you want to map this entire institute, for example, it's going to take a lot of time. Furthermore, if the layout changes, for example we close off a certain section, we add new furniture, or something changes physically, you probably have to remap, which means you need to come down and do this process all over again. So this gave me an idea to explore, which was basically: can we map in simulation and then take the output of that and run it directly in the physical world?

And yeah, video games: one reason I play them is that although I like the graphics in the real world, the gameplay often isn't as fun, and I feel the same way about mapping. It's something that's annoying in the physical world, but I think the gameplay can be improved in simulation, and this is where Gazebo comes in. Gazebo is another open source tool that we distribute; I'm sure a lot of you are familiar with it. It's a physics-based simulator that does physics and rendering, and it has a whole bunch of plugins that try to emulate the physical world as closely as possible. We call this high-fidelity simulation: you're not just able to reproduce visuals from the world, but also interactions, contact forces and various other physics elements.

So how does Gazebo help? Like I mentioned, it's a simulator, and with simulators you can define the rate at which the simulation runs. There are some constraints of course, depending on the hardware you have, but you can typically run the simulated world at a faster rate than the real world. Say you run the simulation at 10 times real-time rate: if I'm sending a command to move the robot for one second, it might move as if it had been getting that command for 10 seconds in simulation. So you can move the robot faster, and if we can move the robot faster, theoretically we should be able to map faster, collecting all that data at this higher rate. There are some other benefits too. Like I said, we can replicate the physics in simulation, and the reason it's specifically Gazebo is that we can generate these very realistic worlds; there's a cool talk I've linked here by my colleague about photo-realistic simulation, how we can build simulation environments that look very similar to the real world. Another cool part about Gazebo is all of these plugins. A plugin, you can think of it as basically a component that emulates a physical system, and there are a lot of model plugins that emulate various kinds of sensors. In this case we're interested in lidar sensors, and there are some images here of lidar sensors in Gazebo; you can see the rays hitting an object and detecting an obstacle. And the cool thing is these sensor plugins are very customizable: you can define the type of noise you want, say this sensor has Gaussian noise with this mean and this standard deviation, which you can typically find in a data sheet.
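A rough sketch of the faster-than-real-time point from above: in Gazebo Classic (which is what I understand these demos use) the target rate is the product of the physics step size and the update rate, so a world file can ask for roughly 10x real time like this; whether you actually get it depends on your machine.

```bash
# Snippet to place inside the <world> element of a .world/SDF file (written out here only as a sketch).
cat > fast_physics_snippet.xml <<'EOF'
<physics type="ode">
  <max_step_size>0.001</max_step_size>                 <!-- 1 ms physics steps -->
  <real_time_update_rate>10000</real_time_update_rate> <!-- 10000 steps/s => ~10x real time target -->
</physics>
EOF
```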
Modeling that noise is really important, because you don't want to get perfectly accurate results out of this simulation; no sensor is that accurate in the real world, so being able to model this noise really helps. And the other cool part about Gazebo relates to the challenge of environments that change: if we can quickly rebuild the simulation world, deploy the robot, and do the mapping at this faster rate, I think that solves a lot of the annoyances I talked about earlier.

So what's my plan here? This is the idea I came up with and started exploring. Can I build a photorealistic digital twin of a real environment, so start with a Gazebo representation of a space that actually exists; can I run all my SLAM algorithms and ROS nodes and all of that with the data from the simulation, so the sensors in the simulation produce data and I use that data to generate this map; and can I take this map, which is the occupancy grid I talked about before, give it to a real robot, and have the real robot move around the real space? That's the research question I'm trying to explore here.

The first question you may have is how do I build these digital worlds quickly, and a tool I want to share is called the RMF traffic editor. It comes from the Open-RMF project that Marco alluded to earlier, and it's a great tool: it helps you very quickly build 3D environments, and the only input you need is a 2D floor plan. You start off with a floor plan, such as a PNG image you can download from the building website or get from your landlord, and this tool lets you annotate these 2D floor plans. You're basically annotating features you see in the floor plan, like here's where a wall is, or here's where a door is, etc., and then you're very easily able to take this annotated floor plan and quickly generate a 3D environment that has all the physics and the visuals. This slide covers some of the features of traffic editor. There are a few ways to add walls: in the first GIF here we're highlighting the walls in the floor plan; in the second GIF we're adding doors, and we support different kinds of doors, swing, sliding, telescopic, most of the kinds we see at least in Singapore. You can annotate where the floor is, and you can even give different textures to floors, like this is a marble floor, this is a carpeted floor, etc. And then there are the models; that's the most important part, because we need the models of the things we see in the real world, and there are very quick ways to drag and drop thumbnails of models that will eventually get generated as the physical model. I won't go into too many details of how to use traffic editor; we did a ROSCon workshop last October where we used this tool, there are some slides from that workshop, and we also have documentation linked here, so that's something worth checking out later.

Once you have this annotated floor plan, we have another tool called the building map generator. It takes the annotated file, which is a YAML file, and basically auto-generates the Gazebo world file (a .world file). It's as simple as running a script, and what we typically do is automate running that script when we build a map file; we have some hooks that run it automatically, so it's a very seamless process: you're just having fun with your imagination, annotating this map; then you save, you build, and you can immediately open this 3D world in simulation.
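The generation step itself is essentially a one-liner. Something along these lines, though the exact argument order may differ between releases, so treat this as a sketch and check the rmf_building_map_tools documentation (file names here are placeholders):

```bash
# Turn the annotated floor plan (office.building.yaml) into a Gazebo world.
# Argument order is from memory; verify against the rmf_building_map_tools docs.
ros2 run rmf_building_map_tools building_map_generator gazebo \
  office.building.yaml office.world office_models/
```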
So that's what I did here. This is a floor plan of our office in Singapore. Over about a month I started with the floor plan and, using my limited imagination, tried to reproduce it, annotating the floor plan with the different elements, and here are the results. I'd love to get feedback on this, and all of this is open source by the way, it's in our RMF demos; there's probably a link to that later. I'd love to hear how close you think it is to some of the images; it's hard to tell if you haven't been to our office, but I think it's pretty close. Maybe I'm biased.

All right, so the next thing is: we have this digital world, how can we now generate that map? The cool thing about ROS and Gazebo is that whatever code you want to run on the real robot, you can run in simulation. There are no differences; maybe you just have to tell it to use simulation time instead of your computer's clock, but the rest of the code is all the same: the same nodes, the same logic, the same topics, everything's the same. It's not like this is just a visual simulation and then when you have to run something in the real world that's completely different code. So whatever code I would use to map the real world, I'm using the same nodes and the same launch files here to start up the different nodes I need for mapping.

But before I can do that, I need to spawn this robot in simulation, and there are a couple of tweaks I make to benefit from simulation. Firstly I have to make sure that the lidar model in simulation is comparable to the one in the real world, but then I also do some hacks, some cheat codes. I increase the range of the lidar, because my real robot has a range of maybe 12 meters, which means that if I want to map a wall beyond 12 meters I have to drive closer to that wall; but in simulation I can just say no, my lidar has the same noise but it can actually hit 20 meters or 100 meters ahead. So that's really cool, I have to drive less now in simulation; that's one of the hacks I do. I also increase the field of view: maybe my real lidar has, I don't know, 270 degrees of horizontal field of view, but in simulation I can just say I have a 360-degree lidar. I'm sure lidar manufacturers hate this one quick trick. The most important thing here is that I need to keep the lidar at the same height as it is on my physical robot. I mentioned that the way this 2D SLAM works is we cut a plane across a certain height of the 3D world and project it down, so I want to make sure my lidar in simulation is at the same height as the lidar on the real robot; that's just adjusting some of the model values. And then you run the SLAM algorithm and you save the map, and saving the map again is just saving that occupancy grid. So here's a cool video, I should have had it playing while I was explaining. This is the robot spawned in simulation; all the blue lines you see are the lidar hits, and as you can see I've cheated there with a longer-range lidar and a greater horizontal field of view, so I can map really fast, and I'm probably moving the robot at a much faster rate too. So I'm done, and that's the map that gets generated.
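The lidar "cheat codes" above are just a few numbers in the sensor's SDF. Here's a minimal sketch of what I mean; element names follow Gazebo Classic's ray sensor schema, and the actual values (range, sample count, noise) are the made-up "cheats", not anything from the talk:

```bash
# Sketch of the simulated lidar definition: longer range, full 360 degrees,
# the same mounting height as the real robot, and datasheet-style Gaussian noise.
cat > sim_lidar_snippet.xml <<'EOF'
<sensor name="sim_lidar" type="gpu_ray">
  <pose>0 0 0.25 0 0 0</pose>            <!-- keep the same height as the real lidar -->
  <ray>
    <scan>
      <horizontal>
        <samples>720</samples>
        <min_angle>-3.14159</min_angle>  <!-- 360 deg instead of the real 270 deg -->
        <max_angle>3.14159</max_angle>
      </horizontal>
    </scan>
    <range>
      <min>0.1</min>
      <max>100.0</max>                   <!-- real sensor: ~12 m -->
    </range>
    <noise>
      <type>gaussian</type>
      <mean>0.0</mean>
      <stddev>0.01</stddev>              <!-- take these from the real datasheet -->
    </noise>
  </ray>
</sensor>
EOF
```

Saving the finished map is then the usual one-liner, for example `ros2 run nav2_map_server map_saver_cli -f office_sim` if you're on the Nav2 tooling (an assumption on my part; the talk doesn't name the exact SLAM package).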
[Audience question about how the wheels and friction are modeled.] Yeah, that's a great question. As part of the model description you can specify the friction parameters of the contact points. There is a certain representation of this robot, it has wheels, and depending on the physics engine (we use ODE, which is an open source physics engine) you can give it certain contact parameters: the coefficient of static friction, of dynamic friction, and some other contact parameters you can actually tune.

Continuing on: so how good is this map? The image on the right side is the one I got from the simulation, and the image on the left side is what I generated with the real robot in the real space. I think it's pretty close. Obviously the real robot has a bit more noise, so maybe I didn't model the noise as well as I should have, and maybe there's a bit more uncertainty in certain areas, and the lidar hits are much cleaner in the simulation, but that's fine: even when you drive in the real world, the real world is never going to be the same as when you mapped it. There are always going to be humans walking around, there are going to be some differences, and we rely on localization a lot to counteract these differences and still maintain an understanding of where we are right now. So I think it's not bad.

Okay, so the proof is in the pudding: does it really work? I'm going to play this video, and what's happening here is that the camera feed on the bottom left is from the camera mounted on the real robot. I sent the map to the real robot and we're running the navigation stack, and the first thing you need to do is localize the robot: you have to give it some help about where it is initially starting off in this giant world. I roughly know where it is in space, so I give it an initial localization, and you can see a whole bunch of green dots around it. This is the probabilistic localization we do with AMCL: there's an initial belief of where we are, and as the robot moves around, the algorithm updates to firm up its understanding of where the robot is, so you see those green points converge, strengthening the belief of where the robot is in this world. So I'll play the video. At some point I had the thought of also opening up the digital twin, to get a cool side-by-side of the real camera feed and the digital twin. So this is the robot initially localized; I give it a few goals to see if it can move, just waiting for that belief; then we have the camera; then I give it some goals that are initially nearby to see if things work, and that worked surprisingly well, and then I give it a few other goals. So we have a robot autonomously navigating in the real world with a map that was generated in simulation, and this is the digital world that we used for simulation: we're right now in this corridor here, and then I'm giving it other goals. I thought this was pretty cool. And here's a cooler view of that localization: there's a lot of lidar noise so it doesn't know where it is at first, but then it immediately localizes, you can see AMCL immediately snapping, the robot figures out where it is, and then it's able to drive to my desk in the office. Again, this is very early work that I've been exploring; I think there's great potential, a lot of things can be improved of course, and I'm really hoping to talk to more people here who have experience with mapping and find ways we can make this part of ROS even better.
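The talk doesn't spell out the exact launch setup on the real robot, but with the standard Nav2 bringup the "hand the sim-built map to the real robot" step looks roughly like the sketch below; the map path and the initial pose values are placeholders:

```bash
# Bring up the navigation stack on the real robot with the map built in simulation.
ros2 launch nav2_bringup bringup_launch.py map:=$HOME/maps/office_sim.yaml

# Seed AMCL with a rough initial pose (normally done with RViz's "2D Pose Estimate" button).
ros2 topic pub --once /initialpose geometry_msgs/msg/PoseWithCovarianceStamped \
  "{header: {frame_id: map}, pose: {pose: {position: {x: 0.0, y: 0.0}, orientation: {w: 1.0}}}}"
```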
How are we doing on time? Do I have time for a couple more slides? Yeah. This is the point where, at the Oscars, they'd start playing the walk-off music. So, a couple of gotchas I experienced during this process. Obviously one of the big limitations is the models: to build this digital twin you need to have models of the things that exist in the real world, and that takes effort; honestly that's the most challenging part of this process. But the community has contributed a lot of models, and we host a lot of them on Fuel, which is like our Dropbox for models; there's a whole bunch of them, and everything I used to build the digital twin is out there, so that's a great place to start. And if you end up building models and you want to contribute, that's where you should upload them, so others can also reuse them.

And some nuances: Gazebo runs a physics engine, and the physics matters for the sensors. One thing I realized was that initially I used a certain lidar model in simulation of the type called "ray", and I found out later that it uses the CPU alone to figure out where each ray hits; each of these lines is a ray, a laser beam that emanates from the sensor and hits some point in the world. With this type of sensor I realized that, oh, it's actually just hitting the collision models, the collision meshes of the models. Every model you can think of as having two components, two different meshes: there's a collision mesh and there's a visual mesh. The visual mesh helps render what it looks like; the collision mesh can be the same as the visual mesh, but something we do to improve the performance of our simulations is to use very simple collision meshes, primitive boxes that roughly encompass the bounding box of the model, and this really helps speed up simulation. In the RMF project we're simulating really big worlds, so this is important to do, and our technical artists helped us craft these really nice simple collision meshes. But it's actually a problem with this type of plugin, because the rays just hit the collision mesh, and if you map with this you end up with these kinds of boxes; this box here is the back of this chair, that's actually its collision mesh. So the map is not going to be accurate, and the robot could not localize. All I had to do was change that one line from "ray" to "gpu_ray", and now it's using my GPU, and with the GPU we're able to do all kinds of fancy ray casting; now you can see that the rays actually penetrate the collision mesh and hit the visual mesh, and that helps generate a more accurate representation of the world.
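That one-line fix, sketched as a shell command against whatever SDF/URDF file defines the robot's lidar (the file name here is a placeholder, and a blanket sed like this would touch every ray sensor in the file):

```bash
# Swap the CPU ray sensor for the GPU one so the beams are traced against the visual meshes.
sed -i 's/type="ray"/type="gpu_ray"/' my_robot.sdf
grep -n 'gpu_ray' my_robot.sdf   # confirm the sensor type changed
```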
And with that I'm at the end of my talk; I'm happy to take any questions. Let's leave this playing. Yeah, please go ahead.

[Audience] Given that you have built a digital twin of the world, what would happen if you took that digital twin, sliced it digitally, and converted the sliced mesh of the result directly into an occupancy grid? Would that work?

Yeah, I think that would work. If you export this entire digital world as a single mesh, take a slice of that mesh, and somehow generate an image from that view, it would work. But the problem there is you're not accounting for the sensor noise, and that's the most important thing with mapping. If you had a way to factor that in, that would be pretty cool.

[Audience] Sorry, if I can follow up on that a little: does the sensor noise actually affect the output grid?

It definitely does. If you look at the map generated from the real robot, even these walls here are kind of a wavy line, because the sensor has a certain noise; it can't be super confident about exactly where the laser hit. So there's always this noise that's important to model, and the goal is to do that in simulation; I think that's the challenging part, and the cool thing is that Gazebo has a way to model this kind of noise.

[Audience] I'm just wondering, in the situation of a deployment, why would you want a noisy map? If you can do a perfect 2D slice and get an ideal map, wouldn't the navigation still be better, even if there's noise in the real world?

Yeah, the problem is that your localization algorithm needs to be better in that case. If you have a very perfect slice as your occupancy grid, when you run your robot's navigation stack with that map, what you're doing essentially is getting live sensor data: the laser is actually hitting all these points in the world, and the goal is to take that sensor input and find out which part of the map it actually matches, right? So if your map is super perfect but your sensor on the real robot has some noise, the localization is fighting extra hard just to snap to the right position. Ideally, if you spend more money on good sensors and you know you have really good odometry on your robot, you could get away with that; I think it's ultimately a cost trade-off. Thank you.

[Audience] Two questions. Could the robot potentially be a drone, I mean something that's flying, like in Terminator where there are flying drones around? And the second question: we took a bike ride the other day, and on the way back there was an intersection that is not yet on Google Maps. If we treat that car as your robot, would it end up on the sidewalk, or would it be able to navigate its way around?

Yeah, thanks for the question, so two questions there. One is can we do the same with a drone, a flying type of robot, and I think it's definitely possible: we have drones that can be fitted with the same type of sensors, lidars, odometry, that can map the world, and people are already using this type of technology for drone navigation indoors. So I think it's certainly possible; probably the representation of the world would be different. Here we're dealing with 2D occupancy grids; there you might be storing a point cloud, or maybe you're storing a feature map or the bag of words I talked about with visual SLAM. The second question is about navigating in unexplored territory, and I think this really depends on the navigation algorithm itself. With autonomous cars you do need a map, but you're also relying on things like GPS, etc., to update your belief of where you are in the world. I think it could be possible; it really depends on whether the algorithm allows the agent to go into territory that hasn't been mapped.
[Audience] Okay, so I see there are two extremes you can go to. One is making it more like a gaming simulation, where you have an absolutely amazing, photo-realistic digital twin to work with; the other extreme is that you just go without any simulation, like the gentleman mentioned, and just take a cut of the model to figure out the map. Which way do you foresee this project going?

I'm definitely leaning towards the simulation approach, because it doesn't restrict you to just relying on 2D occupancy maps. Sure, the slice approach works for 2D maps, but the simulation approach even lets you try other, more sophisticated navigation stacks. What if you're doing visual navigation, pure vision-based navigation that takes in stereo images to figure out where you are? Some delivery robots, for example, look upwards, at the ceiling, to figure out where the robot is; that's called StarGazer-style navigation. If you want to do stuff like that, you want to try it out in simulation, and you want the same kind of representation as the real world. So I think there are more benefits to keeping it general, but if you want to save computation, the slicing idea is also another approach. The simulation would also work if you want to try 3D SLAM, or if you want to test perception algorithms, some deep-learning inference or something else: you can use the images from the simulation and run your inference on them. So I think there are benefits to this. [Moderator] We don't have more time for questions.

[Next speaker] ...It didn't exactly start in 2015, but around then you start to see most of these quadrupeds. We have HyQ from IIT, a research institute in Italy, we have StarlETH, we have the very first version of the Cheetah from MIT, and we have a very early iteration of Spot from Boston Dynamics. Fast forward to 2020, and we start to see more industrial versions of these, and we start to see these robots in the wild: in fact, in 2020 you could see Spot in Bishan Park making sure people actually kept safe distancing, we have ANYmal from ANYbotics, which is a spin-off from a university research lab, and we have the Laikago from Unitree, which is from China. And you will also see, not just from research labs and industry, a booming, emerging trend in the open source community, not just in software but in the hardware domain. I think one reason is that there's been a huge boom in hobbyists and enthusiasts getting access to hardware like 3D printers and similar tools that allow us to build such capabilities.

So the journey started in 2015; I should say this is really what inspired me to build the robot. I was really curious about how complicated it is to build such a robot, so back in 2018 I started to dig into some of the papers and do some research, and in 2019 I started to build a prototype. First you see the body pose of the robot, then you see the robot walking, and the last one is testing the dynamics of the robot, how it's able to run, and how much can be done with open source. A year later I improved the robot into something much heavier, to bring some autonomy to the robot. As Yadu presented earlier, you can create a map with the robot; in his presentation it was using wheels, this time it's the same thing but using legs, right? If you see that black thing on top of the robot, it's actually the sensor that allows the robot to generate a map and eventually become autonomous.
So when I was building it, there was a realization. If you look at the ROS ecosystem, you'll see that we have a simulation stack for most of the robots out there: we even have a humanoid robot from NASA, we have UAVs, we have ground robots, we have underwater robots, we have manipulators, but we don't really have a simulation stack for quadruped robots. So I had the idea to make it a framework that the community can use. The idea is to have a development infrastructure for arbitrary quadruped robots and a simulation environment that allows developing new algorithms and accelerating this technology further, and of course to be able to build low-cost platforms in the future.

So here's the problem: maybe you have a fancy car, a nice chassis, someone gives you a nice chassis, but there's no engine in it; that's pretty bad. I think it's the same thing happening in the community now. There are a lot of open source URDF files, for example of the Boston Dynamics robot, but the problem is they don't really come with an engine to make the robot work. Fair enough, because it's a proprietary system and there's a lot of IP in there, so the simulation tools are mainly there for you to visualize the robot, but they don't come with the capability to make the robot walk. So the idea is to use CHAMP as that main engine and provide you the algorithms, the simulation stack, as well as software tools, and from there you can build.

So who is this package for? You can use it as an educational resource if you're starting to learn robotics; it covers a lot of algorithms, from locomotion to state estimation, and you can learn ROS with it. If you want to use this tool to write your high-level applications, let's say you have a robot and you just want to make it patrol or do some high-level stuff, you can do so: it comes with a few software tools you can use off the shelf, so you can navigate, and it comes with a couple of filters, and it also comes pre-configured for some very popular robots, some of the most famous in the industry. And it's also for research: it comes with all these tools that you can leverage so you can focus mainly on the research you're working on. So if you fall into any of these categories, maybe this package is for you.

We won't go into all the details, but here are some of the features. It comes with a setup assistant to generate the package for your robot. Any MoveIt users here? How many of you have used MoveIt? The setup assistant is pretty similar to the MoveIt Setup Assistant, where it helps you set things up: let's say you have a new robot, you just built or designed new hardware, and you want to create an engine to make the robot walk. CHAMP also comes with pre-configured robots, so if you don't need to configure your own, you can easily use these robots off the shelf. It comes with a locomotion controller, if you want to do high-level applications; the idea is not to replace Boston Dynamics' engine or be the state of the art, but to provide you an infrastructure, and the locomotion controller lets users easily use the whole stack. And it comes with a workflow, like so: if you're simulating, maybe you start off with a URDF, then you generate the robot's config package using the setup assistant, then you use Gazebo to fine-tune the PID constants on your actuators, and from there you can start your simulation work; and if you're building a new robot, you can apply whatever you've learned in simulation.
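If you want to try that workflow, the CHAMP repository ships demo configurations; from memory the quick start is along these lines (ROS 1 style, since that's what the upstream packages target), but the launch file names are an assumption, so double-check the project's README:

```bash
# Run a pre-configured robot in Gazebo, then the mapping and navigation demos.
# Launch file names are from memory; verify against the CHAMP README.
roslaunch champ_config gazebo.launch
roslaunch champ_config slam.launch rviz:=true
roslaunch champ_config navigate.launch rviz:=true
```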
generated by the setup assistant: it helps you configure your simulation work, you can do mapping, you can do navigation, and it generates the pre-configured files for all these capabilities as well as the semantics of the robot. I want to go into a bit of detail on this. This is the setup assistant. You can either tell the assistant the namespace of each leg, which means, for instance, that the left front leg just has an LF namespace, or you can manually tell it which parts belong to each leg, so for instance for the left front leg you just say which links belong to that leg, and with that it can generate the configuration. Here are some examples of robots that have been pre-configured in Gazebo. We have some of the popular ones: we have Spot from Boston Dynamics, we have ANYmal C, and we have the MIT Mini Cheetah. Here's ANYmal C in Gazebo after being configured in the setup assistant, and it says hi. Here's the high-level architecture. The controller accepts two types of inputs: you can control it using velocity inputs, maybe Twist messages, or you can control the full body pose of the robot. The idea is to have a hierarchical approach in the control system, where the controller interface and the state estimation are decoupled from each other, which allows users to actually swap in their own components if needed. So for instance, if you want to use your own controller, or you want to write an MPC-based controller, you can do so; if you're working on a reinforcement-learning-based controller, you can do that as well, using the infrastructure to train your own agent and deploy it on CHAMP to actually make it work. For the hardware interface, changing from the physical robot to the Gazebo robot is just a matter of changing the launch files, so it comes with a hardware interface and also Gazebo plugins to actually run the robots. It also comes with state estimation, which allows you to calculate the high-level velocity and the current pose of the robot, and it does dead reckoning as well, so you know the position of the robot relative to the origin. With these capabilities you can leverage existing ROS packages and treat the robot as a usual mobile base. The idea is that these capabilities are transparent to the user, so you can take it as a normal mobile base, like a wheeled one, and then use the usual mobile robot applications on top of these capabilities. As for the control inputs: for velocity inputs there are 3 degrees of freedom, you can command X, Y, as well as the angular axis; and for the body pose there are 4 degrees of freedom, you can change the height of the robot and you can change the roll, pitch and yaw. Here is an overview of the paper this is based on, the MIT Cheetah one. The idea is that this is a hierarchical controller. There's the body controller, which accepts the body pose input from the user; in a nutshell, this handles the translations of the end effectors, mainly the legs, which are used as reference points. Next we have the gait generator, which mainly decides which foot has to be on the ground and when it should be swinging, and then the trajectory generator calculates the trajectory of the foot, the rotation and how far it travels, so if you look at it from a bird's eye view, these are the footholds. Next is the foot planner, and then the IK engine.
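To make the two control input types above a bit more concrete, here is a minimal sketch of a ROS 1 node that publishes a velocity command and a body pose command. The topic names ("cmd_vel", "body_pose") and the exact message types are assumptions based on common ROS conventions, not something confirmed in the talk; check the package's own launch and config files for the real names.

```cpp
// Minimal sketch: command a quadruped with a velocity and a body pose.
// Topic names and types are illustrative assumptions, not the confirmed API.
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>
#include <geometry_msgs/Pose.h>
#include <tf/transform_datatypes.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "quadruped_teleop_sketch");
  ros::NodeHandle nh;

  // 3-DOF velocity input: x, y and angular z (yaw rate).
  ros::Publisher vel_pub  = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);
  // 4-DOF body pose input: height plus roll, pitch, yaw.
  ros::Publisher pose_pub = nh.advertise<geometry_msgs::Pose>("body_pose", 1);

  ros::Rate rate(10);
  while (ros::ok())
  {
    geometry_msgs::Twist vel;
    vel.linear.x  = 0.3;   // walk forward at 0.3 m/s
    vel.linear.y  = 0.0;
    vel.angular.z = 0.1;   // turn slowly

    geometry_msgs::Pose body;
    body.position.z  = 0.25;  // body height
    body.orientation = tf::createQuaternionMsgFromRollPitchYaw(0.0, 0.1, 0.0);  // slight pitch

    vel_pub.publish(vel);
    pose_pub.publish(body);
    rate.sleep();
  }
  return 0;
}
```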
The IK engine supports different types of form factors: there is the conventional one, an X type, and a few more. For development there is also a visualization tool that helps you debug the end effectors. This is one of the example applications done on the Boston Dynamics Spot, where it does the chicken head. It does hit a singularity here, that's one of the artifacts on this robot, but it actually shows the ability of the framework to help you debug using all these visualization tools. So how are you able to integrate this on real hardware? It's pretty much similar to how you would do it with a ros_control interface when you work on a ground robot. The idea is that you have your hardware here, and the framework itself has templates for how to publish all this data, so all you have to do is write against the encoders or the hardware APIs and hook them into the templated publishers, as well as the subscribers for the actuators. This topic publishes the target trajectory of each actuator, so the idea is to take these angles and use the actuator API so that you can eventually move the robot. It's the same for simulation: using the same topic name, it uses Gazebo ros_control to actuate the transmissions in the virtual world, and it generates the joint states as well if you want to use them for feedback or state estimation. As for state estimation, for those who are not familiar, state estimation is basically how you get some sense of the position of the robot, or more generally create situational awareness of the robot, based on the data it produces. Given the foot contacts, that is, whether a certain foot is on the ground or not, and the joint states, it is able to calculate the current pose of the robot as well as its speed, and it fuses all this data using a Kalman filter to get more robust data in the odometry, not just an odometry message type; it also sends this to TF for other uses. These are the topics that are being published and subscribed. If you take a look at the blue highlighted topic names, these are the common topics when you are working on an autonomous robot: if you have a TurtleBot, this is pretty much what you would be seeing before you make the robot autonomous. Having said that, the stack has the capability to make it autonomous. This is one of the demos, using the MIT Cheetah to do autonomous navigation; as usual it requires you to create a map and then use that map so the robot can localize itself during navigation. Here's another open-source hardware platform using CHAMP, where with the map you can actually send navigation goals to the robot. And here's an open-source project based on CHAMP; this is a research work for Manus. The idea is to get the robot to pick an object up from a conveyor belt, move it onto the second floor, and be able to climb steps. Here it's starting to pick up the object from the conveyor belt, sit up and move around. How this works is that it uses the CHAMP foot planner together with another open-source package called grid_map to calculate costs based on the depth data generated by the sensor. It's a cost-based heuristic to find the most probable, safest place where the foot can be placed at a certain time, and from there it sends it to the IK engine to eventually make the robot move.
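Going back to the hardware integration described above, below is a rough sketch of what such a bridge node could look like: it receives the target joint angles published by the framework, forwards them to the motors, and reports the measured joint states back for feedback and state estimation. The topic names and the actuator_api calls are placeholders invented here for illustration; the real templates and vendor SDKs will differ.

```cpp
// Sketch of a hardware bridge: framework-published joint targets in,
// actuator commands and measured joint states out. Names are illustrative.
#include <ros/ros.h>
#include <trajectory_msgs/JointTrajectory.h>
#include <sensor_msgs/JointState.h>
#include <string>
#include <vector>

// Placeholder for whatever servo/actuator SDK the real robot uses.
namespace actuator_api {
inline void setJointPositions(const std::vector<std::string>& names,
                              const std::vector<double>& positions) { /* write to motors */ }
inline std::vector<double> readJointPositions(const std::vector<std::string>& names)
{ return std::vector<double>(names.size(), 0.0); /* read encoders */ }
}

class HardwareBridge
{
public:
  explicit HardwareBridge(ros::NodeHandle& nh)
  {
    cmd_sub_   = nh.subscribe("joint_group_position_controller/command", 1,
                              &HardwareBridge::onCommand, this);
    state_pub_ = nh.advertise<sensor_msgs::JointState>("joint_states", 1);
  }

  void onCommand(const trajectory_msgs::JointTrajectory& msg)
  {
    if (msg.points.empty()) return;
    // Forward the commanded angles to the motors...
    actuator_api::setJointPositions(msg.joint_names, msg.points.front().positions);

    // ...and publish what the encoders report, for feedback and state estimation.
    sensor_msgs::JointState state;
    state.header.stamp = ros::Time::now();
    state.name     = msg.joint_names;
    state.position = actuator_api::readJointPositions(msg.joint_names);
    state_pub_.publish(state);
  }

private:
  ros::Subscriber cmd_sub_;
  ros::Publisher  state_pub_;
};

int main(int argc, char** argv)
{
  ros::init(argc, argv, "hardware_bridge_sketch");
  ros::NodeHandle nh;
  HardwareBridge bridge(nh);
  ros::spin();
  return 0;
}
```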
So in summary, CHAMP comes with a setup assistant that you can use and a ready-to-use controller for your experiments. The idea is not to be the best controller out there, but to create an infrastructure for you to build your own controllers. These are all software tools you can use, and it's ROS compatible: right now it's in ROS 1, but ROS 2 support is in progress, so contributors would be really appreciated, and yeah, it's a completely open tool. That's all, thank you. If you want to reach out, or you want to be a contributor, just ping me on any of these. Thanks. Next is Luca, also a colleague and a very good friend of mine from the Open Robotics team, and he's going to talk about ECS for robotic simulation. Yeah, so, everyone, I will try to keep it short and stay close to the original timing, because yeah, we are running very late. Hi everyone, I'm Luca, and I am a simulation person. Why am I a simulation person? Because robots are painful. Personally I started with drones, and I found out so many times that maybe you are trying to test an algorithm and then a connector comes loose, or you lose GPS, and then your drone falls from the sky and crumbles into a thousand pieces, or decides to fly into a tree. It's just very painful when you're trying to test some algorithm and you need to deal with all these problems in the real world, when you just want to isolate your variables and test your algorithm. So I think there are real benefits to simulation: it makes life simpler than the real world where you don't need the real world's accuracy, and we also like doing CI testing, so if you want to make sure that your algorithm didn't break anything, you can just run a series of simulations and check that your system still works. This was probably mentioned by my colleague Yadu before: the idea is that we can use the same code in the simulated world, in our simulator Gazebo, and in the real world, to make the transition as simple as possible. So, regarding the product that Yadu mentioned before, our open-source simulator: I will actually focus on the new iteration of the simulator, which we call GZ, while we call the old one Classic. The new version of the simulator is designed to be much more modular, so you can plug and play different rendering libraries or physics libraries depending on how accurate you want your simulation to be, let's say if you want very accurate physics simulation, or very accurate rendering, or something that is faster but less accurate. But the cool thing about this new version of the simulator is that it also uses the paradigm which is the title of the talk: ECS, the entity component system coding paradigm. So what is this ECS, this entity component system paradigm? It's very popular in game development, and that's where it was born. The idea is to structure all your code base and all your logic into three different kinds of structures. You have entities, which are identifiers for each object, and usually an entity is just an integer. You have components, which are containers for data, or markers, assigned to entities to model properties or specific abilities of those entities. And finally you have systems, and systems are just functions that act on entities based on their components: if an entity has a specific component, the system will do some operation on it, otherwise it will not. The classic example from game development goes like this.
If you want to model something like a player, then you will maybe have a health component. Now let's say that in your video game there is some explosion and you want to apply damage to all the living things: then you would just look at all the health components and update each of them. This is what a system would do: it would just look at this data storage of components and apply functions to it. What are the benefits? First of all, extensibility and encapsulation. The idea is that every single component models a single behavior: you can have health, you can have gravity, you could have a player component or anything like that, and if you want to add a new behavior you just add a new component, and that doesn't affect any of the pre-existing logic; the rest of the system remains unchanged, which makes it very easy to extend your simulation or your software. It also encapsulates things very nicely, because again, a component is a standalone structure. It's also usually great for performance, because it's trivial to parallelize: to use the player health example from just now, "update the health of all the players" works on a specific component that is very different from "make things fall from the sky", which is affected maybe by the mass component of every entity. The idea is that because your systems act on different types of data, they can be trivially parallelized: it's very easy to detect whether they need to access the same data or not, and to make them parallel when that is not the case. And then, from a performance point of view, although this is more of an implementation detail, it's usually much more cache friendly, so it can perform a lot better than normal object-oriented programming, but that depends more on the implementation. What are the drawbacks? Well, the first drawback is that I'm sure that none of you in this room, or at least I feel that almost none of you in this room, had ever heard of this paradigm before, so it's very unfamiliar, and it can take some time to get used to it and to learn how to write code that follows it and reaps all the benefits of this paradigm. And then, because it's usually very heavily parallelized, there is both a benefit and a drawback: sometimes it can be a bit unclear what the ordering of operations is, so if you need a specific order, like "I want to update the player health before I update the objects falling" or anything like that, it can require some further thought, so it can be tricky from that point of view. But anyway, enough abstract talking; I will just illustrate a very simple use case that we ran into in the OpenRMF project, of a traditional object-oriented approach, where its limitations were, and how using this paradigm helps overcome those limitations. The sample use case is simulating doors for RMF. A door is a very simple object: it can either be open or closed, and we want to simulate that for RMF. We want to be able to track door states, so that robots can know whether doors are open or closed, and then we want to be able to command doors, so we want RMF to be able to tell a door to now open or close.
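Before getting into how the doors are implemented, here is a minimal, self-contained sketch of the entity/component/system split using the player-health example from above. The types and the map-based storage are illustrative only; real engines, including Gazebo Sim, organize storage and scheduling very differently.

```cpp
// Minimal ECS sketch: entities are ids, components are plain data,
// systems are functions over all entities that carry a given component.
#include <cstdint>
#include <iostream>
#include <unordered_map>

using Entity = std::uint64_t;            // an entity is just an identifier

struct Health   { double hp; };          // a component is plain data
struct Position { double x, y; };

// Component storage: which entities have which data.
std::unordered_map<Entity, Health>   healths;
std::unordered_map<Entity, Position> positions;

// A system acts on every entity that has the right component.
void applyExplosionDamage(double damage)
{
  for (auto& [entity, health] : healths)  // only "living things" have Health
    health.hp -= damage;
}

int main()
{
  Entity player = 1, crate = 2;
  healths[player]   = {100.0};
  positions[player] = {0.0, 0.0};
  positions[crate]  = {5.0, 2.0};         // the crate has no Health component

  applyExplosionDamage(30.0);             // affects only entities with Health
  std::cout << "player hp: " << healths[player].hp << "\n";  // prints 70
  return 0;
}
```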
So those are the capabilities of a door, and how does it work today? Let's say you have 100 doors: each door would have a simulation plugin where it receives commands, does some logic, let's say it was requested to open and it starts opening and so on, and then it publishes its state, and you would do this once for each door or object that you want to simulate. Now, what are the issues with this approach? The main issue, generally speaking, is scalability. Let's say we want to simulate very large facilities, with hundreds or even thousands of doors: as you duplicate these objects thousands and thousands of times, you might reach bottlenecks, because now every time you update your simulation you need to run through every single update of every single door, check if it received any commands, check if it needs to publish state, and so on. And then probably the most critical issue is a much more ROS-specific one, due to the usage of publishing and subscribing, because it means that every single door has to listen to the commands, receive them and check whether they apply to it or not, and then publish its state. This is a ROS-specific issue, but it's actually the main one that caused problems for us, because, since we are using simulated time, all the doors will be initialized at exactly the same time, and then all of them will publish their state at exactly the same time, and this actually creates a fair bit of issues: your queues become full, you lose messages, and so on. You can get around it by doing some random initialization and so on, but that doesn't really solve it, it only helps; it's more of a hack than a fix. So, because of all these scalability issues, we tried to take this gaming-industry paradigm, which is also used extensively in Gazebo, and use it to see if we can scale this problem up. This is what the architecture looks like: it's the same, you would still have, okay, 100 or 1000 door plugins, but all they do is create a component. Every plugin will just run once at startup and say "okay, I am a door", and then it will not do anything anymore, so all of that is out of the way, and now the only thing you need to run on every iteration is your system, and the system will just iterate through all the components and do the actual logic. The main benefit here is that now you have, first of all, a single publisher and a single subscriber, which decreases a lot of the overhead of receiving and deserializing messages; in ROS terms it also decreases the complexity of all the publishers discovering all the subscribers, and so on. But generally speaking it's much simpler, much more cache friendly, and a much better performing approach. And here are some screenshots of code; you don't really need to read them, but in general what you do is that, again, you create a component to say "okay, this entity is a door", then you create a component to say "this door has been told to do this operation, open or close", a command component, and finally a component that records whether the door is currently open or closed.
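As a rough sketch of how these door components and the single system described next might fit together, written in the same illustrative style as the previous snippet rather than against the actual Gazebo Sim or OpenRMF APIs:

```cpp
// Door use case in ECS style: each door plugin inserts components once,
// then one system processes every door per simulation step, so there is a
// single place to consume commands and report state. Names are illustrative.
#include <cmath>
#include <cstdint>
#include <string>
#include <unordered_map>

using Entity = std::uint64_t;

struct Door      { std::string name; };            // marker: "this entity is a door"
enum class DoorCommand { kOpen, kClose };          // what RMF asked the door to do
struct DoorState { double position = 0.0; };       // 0 = closed, 1 = open

std::unordered_map<Entity, Door>        doors;
std::unordered_map<Entity, DoorCommand> commands;  // present only while commanded
std::unordered_map<Entity, DoorState>   states;

// The single door system, run once per simulation step.
void doorSystem(double dt)
{
  for (auto& [entity, door] : doors)
  {
    // 1. Process a pending command for this door, if any.
    if (auto cmd = commands.find(entity); cmd != commands.end())
    {
      const double target = (cmd->second == DoorCommand::kOpen) ? 1.0 : 0.0;
      double& pos = states[entity].position;
      pos += (target > pos ? 1.0 : -1.0) * 0.5 * dt;   // move toward the target
      if (std::fabs(pos - target) < 1e-3) commands.erase(cmd);
    }
    // 2. Report state: in the real setup, one publisher would send the states
    //    of all doors here, instead of one publisher per door.
  }
}
```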
Once all this data is created and populated, all the logic is implemented in a single system that executes based on these components, and it looks something like this; it seems like a lot of code, but the details don't matter. For example, the part that needs to process commands will look at all the command components and, based on them, command a door to open or close; on the other hand, the part that needs to update the states will just look at all the door components and update the door state component based on what the state of the door actually is. So now the real question is: okay, there was a lot of added complexity, and it's a very weird way to write code, where you populate the data in one place and have the logic somewhere else, so should you do it? I will use my friend here, so someone would say yes, yes, yes. I hope this is familiar to some people, and those who are familiar with it will know that the follow-up is also no, no, no. What some people might not know is that there is a software engineering version, which is the "it depends" version. So the answer is that it depends on your use case, of course. It's very useful if you want to really scale up your simulation, which is the reason this whole coding paradigm was born in the video game industry: in that case performance is super critical, you need to be able to handle very complex worlds, and timing is very important, because you must not drop the frame rate or the player will be very frustrated. So if scalability and performance are what you are looking at, it's a very useful paradigm. On the other hand, if scalability is not an issue and you just want to simulate one or two robots and nothing very complex, it is probably not worth it to restructure your whole code base. But for cases like OpenRMF, where we want to do very large-scale simulations, with large worlds and a large number of robots, and we want to run them over long periods of time to make sure that everything works, the answer is probably yes, which brings me to my last slide. I don't feel it would be a complete talk in 2023 if I were not talking about Rust, how it's awesome, how much I love it, and how the world would be better with more Rust. On the left you see the traffic editor that Yadu showed before, but we are looking into the next generation of our traffic editing and traffic map generation pipeline, which is actually based on this entity component system paradigm: it's written in Rust, based on a game engine, it's 3D, it's awesome, so if you want to play with it, keep an eye open, there it is. And that was the end of my rambling on entity component systems and why they're awesome, so thank you very much. Thank you very much, Luca. Do you guys have any questions for Luca? So with this ECS model, is there any means to broadcast to multiple objects, for example to broadcast to them in bulk, like all instances of door, so to say? Yeah, so let's say you wanted to broadcast a command to every door: then you would just have a system that looks for all entities which have a door component, and then you would send the information to them. So basically with this filtering you already
pre-select only the entities that you care about, and then if you want to do some additional filtering you can do it, for example on what type of door it is, or broadcast to all the doors at once, as you mentioned. Any more questions? You have multiple publishers and subscribers, right, so did you consider using services or actions, so as not to have so many? I think it gets even more complicated in that case, because what a service does behind the scenes is that you publish a request and then receive a response, so there's actually even more traffic going on the wire, while in this case we are okay with the send-and-forget approach that topics give you; but the moment you need to implement hundreds or a thousand of them, they are all published on the same topic at a very high rate, and then you start losing messages and so on. How is the surrounding support related to ECS, for example documentation generation and testing? If I switch my OOP code to an ECS, would all of that still support it? No, it's a different coding paradigm, so you would probably have to restructure your tests. That being said, I personally find it also very convenient for testing, because again you can create a whole world, but then, let's say you only want to test doors, you only look at that behavior, you only iterate on the door components, or maybe you even only create those; it basically gives you very fine control and makes it very easy to select the data you're interested in and either read it or write it for testing purposes. I understand it depends on a lot of factors, but do you have a quantitative measure of how much the performance has improved using this paradigm over the traditional one? Yeah, so this is the million dollar question. I think in this case it probably doesn't actually improve the performance that much; what it does is solve the root problem of the scalability issue, which was basically the horrible hack and workaround that we had to do, because now you don't need to find new magic numbers to say, okay, now I have 100 doors, I will create queues of 10 messages and it will be fine, but next time I have 1000 doors and I need to increase my queues and make sure that no messages are being dropped, and so on. So this at least addresses that. Performance-wise, at least for this very simple use case, it doesn't matter too much, because it's just a very small part of the whole simulation; the main performance hits are due to other parts, like simulating physics or the robot plugins and so on. But in general, at least from the experiments we have been doing with this Rust-based game engine, the performance is very good, so you can easily run at 60 FPS in your browser as WebAssembly, at full real time, without any problem. But it's still a bit early to say, because we have not run large-scale worlds on the WebAssembly version. Last question, anyone? Or shall we break for lunch? Okay, I guess it's lunch time. There are a couple of announcements we