I mean, this is where I started to see most of these. So we have HyQ from IIT, a research institute in Italy, we have StarlETH from ETH, and we have the very first version of Cheetah 1 from MIT. And we have a very early iteration of Spot from Boston Dynamics. Fast forward to 2020, and we start to see more industrial versions of these, and we start to see these robots out in the wild. In fact, in 2020 you would see a Spot in Bishan Park making sure that people keep a safe distance. Then we have ANYmal, a spin-off from a research lab at ETH, and we have the Laikago from China. And you will also see, not just from research labs or industry, a booming or emerging trend in the open-source domain, not just in software but also in hardware. I think one reason is that there's a huge boom where hobbyists or enthusiasts have access to hardware, like 3D printers and affordable actuators, that allows us to build such capabilities.

So the journey started in 2015. I should say this is really what inspired me to build a robot. This is one of the robots shown back in 2015, and I was really curious how complicated it is to build such a robot. So back in 2018 I started to dig through some of the papers and do some research, and in 2019 I started to build a prototype. First you see the body pose of the robot, and then you see the robot walking as well. The last one is to test the dynamics of the robot, how it's able to run, and how far we can go with open source. A year later, I improved the robot much further to bring some autonomy to it. As Yadu presented earlier, you can actually create a map with the robot. That presentation was using wheels; this time it's the same thing, but using legs. The black thing you see on top of the robot is actually the sensor, the lidar sensor, that allows the robot to generate a map and eventually become autonomous.

So when I was building it, a realization happened. If you look at the ROS ecosystem, you'll see that we have a simulation stack for most of the robots out there. We even have a humanoid robot from NASA. We have UAVs, we have ground robots, we have underwater robots, we have manipulators. But we don't really have a simulation stack for quadruped robots. So I had the idea to see whether I could collect all these algorithms and turn them into a framework that the community can use. The idea is to have a development infrastructure for quadruped robots in a simulation environment, to allow developing new algorithms and accelerate this technology further, and of course to be able to build low-cost platforms in the future.

So here's the problem, right? Imagine you have a fancy car with a nice chassis. Someone gives you that nice chassis, but there's no engine in it. That's pretty bad, right? I think that's the same thing happening in the community now. There are a lot of open-source URDF files from companies like Boston Dynamics. The problem is they don't really come with an engine to make the robot walk. But fair enough, because these are proprietary systems and there's a lot of IP involved in them. So these visualization tools are mainly for you to visualize a robot, but they don't come with the capability to make the robot walk.
So the idea is to use CHAMP as the main engine, right, and provide you the algorithms and the simulation stack, and that's where the software sits. From there you can build your applications. So who is this package for? You can use it as an educational resource if you're starting to learn robotics. It covers a lot of algorithms, from locomotion to state estimation, so you can learn how these robots work and see it for yourself. If you want to use this tool to write your high-level applications, let's say you have a robot and you just want to make it patrol or do some other high-level stuff, you can do so. It comes with a few software tools that you can use off the shelf, so you can navigate, it comes with controllers, and it also comes with very popular robots, like the one from Boston Dynamics, and everything in between. And it's also for research: it comes with all these tools that you can leverage so you can focus mainly on the research that you're working on. So if you fall under any of these categories, maybe this package is for you.

I won't go into details, but here are some of the features of CHAMP. It comes with a setup assistant to generate the package for your robot. Can I have a show of hands: any MoveIt users, how many MoveIt users are here? All right, okay. So the setup assistant is pretty similar to the MoveIt Setup Assistant in that it helps you set things up, say when you have a new robot you just built or new hardware you just designed and you want to create an engine to make the robot walk. CHAMP also comes with pre-configured robots, so if you don't want to configure these robots yourself, you can easily use them off the shelf. It also comes with a locomotion controller if you want to do high-level applications. The idea is not for CHAMP to replace Boston Dynamics' engine or to be the state of the art, but to provide you an infrastructure, and the locomotion controller lets users easily use the whole stack.

Okay, so what's the workflow like? If you're simulating, maybe you start out with a URDF, and then you generate the config package using the setup assistant. From there, you use Gazebo to fine-tune the PID constants on your actuators, and then you can start your simulation work. And if you're building a new robot, you can take whatever you've learned in simulation and start your development. These are some of the things that are generated by the setup assistant: it helps you configure your simulation work, and it also generates pre-configured files for mapping, navigation, all these capabilities, and the semantics of the robot.

I won't go into the details of this, but this is the setup assistant. You can either tell the assistant the namespace of each leg (what this means is that the left front leg uses an LF namespace and so on), or you can manually tell it which part belongs to each part of the leg. So for instance, this is the left front leg's hip and upper leg, and you just tell it which links actually belong to those parts of the leg; there's a rough sketch of this mapping below. With that, you can generate the configuration from there.
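As a rough illustration of that per-leg mapping: the link names and the layout here are made up for illustration and are not the setup assistant's actual output format.

# Illustrative only: link names and structure are assumed, not the setup
# assistant's actual output. The idea is simply to map each leg namespace
# (LF, RF, LH, RH) to the links that make up that leg.
leg_links = {
    "lf": {"hip": "lf_hip_link", "upper_leg": "lf_upper_leg_link",
           "lower_leg": "lf_lower_leg_link", "foot": "lf_foot_link"},
    "rf": {"hip": "rf_hip_link", "upper_leg": "rf_upper_leg_link",
           "lower_leg": "rf_lower_leg_link", "foot": "rf_foot_link"},
    "lh": {"hip": "lh_hip_link", "upper_leg": "lh_upper_leg_link",
           "lower_leg": "lh_lower_leg_link", "foot": "lh_foot_link"},
    "rh": {"hip": "rh_hip_link", "upper_leg": "rh_upper_leg_link",
           "lower_leg": "rh_lower_leg_link", "foot": "rh_foot_link"},
}

Either way, the assistant uses this mapping to generate the robot's configuration package.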
Okay, so here are some examples of robots that have been pre-configured in Gazebo. We have some of the popular ones: we have Spot from Boston Dynamics, we have ANYmal C, and we have the MIT Mini Cheetah here. All right. So here's ANYmal C in Gazebo after being configured in the setup assistant, and it says hi.

So here's the high-level architecture. The controller accepts two types of inputs: you can control it using velocity inputs, using Twist messages, or you can control the whole body pose of the robot. The idea is to have a high-level approach in the control system. The controller, the hardware interface, and the state estimation are pretty decoupled from each other, which allows users to use their own components if needed. For instance, if you want to have your own controller, or you want to write an MPC-based controller, you can do so. Let's say you're working on a reinforcement-learning-based controller: you can do so as well, using the infrastructure to train your own agent and deploying it on CHAMP to actually make it work. For the hardware interface, changing from the physical robot to the virtual robot is just a matter of changing the launch files; it comes with the hardware interface and also Gazebo plugins to actually actuate the robots. It also comes with the state estimation, which allows you to calculate the high-level velocity and the current pose of the robot, and it does dead reckoning as well, to know the position of the robot from the origin. With these capabilities, you can leverage existing ROS packages and treat the robot as usual, like a normal mobile base. The idea is that these capabilities are transparent to the user, so you can treat it like a normal mobile base, like a TurtleBot, and then you can use move_base, localization, and so on on top of these capabilities.

So for the control inputs: for velocity inputs there are three degrees of freedom, so you can command linear X, linear Y, as well as the angular Z. And for the body pose there are four degrees of freedom: you can change the height of the robot, as well as the roll, the pitch, and the yaw. There's a minimal sketch of commanding both inputs right below.
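Here is that sketch. The topic names ("cmd_vel", "body_pose") and the use of geometry_msgs/Pose for the body pose are assumptions for illustration; check the framework's documentation for the actual interface.

#!/usr/bin/env python
# Minimal sketch of commanding the two input types: a 3-DoF velocity command
# and a 4-DoF body pose (height, roll, pitch, yaw). Topic names and the
# body-pose message type are assumed, not taken from the framework.
import rospy
from geometry_msgs.msg import Twist, Pose
from tf.transformations import quaternion_from_euler

rospy.init_node("quadruped_command_sketch")
vel_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)    # linear x, y + angular z
pose_pub = rospy.Publisher("body_pose", Pose, queue_size=1)  # height + roll, pitch, yaw

rate = rospy.Rate(10)
while not rospy.is_shutdown():
    vel = Twist()
    vel.linear.x = 0.3   # walk forward at 0.3 m/s
    vel.angular.z = 0.2  # turn at 0.2 rad/s
    vel_pub.publish(vel)

    pose = Pose()
    pose.position.z = 0.25  # body height in metres (assumed convention)
    qx, qy, qz, qw = quaternion_from_euler(0.0, 0.1, 0.0)  # roll, pitch, yaw
    pose.orientation.x, pose.orientation.y, pose.orientation.z, pose.orientation.w = qx, qy, qz, qw
    pose_pub.publish(pose)
    rate.sleep()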
Yeah, so here's an overview of the controller. This is based on the MIT Cheetah 1 paper. The idea is that this is a hierarchical controller. So there's the body controller, which accepts the body pose input from the user; in a nutshell, this performs the translations of the end effectors, mainly the legs, which are used as reference points for the trajectories. Next we have the gait generator, which mainly decides at which time each leg has to be on the ground or when it should be swinging in the air. And the leg controller basically calculates the trajectory of the foot, the rotation, and how far it goes, so from a bird's-eye view it looks like this. Next is the foot planner, and next is the IK engine. The IK engine supports different types of form factors: you have the conventional one, you have an X type, and a few more form factors. For development, there's also a visualization tool that helps you debug the end effectors. This is one of the example applications done on the Boston Dynamics Spot, where it does a "chicken head". It does hit some singularities here, which is why there are a lot of artifacts on the robot itself, but the idea is to show the ability of the framework to help you debug and visualize all of this.

So how are you able to integrate this onto real hardware? It's pretty much similar to how you would do it with the ros_control interface when you work on a wheeled ground robot. The idea is that you have your hardware here, and the framework itself has templates for how to publish all this data. All you have to do is read the encoders or the hardware APIs and wire them into the templated publishers, as well as the subscribers for the actuators. This topic publishes the target trajectory of each actuator, so the idea is to take these angles and use the actuator API so that you can eventually move the robot. It's the same for the simulated hardware: using the same topic name, it uses Gazebo's ros_control to actuate the transmissions in the virtual world, and it also generates the joint states for you if you want to use them for feedback or state estimation. A rough sketch of such a hardware bridge follows below.
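As a rough sketch of what such a bridge might look like: the topic name, message type, and the MotorDriver class below are all assumptions for illustration, since the actual framework provides its own templated publishers and subscribers. The pattern is to subscribe to the controller's target joint positions, write them to the actuators, then read the encoders back and publish a JointState.

#!/usr/bin/env python
# Rough hardware-bridge sketch. Topic names, message types, and the MotorDriver
# API are assumptions for illustration only.
import rospy
from sensor_msgs.msg import JointState
from trajectory_msgs.msg import JointTrajectoryPoint

# Joint names are assumed; use your robot's actuated joints here.
JOINT_NAMES = [
    "lf_hip_joint", "lf_upper_leg_joint", "lf_lower_leg_joint",
    "rf_hip_joint", "rf_upper_leg_joint", "rf_lower_leg_joint",
    "lh_hip_joint", "lh_upper_leg_joint", "lh_lower_leg_joint",
    "rh_hip_joint", "rh_upper_leg_joint", "rh_lower_leg_joint",
]

class MotorDriver(object):
    """Hypothetical stand-in for your actuator or servo API."""
    def write_positions(self, angles):
        pass  # send target joint angles (rad) to the hardware

    def read_positions(self):
        return [0.0] * len(JOINT_NAMES)  # read encoder feedback

driver = MotorDriver()

def on_target_positions(msg):
    # The controller publishes the target angle of each actuator;
    # forward them to the hardware through the actuator API.
    driver.write_positions(msg.positions)

rospy.init_node("hardware_bridge_sketch")
rospy.Subscriber("joint_group_position_controller/command",  # topic and type assumed
                 JointTrajectoryPoint, on_target_positions)
state_pub = rospy.Publisher("joint_states", JointState, queue_size=1)

rate = rospy.Rate(100)
while not rospy.is_shutdown():
    # Publish encoder feedback for the controller and the state estimation.
    state = JointState()
    state.header.stamp = rospy.Time.now()
    state.name = JOINT_NAMES
    state.position = driver.read_positions()
    state_pub.publish(state)
    rate.sleep()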
As for the state estimation node, for those who are not familiar, state estimation is basically how you get some sense of the position of the robot, or create more situational awareness of the robot, based on the intrinsics or the data that is being produced. Given the foot contacts (a foot contact basically tells whether a certain foot is on the ground or not) and the joint states, it is able to calculate the current pose of the robot as well as the speed of the robot. It fuses all this data with the IMU using a Kalman filter to produce odometry data in the standard Odometry message type, and it also broadcasts this to TF for other nodes to use.

So these are the topics being published and subscribed. If you take a look at the blue highlighted topic names, these are the common topics when you are working on an autonomous robot. If you have a TurtleBot, this is pretty much what you would be seeing before you make the robot autonomous, right? So having said that, this has the capability to be made autonomous. This is one of the demos using the MIT Cheetah, doing autonomous navigation. It requires you to create a map, and the map is then used so the robot can localize itself in that location. Here's another open-source hardware platform using CHAMP: given a map, you can actually send a goal to the robot, just like that.

Okay, so here's an open-source project based on CHAMP. This is research work from Manus. The idea is to get a quadruped robot to pick up an object from a conveyor belt, move it to the second floor, and be able to climb steps. Here it's starting to pick up the object from the conveyor belt, standing up, and moving around. How this works is that it uses CHAMP's foot planner and then uses another open-source package called grid_map to calculate the costs based on the depth data generated from the sensor. So it's a cost-based heuristic to find the safest place where the foot can be placed at a certain time, and from there it sends this to the IK engine to eventually make the robot move.

So in summary, CHAMP has a setup assistant that you can use, and it comes with a ready-to-use controller for your experiments. The idea is not to be the best controller out there, but to create an infrastructure for you to build your own controllers. There are also software tools you can use, and it's ROS compatible: right now it's ROS 1, but ROS 2 is in progress, so contributors are really appreciated. And yeah, it's fully open source. And that's all, thank you.

So if you want to reach out or you want to be a contributor, just ping me on any of these channels. Thanks.