First of all, thanks so much for attending this session today. My name is Alex Huquero, and I'm responsible for the solutions architecture team at AWS, working with customers from Canada, Latin America, and the Caribbean on their journey to the cloud. Today, the topic is robotics. The idea is not only to show you some examples from Amazon and AWS customers about how they use robots, and give you some insights about how to deploy them, but also to show you how you can build your own robots, especially with open-source technology. Today I'll talk more about open source than about AWS itself, because I want to be sure you have all the foundations to build your own projects.

And how am I planning to do that? Basically, by answering three questions. First is why: why is it worth investing your time in robots, especially because the amount of innovation in that area in the last five years is quite high, and I will share a little bit more about that. Second is what: what kind of technology is available today in open source? You'll see there's not one single technology, but actually a combination of several open-source projects that you can combine to develop your own projects. And the last one is how, which is probably where I'll spend most of my time, because I will basically build a robot with you, so you can see all the major steps of how things work when developed at scale. That's the plan for today. So let's start: it's time for robotics.

Robots, in general, have been part of people's imagination for a long, long time. I found this picture from Disney, from 1937, which shows robots and people working together. But in reality, robots really started to appear around the 80s, as very proprietary machines working mostly in industry, doing very specialized activities. The good part is that they helped with productivity, in car manufacturing for example, but on the other hand, those robots were not very flexible. What has happened in the last five years is that robots became way, way smarter, especially when you combine them with technologies like artificial intelligence and machine learning. What you can see here is from an Amazon warehouse. It's a combination of, first, very flexible robots that are able to adapt depending on the situation: in this case, you see a robot arm that is able to pick packages of different sizes and shapes, and then establish communication with other robots to create synergy between all of them. Doing those things wouldn't be possible without combining robotics and machine learning.

The good news is that Amazon is not alone in deploying robots, and this is an example from one of our customers in Europe, a company called Leah. What they created is an assistant robot, which helps people, especially the people who need it the most: elderly people who sometimes cannot move, or people with some pathological healthcare condition. You cannot stress enough how important it is to have those robots to help scale care, because the treatments are sometimes very expensive, and Leah can definitely help with that. Leah is a robot with about 72 sensors, which is able, for example, to stop a person when it detects a hazardous condition, like this example here. It sounds simple, but it's not, because there are sensors everywhere to identify those differences in the environment.
Another great thing about Leah is the fact that it's tracking and collecting information all the time. In a robot you have the hardware as the first layer, then you have the operating system, which I would say in 99.9% of cases is Linux, and then you have the middleware, which is ROS, providing the interface to all those devices. And on top of ROS you have your application, the one that takes the decisions, or sends the commands down to ROS to take the actions.

Today, ROS works in a very similar way to Linux, for example: in ROS, we have several distributions. There are two major versions of ROS: ROS 1, which we often just call ROS, and ROS 2, which is newer. ROS 1 is still the most popular one, the one with more documentation and more packages, but we've been seeing people slowly, slowly migrating their projects to ROS 2. If you're starting in the ROS world, both options are feasible. Sometimes people tell us that ROS 1 is easier because there's more documentation available, but ROS 2 made a lot of significant changes to the architecture. Today, I'll focus on ROS 2 only. In terms of distributions, for ROS 1 the most popular distribution is Melodic, the first one you can see there. And for ROS 2, the most popular, not the latest one, but the most popular, is Foxy, the version that works with Ubuntu, macOS, and Windows. There's a link at the bottom with information about all those distributions if you're interested in seeing that.

And why is ROS relevant? First, because it implements several design patterns that you would otherwise have to implement yourself if you're not using that middleware. Which means that deploying a project without ROS is totally possible, and there are actually a lot of people doing that, especially for projects that are, I would say, just for fun. But when you talk about projects, especially more robust projects, the middleware is critically important, and ROS is definitely one of the great options. ROS comes with several tools that help you debug your application and analyze information remotely. A critical part of ROS is the integration with simulation. I will talk a lot about simulation today, but think about taking a project and deploying it directly on the robot: sometimes it's dangerous (as I remember from one time when I did it, and the robot just went out of control and hit a person), and sometimes you damage the robot, which is expensive. Which means that in the robotics world it's critical to have a simulation to test your commands and your logic first, and only then deploy to a physical robot. And the last reason is the community. The ROS community is quite active. ROS was initially developed at Stanford, then developed by Willow Garage, a company that is not around anymore, and it kept growing and growing; the organization responsible today is Open Robotics. This community has been designing and contributing to this framework for more than 10 years, which gives you a lot of flexibility in terms of options and integrations with different vendors.

This is the ROS 2 architecture. If you're familiar with ROS 1, great, this will give you some insight; if you're not familiar with ROS 1, it doesn't matter, because this is the one available today. The biggest difference between ROS 1 and ROS 2 is what you can see in the middle, which we call the ROS middleware abstraction layer (based on DDS): ROS 2 implements a new way to communicate internally between the components we call nodes.
Nodes are the components responsible for processing that information, and this middleware layer handles how messages move between those different nodes. One big difference between ROS 1 and ROS 2 is the fact that ROS is now multi-platform. Before, it was Linux only; today you can have your development environment on your Mac or on Windows as well. For production, again, Linux is the most common choice, for several reasons that go beyond this presentation. In terms of APIs there was also a lot of change. There's a lot of compatibility with the previous version, but you can see here that the packages changed; today it's much easier to use the ROS middleware. The three shown here are the most popular ones. Actually, two are popular: Java, in this case, is not as popular as C++. rclcpp is one of the most common in the robotics world, as well as rclpy, which is the ROS client library for Python. Python and C++ are the most popular languages for development with ROS.

Now to the fun part: it's time for building. Let's see how all those things work together. And to understand it better, I'll give you examples of the three major steps, which are development, simulation, and deployment.

Starting with development. The first step, after you define the requirements of your robot, is building your environment. You can create your environment on your own machine if you want, or you can create your environment in the cloud; that's actually the option I'll use today, but it doesn't matter which one you pick. You have four major building blocks. First is the SDK. As I said, we have several distributions; Melodic and Foxy are the most popular ones, but maybe you'll select something else, so it's critical to pick the one that makes the most sense for you. Second, the language. It's true that in the same project you can put Python and C++ together, you don't need to pick one or the other, but you'll always have a preference, and you can set that preference in the configuration files of a ROS project. Third, colcon, which is by far the most popular build option. There are others as well, but the basic idea is how to build an image of your project and then copy that image to the robot itself; colcon is an open-source project, and there's a link here that helps you build your own development environment. And fourth, the IDE. There are several options today; I'll use the one from AWS, based on Cloud9, which is very popular, but if you prefer other tools, like VS Code or Atom, there are a lot of plugins you can install in your own environment.

The next piece is simulation. As I said, simulation is critically important in a robotics environment, and a simulation is actually a combination of several projects: 3D models, a physics engine, a 3D engine, and the middleware, especially the bridge between your code and the simulation environment. Don't worry, I'll talk a lot about the simulation environment very, very soon.

And finally, assets. There are tons of assets available today. Let's say, for example, that you want to create a self-driving car. There's a very good dataset from Ford, the Ford AV dataset (avdata.ford.com), that provides a collection of data from several runs, many of them from real cars. So instead of starting from scratch, the best practice is to get those assets to help you accelerate your environment. From AWS specifically, we have a site on GitHub with a lot of assets as well.
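By the way, to make the rclpy API I just mentioned concrete: the smallest possible ROS 2 node looks roughly like this. This is a minimal sketch, not code from my project, and the node name is made up:

    import rclpy
    from rclpy.node import Node

    def main():
        rclpy.init()                    # start the ROS 2 client library
        node = Node('hello_robot')      # a node: one computing unit with a name
        node.get_logger().info('node is up and waiting')
        rclpy.spin(node)                # keep the node alive, processing work
        rclpy.shutdown()

    if __name__ == '__main__':
        main()

Everything we'll see today, publishers, subscriptions, callbacks, hangs off a node like this one.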
So, with those four building blocks (SDK, language, build tool, IDE), again, you can create the environment on your own. Today, I'll use a tool from AWS that combines all those things. Just a disclaimer: today's presentation is not about the AWS tool, it's about the concepts themselves. There are several other presentations that go deep into the AWS tool, which is RoboMaker. I'm just using the tool today to put all those things together in a way that lets you visualize the process from beginning to end.

Cool, going back to ROS and its concepts. There are several concepts in ROS, but four of them are critically important. First, the node. Imagine that I, as a robot, see a person, a friend of mine, for example. Oh, I know him! What happens? My vision node, which is a computing unit, is able to identify the image and say, oh, it's my friend. And my vision node is responsible for sending information to my other node, call it the hand node, to wave at him: hi, how are you? You see, it's a combination of different nodes, a head node and a hand node, talking to each other. We also have topics. Think of topics as my nervous system: they carry information from node A to node B; the topics are responsible for that. And if you work with a 3D engine integration, are the topics the same? Yes, they are.

The first thing that happens is the launch. As soon as you press the power button on a robot, it calls a launch function to put all the nodes into memory, all the nodes that you need. In terms of launch, we have a significant change between versions: in ROS 1, launch files were XML only; in ROS 2, they're Python. It doesn't matter if you're writing your nodes in Java or C++, the file responsible for the configuration is Python code. Very straightforward Python code, by the way. For example, here you can see that each node definition references a package, because even my head, so to speak, has different functions, or better, different packages: a package to see things, a package to hear things. Those packages are the ones loaded first. And then my actual code, in my case, is also Python: I invoke Python code that implements the intelligence I want, for example running a machine learning model on incoming information and passing the data to other nodes. That's how things work. And when you start a robot physically, the ros2 launch command runs automatically, based on your specification, but from a console, especially when debugging the project, you can run the command yourself to load the robot's nodes into memory.

Let's do it together. Going to the first part of the demo, the goal now is to go to my robot and bring it up in my environment. As I said, I'm using AWS here, specifically RoboMaker, a service available today. I have one environment that I created before, which I named Linux Foundation. If you want to create a new one, you just click here and select which distribution you want, and AWS puts everything together. But for me, Linux Foundation is my environment, and I open here the environment where my code lives. I have my project, and as you can see here, I have one configuration file, in my case with two nodes. If I want to start my nodes... there, finished. I open the console, because now I'm debugging everything in my environment. Yep, here's the console. I've brought up my environment, and now I will launch my project.
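And since launch files are just Python, here is a sketch of what a two-node launch file can look like. It's illustrative only; the package and executable names here are made up, not the ones from my project:

    from launch import LaunchDescription
    from launch_ros.actions import Node

    def generate_launch_description():
        # each entry names the package a node comes from and the executable to run
        return LaunchDescription([
            Node(package='monitoring', executable='check_doors', name='check_doors'),
            Node(package='monitoring', executable='mover', name='mover'),
        ])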
To launch, since this is Python, I first source my bash so all my variables are available, and now, yes, now I launch my project: ros2 launch, then the name of the package, and then the configuration file, the monitoring launch file, which has the functions I need. As expected, I now get some log lines basically saying: hey, the robot is up and ready to receive commands. We'll go deeper there soon.

Now that the configuration file is running, let's go to the next step, which is seeing all those topics working together. My example is inspired by Leah: I tried to create something similar, a robot that helps elderly people. As part of the process, I want an integration between my robot and my smart home system. So let's say my robot is at home, and someone arrives at the house at an unexpected time, like the middle of the night. I'm alone; I wasn't expecting anybody else there. What happens is that the door sensor triggers when the door opens, and that information is sent to the robot so it can take some action. The component responsible for receiving this message is a node. I have here a node called check doors, the first node you saw before. What happens here? I receive this information, in my case externally, but it doesn't matter whether the information comes externally from a device or from the robot's own sensor; both work the same way here. And what happens next? My check doors component needs to talk to someone, and nodes talk to each other via topics. So it sends the message: hey, someone arrived through the front door. The device sends the message, hey, front door, and then my robot starts to run the logic behind it. We have several topics available; one of them is unexpected activity, because maybe someone arrives at a time that was scheduled before, a doctor for example. So in that case, I can have multiple topics: if someone who is expected arrives, it goes to one topic; if someone arrives at an unexpected time, it goes to a different topic.

Let's see all of this together. Now that my robot is up, I open a different terminal, and instead of coding it, because usually this would be code in C++ or Python, I'll send the commands through the CLI to see how these things work together. In terms of the middleware, you have commands, let me just make it bigger, good, you have commands, for example, to list all the nodes you have. In this case, ros2 node list, and as expected I have two nodes, the ones that were in the description file. Now I want to see my topics: ros2 topic list, and now you see more. I have unexpected activity, but I also have, for example, cmd_vel for velocity, distance, and coordinates, and I have smart home events, because in this case my robot goes to the door to check what's going on, but my robot could also clean the house, or bring some medicine to the person; the point is that you have all those different topics available. And then I want to simulate my IoT device, so what I do is write a message into one of those topics.
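Just to recap the terminal session up to this point, roughly; this is a sketch, where I'm assuming Foxy's default setup path, and 'monitoring' stands in for my package and launch file names:

    source /opt/ros/foxy/setup.bash               # make the ROS 2 variables available
    ros2 launch monitoring monitoring_launch.py   # put the nodes in memory

    # in a second terminal: inspect what is running
    ros2 node list     # the two nodes from the configuration file
    ros2 topic list    # unexpected activity, cmd_vel, smart home events, ...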
So what I do here is ros2 topic again, but now I will publish: I will publish a message into the topic that I have. For a topic, you define what kind of message it carries. Is it a string message, an image, a geographic location? How do I know? It depends on the interface you define in your project. Let's take a quick look at the project itself. If you go here, in the publisher I have my code in Python, and you can see, as part of my definition, and now I'm going deep into the API, that I use messages: here is one of the packages that defines a string message, how to exchange strings with each other. I just translate that into the command, in this case std_msgs/msg/String, because I'm sending a string, and then I need the data, and my data here is 'front door'. Now I have the complete command: ros2 topic pub /unexpected_activity std_msgs/msg/String "{data: 'front door'}". I send the message, and what happens? There's a lot happening here, but the most important part for you is the XYZ, because what my code is doing now is sending a command to the robot to change its position from point A to point B. You could say it's not fun because you haven't seen the robot moving yet; that comes when we go to the next level, with the simulation or the physical robot, where you see that movement happen in real time. But my code is now ready to move, to change my robot's position.

For the position, I have another object, which is called Twist. Twist is an object that works with coordinates, in terms of three values, XYZ; think of it as three dimensions. X is straight ahead; Y is going left or right, depending on your position; and Z is turning around. You play with those three variables to tell the robot what it needs to do.

How do you do that in code? Basically, here I have the Python code responsible, for example, for publishing a message. Remember what I just did with the CLI; here is the same thing as part of the code. You create a publisher; you say, I want to publish a message on this topic; and then your code calls publish, 'presence sensor detected', after establishing the integration with the low-level API, for example reading a sensor on a Raspberry Pi, and then you send a message to ROS: publish this message. The message goes out, and the receiving side is a very similar approach, just a different implementation pattern. What you can see there is a subscription, which means I am waiting for something, in this case waiting for an unexpected-activity message, and I register what we call a callback: as soon as the message arrives, it calls a function, the function you can see here at the bottom. Then you have the internal communication. Be aware that sometimes when I send a message to one topic, it's not only one node consuming that message; several nodes can take multiple actions from it, which is something very common in the robotics world.
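Putting the publishing and the subscribing sides together, here's a compact rclpy sketch of what such a node can look like. The topic names follow my demo, but the code itself is illustrative, not my project's code:

    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import String
    from geometry_msgs.msg import Twist

    class CheckDoors(Node):
        def __init__(self):
            super().__init__('check_doors')
            # subscription: wait for smart-home events; the callback runs on arrival
            self.create_subscription(String, 'unexpected_activity', self.on_event, 10)
            # publisher: velocity commands that move the robot
            self.cmd_pub = self.create_publisher(Twist, 'cmd_vel', 10)

        def on_event(self, msg):
            if msg.data == 'front door':
                cmd = Twist()
                cmd.linear.x = 0.5    # x: straight ahead
                cmd.angular.z = 0.0   # z: turning around
                self.cmd_pub.publish(cmd)   # several nodes may consume this

    def main():
        rclpy.init()
        rclpy.spin(CheckDoors())
        rclpy.shutdown()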
And don't forget: as soon as my code is ready, I need to put all those things together, and that's when colcon, a different open-source project, comes into play. colcon has basically two functions. The first, colcon build, is responsible for checking that all my libraries are OK and my configuration files work properly, and then preparing my code to run; in the demo I did before, the only thing I ran was colcon build, that's the first step. The last one is colcon bundle, and that one is different, because our code has a lot of integrations with the operating system, with Linux. Let's say I need to install some library in Linux, with apt-get, or yum install, in my Linux environment: colcon bundle is the one that puts all those things together. Here, going back to my environment: if I want to deploy my code, I build first, and you see that all those packages are combined together, because as you can see I have several packages here. The second step, colcon bundle, does something very similar to what you've seen a lot in the Linux world with container images: it takes all those pieces and puts them together in a single file, and that file, or actually a combination of files, is the one you move, you push, to the physical robot. The bundle usually takes longer, because it combines the different libraries, but that's how things work.

Good, we've talked a lot about development; let's go to the second part, which is the simulation part of our project. Now things get fun, because you start seeing ROS and the simulation environment together. A simulation combines several devices: LiDAR, critically important to map the environment, or, for example, GPS, to know where the robot is. But again, it depends, because a drone has one specific set of hardware, and a robot arm a totally different set; depending on the robot, you combine different devices in your solution to simulate what you want. And in simulation, it's critical to have a dynamic physics simulator, because the dynamic simulator reproduces what happens in real life. Gazebo is by far the most popular one; the new version is called Ignition, which is actually just the new generation of Gazebo. More recently, I guess two or three weeks ago, Unity, very popular for video games, also announced that they are able to support these simulations, which means you can also use Unity instead of Gazebo; it's more about what kind of simulation you want to build.

And Gazebo is actually more than a visualization tool; it's more than that, it's a physics engine. It combines three major things, or actually three major projects. The first is the physics engine itself: ODE, a very common open-source engine that reproduces all the physical movement in your environment. Then OGRE, and nobody calls it 'ogre': OGRE is the 3D rendering engine. And those things combine in Gazebo. In a simulation you always have your Gazebo world, which changes with your use case: for a drone, I want to reproduce an environment with things flying; for a submarine, an environment that shows things under the water; for a closed environment like a warehouse, you have a different kind of simulation. That's the Gazebo world, described by several configuration files. Then there are the 3D models: many times you have your own tool to develop your 3D models, and then you import those assets, those meshes, into the environment. And all the information about the robot itself, the virtual robot, comes from the URDF, which tells your code what sensors you have available.
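To give you an idea of what that looks like, a URDF is an XML file, and a minimal one, sketched here with made-up names and dimensions, describes the robot's links and the joints between them:

    <?xml version="1.0"?>
    <robot name="lf_robot">
      <!-- the chassis: one rigid link with visual and collision geometry -->
      <link name="chassis">
        <visual>
          <geometry><box size="0.4 0.3 0.1"/></geometry>
        </visual>
        <collision>
          <geometry><box size="0.4 0.3 0.1"/></geometry>
        </collision>
      </link>
      <!-- one wheel, free to spin -->
      <link name="left_wheel">
        <visual>
          <geometry><cylinder radius="0.05" length="0.02"/></geometry>
        </visual>
      </link>
      <!-- the joint ties wheel and chassis together -->
      <joint name="left_wheel_joint" type="continuous">
        <parent link="chassis"/>
        <child link="left_wheel"/>
        <origin xyz="0.15 0.17 -0.05"/>
      </joint>
    </robot>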
It's time to go to Gazebo now. I'm going to the environment; my code is running. I'm using the cloud here, and I have a simulation environment running with the code I wrote before. I'll show soon how to launch a new simulation, but more important is what's running: the same robot I had before, shown through a different kind of tool I can interact with. Now I'm using the Gazebo client, gzclient, and when I click on gzclient I'm able to go into my virtual environment. Here we are: I have the room, it's a little harder without a mouse, but it's possible, and here is my robot, the one I had before, the one described in my code. As you can see, it's doing nothing at this moment. So what I do now is send commands, the same ones I sent before. In terms of tooling, there are several tools for working with the simulation; the one I'll use today is rqt, another open-source project, very handy. What I do now is put my robot and rqt side by side, so rqt lets me interact with my robot. One of the major benefits of rqt is the fact that it has several plugins available, and you can also develop your own plugins to put all those things together. The first plugin I like to use is the one that reads images, which I mentioned a couple of times today: Visualization > Image View. When I open it, there's a control that asks which topic, remember, nodes and topics, which topic I'd like to read in real time: image_raw. And now I can see what the robot has been seeing internally: we have a robot there in the simulation, and this is what your code is able to receive. I'll use a second plugin, under Topics, the Message Publisher, to send the message again, doing exactly what I did before, but graphically. I have the topic, unexpected activity, and the expression field with the value I want to send, and I type 'front door', which is the string my code is ready to act on. As you can see, I just typed it and nothing happened. That's not an error, it's expected. Why? Because this is not only about setting the value; I also need to push this message to the topic. Graphically, the way I push it is by clicking this button, and now you see the robot is moving: exactly the same example you saw before, but now with the robot, the code, and all the configuration files working together.

Let's take a quick look at the code itself. I'll show you the code, and the files responsible for the configuration are these ones: logs, logs, logs, my environment, simulation, the TurtleBot folder, here we are, mine is this one. What you can see, in terms of configuration, is that the URDF file is XML, as I mentioned. Don't confuse the two: the URDF file, which is the information about the robot, is XML, while the configuration to launch nodes is Python. And it's interesting, because there are several tools today where you can create your robot and export it in URDF format, and you just need to pack all those things together in the project.

I forgot one step. OK, I get it, now I see the integration, but I want to see more about developing the world, building the 3D environment. As I said, you can create it with an external tool, or you can use Gazebo itself for basic tests. Let me show you how those things work. In this case I'll launch a virtual desktop, because what I want to do now is open Gazebo for you to see the magic behind it: I sent a 'front door' message and I saw the robot moving, but how does the environment work together, where does the message go to make the robot move? So now I have my Gazebo, I have my Linux, in this case in a virtual desktop, and I have my world here. My world is totally empty, and since it's an empty world, what I want to do now is create my objects.
So I just put a box here. My box is there, but look how interesting things can be: I select my object, and you see there's a blue, green, and red marker; that's the XYZ I mentioned before, the coordinates for moving things. Now a quick test: I select my object and try to move it up. Not sure you noticed, but look, I move it up and it goes back down. Why? Because this is not a design tool, it's a physics simulator. Although, if I zoom in here on the physics settings, gravity is just one of the parameters, along with the magnetic field, and all the simulation settings are there for you to tune, to create the environment you want. Another thing is that Gazebo usually comes with a very extensive model library. Let's say I want a bookshelf: it means you can combine several external things and put them in your project.

Now I want to create my robot and integrate it. What I do is go to Edit and then Model Editor, and in the model editor I'm creating not the SDF files describing the environment, but the information about the robot itself: a different thing. So I put different shapes here, a box, and also this other object, and I'll just make a small change to the orientation. Let's say this is the small car you wanted to see moving before: I create the chassis, and here the tires. It would take a while to create my entire car here, but the important thing, not sure if you noticed, is to look in the middle of the object: there are small arrows, blue, red, and green. Those are not static, they're dynamic: when the code sends information from the Python side to the robot, it sends information through those elements. How can I see those elements? If I go here and open the Link Inspector, for each dynamic object, each part of the object, I have several variables: the inertia it has, the pose, how it moves, look, x, y, z, so depending on the values I send, you see how those things move; even wind, do I have wind or not, because wind has an impact on the physics of the robot. And you start to play with all those things together to try to create an environment as real as possible.

Good, I've created the environment; finally, I create the link. What does the link mean? It means I join two objects together, because I don't send topic messages to every single device; many times I combine devices, for example the wheel, or sorry, the tire, so those devices move together in a synchronized way. The link here, the link selection, defines the combination of those objects. Now, yes, now I have all the elements I need: we spoke about configuration files, we spoke about code, and now we have the real object, and we see all those messages coming together to take decisions. Then, finally, I can create the link here, and I save the file: yes, I want to save it, I want to put this file in my environment, and I save it in my project here, Linux Foundation, LFrobot. Now, if I go back to my project, my robot, what I created is an untitled file, and the XML is the one you saw before: finally, I have all those objects working together, and the conversion between the mesh and the configuration file. I also mentioned today that this session is about scale, and as you can see, it takes a while to create all those objects, because Gazebo is not the best tool for designing all those meshes; it's a great tool for combining things together, but it takes a while to create a whole environment.
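For reference, the world file Gazebo saves is SDF, which is also XML. A minimal world, sketched from scratch here rather than taken from my demo, looks roughly like this:

    <?xml version="1.0"?>
    <sdf version="1.6">
      <world name="lf_world">
        <!-- standard ground and light, pulled from Gazebo's model library -->
        <include><uri>model://ground_plane</uri></include>
        <include><uri>model://sun</uri></include>
        <!-- the box I dropped in: gravity acts on it, so it falls back down -->
        <model name="box">
          <pose>0 0 0.5 0 0 0</pose>
          <link name="link">
            <collision name="collision">
              <geometry><box><size>1 1 1</size></box></geometry>
            </collision>
            <visual name="visual">
              <geometry><box><size>1 1 1</size></box></geometry>
            </visual>
          </link>
        </model>
      </world>
    </sdf>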
As a best practice, the recommendation is to reproduce the environment in several different scenarios: if it works in just one room, you still want the flexibility of different rooms. And here is one more best practice for combining simulations. You saw, behind the scenes, that when I go to the environment, I have the simulation that lets me see everything that happens; and if I go back to my environment, I have a job running here. That separates my runtime environment from my development environment. As you can see, Gazebo sometimes needs a lot of 3D rendering, a lot of GPU behind it, and this isolation helps make things more stable. When I run everything on my own machine, the biggest challenge with these models and environments is that the machine becomes almost unusable; when you split the work onto a GPU, you get much better performance, which reproduces the real environment better.

Good, that's the simulation environment. Today I showed you how to move objects from one point to another, but it's also very important to implement something like SLAM with the navigation stack, which helps the robot understand where it is and make a plan for where it's going. RViz is the tool that helps with that, and I'll go fast here because I'm a little behind schedule, but I'm almost done. When I run my environment, the result is a map of my entire environment; here's the plan. So now, instead of going to a place it doesn't know how to reach, the navigation stack helps the robot move between places, in basically these steps: one, I map the environment where my robot is; and two, I take actions to move my robot, better than before, because now I know where I am and I know how to take all those decisions together. Navigation stack, visualization.

Now, deployment. For deployment, the most important thing to consider is how to have a container environment where you can deploy your code, and one open-source option to use is AWS IoT Greengrass, which can deploy all my code.

Now the last slide. Be aware that what I showed you today is just the beginning in terms of possibilities. You can create very different robots: the one from Mexico, Rumi, robots that work alongside you, the ones from Boston Dynamics, or the ones that work in a warehouse. The most important thing is that all those amazing projects use the same combination of steps you just saw today; it's all about how you put these open-source pieces together. I will make this presentation available, and these links have all the steps to recreate the environment I built today. Sorry for running a few minutes over. If you have questions, I'm more than happy to answer them, and thank you so much for your time today. I hope this presentation was useful.