Yes, thank you for letting me give this talk today. I'll give you 30 more seconds to read the comic, because it's essential to what I'm going to talk about today. Too small? OK, in short: download the comic later on, it's funny.

The problem with robotics, whether in academia, in hacker spaces, in your private lab, or in your basement, is that too many people start to reinvent what is already there. Of course there are many reasons for that; I wrote them down in the talk summary: the APIs are bloated, it's deprecated, it's not using C++14 features, and so on. But I still want to make the point that if you want to go into robotics projects in your hacker space, have a look at ROS, a well-established middleware for robotics. It will give you many things for free that you would otherwise have to develop yourself, so you can more easily get to the point where you actually want to work.

A short overview: I'm going to give a technical introduction to ROS. I'm going to show some examples of technologies and packages that are already there and can in most cases be used out of the box, and that do not only work — this is where I differ from most introduction-to-ROS talks — but can also be used with a webcam and a low budget. So if you have, I don't know, 100, 200, 300 euros to build a robot, you can still use ROS to do it. You don't have to start with an ATmega or an Arduino and program registers again. The last part will be about some tools that you might be missing from your custom-implemented middleware and that are really great to have — and you can have them if you use ROS. So: first some text deserts, then some images and funny videos, and at the end some hyperlinks.

If you do robotics, what are you actually doing? Robots mainly consist of three parts: mechanical parts, electrical parts, and software. Today I'm only going to talk about software. There are also many interesting projects on how to build modular, reusable hardware, not only software — but today it's about software.

So what kind of software is in a robot? Of course there are drivers: cameras, motors, LEDs, whatever else you want to connect to your robot. Then there's core robot functionality: you want to be able to do path planning and motion planning, some kind of image processing, 3D reconstruction from stereo cameras, and so on and so forth — that is what I call core functionality here. Then, usually, your software does not run the way you want it to in the first place, so you need debugging facilities and introspection: when your robot is doing something you did not expect, instead of just hitting the emergency stop button — please always add one to your robots — you can have a look at what is currently happening within the software, provided you did not build one large binary blob that you flash onto the robot. Then you need all kinds of algorithms, and this is where most academics like myself work: you want to do, I don't know, motion compensation in cognition-guided surgery with robots, or whatever. In any case, you need some very basic algorithms first, otherwise your robot will only move around rather randomly. If I want my robot to grasp this microphone, I need motion planning for my many-degrees-of-freedom arm, grasp planning, collision avoidance — and all of these are really hard problems. People have been working on them for 30 or 40 years.
They are not completely solved yet, but there are very good open source implementations that solve them for the usual cases — and the usual cases, fortunately, are the ones one usually encounters; that's why they are the usual ones.

Then, once your software gets large enough, you will not be able to run it on a single machine anymore — may this single machine be some embedded device like a BeagleBone or one of the many other ARM boards around these days, not even an i7. You usually want to be able, if the computation demands it, to distribute your robot software to multiple computers connected via Ethernet, for example. And all of this is already built into the core of ROS. Everything I'm going to show today can be run on my single laptop — most things even on something like the new Raspberry Pi — but it can also be started on a cluster of computers. So if I have five desktop machines available, I can put the camera processing on one machine and the path planning on another, and it will work together the same way as if it were running locally. That is completely network transparent and happens at runtime. And if you have multiple machines, you know it's not as easy as compile, run: you have to distribute the software to the various machines and start the components on them. So deployment and orchestration is also something you need to take care of once you have more complex robotic software — and it, too, is already more or less taken care of.

So what do you want to work on? Usually this: you want to build a cool application. I don't know — have a robot that takes the pieces out of your freshly finished 3D printer, so that it can continue printing instead of you having to go there, take the part, and click to print the next one. But where you usually end up working is the very low-level software: get the motor to turn, get the motor to position itself at a certain angle, get your camera running, get your camera data into OpenCV and back out of OpenCV, create point clouds, use the PCL, and so on. You do a whole lot of plumbing work when you work with robots, and this does not have to be the case. That is what I want to show now.

So what is ROS? ROS stands for Robot Operating System. It consists of four major parts. One is the middleware, together with tools — never underestimate the importance of having tools available for whatever you're using. Then the basic robot functionality, for example distributed processing of coordinate transformations: if you have your robot and you want to tell it "move five centimeters to the left", you usually specify that within some kind of coordinate frame — the coordinate frame of the robot, the coordinate frame of my laptop, and so on — and in the background it's just some matrix multiplications to get everything right. Perhaps you remember your linear algebra classes, perhaps not; it does not matter, because here it's just an API call: give me this position in this coordinate frame, and it will do all the magic algebra in the background.
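As a small taste of that API call, here is a minimal sketch using the tf library that ships with ROS; the frame names, the node name, and the five-centimeter offset are made up for illustration:

```python
# Minimal sketch: transform a point between coordinate frames with tf.
# Frame names ('laptop', 'base_link') are illustrative assumptions.
import rospy
import tf
from geometry_msgs.msg import PointStamped

rospy.init_node('tf_demo')
listener = tf.TransformListener()

p = PointStamped()
p.header.frame_id = 'laptop'    # the point is known in the laptop's frame
p.header.stamp = rospy.Time(0)  # time zero means "latest available transform"
p.point.x = 0.05                # five centimeters along the laptop's x axis

# Wait until the transform is available, then let tf do the "magic algebra".
listener.waitForTransform('base_link', 'laptop', rospy.Time(0),
                          rospy.Duration(4.0))
p_robot = listener.transformPoint('base_link', p)  # same point, robot's frame
```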
Then there is the third — and, I think, most important — part of ROS: ROS is not better than any other middleware that was built, and there were hundreds of them in the last 30 years, but it gained such big momentum that a large ecosystem formed around it, kind of similar to Linux compared to the BSDs. Now don't flame me for that. The important point is that you can take ROS today and install it — which is an easy apt-get on Ubuntu — and then have a large collection of software available; I will show shortly what is already there. And these are the main components that make ROS a great tool to use for robotics.

Now I'm going to describe some of the technical terminology used in ROS. First, there's the ROS core, which is the only centralized part of ROS. It's a well-known instance, like a name server, and it can point you to the other instances within your robot network. Nodes are any processes within your robot network that implement the ROS client API, which is usually used through roscpp or rospy — C++ and Python being the main programming languages used to interface with ROS systems. There are also many other interfaces, even for MATLAB these days — even officially from MathWorks — so ROS obviously got important. And these all support the nice features I'm going to show until the end of my talk.

Initially you start out with an empty computational graph — so basically we are talking about nodes and edges that connect these nodes. A node is some kind of process on an operating system, running on any of the machines connected to your ROS network, and there's the central ROS core running on one of these machines. Initially there are no connections; we have just started the ROS core. Then we add a node A, and A wants to publish some information under the name some_topic. Either later on or before — timing does not matter here — node B comes along and wants to subscribe to some_topic, so it tells the ROS core: please tell me who is publishing information under this name. It's the classical publisher-subscriber architecture, for those who have read Tanenbaum. The ROS core answers with the IP address and port of the publishing node — it's all based on TCP/IP; there are also UDP implementations, but the standard is TCP/IP. Once node B knows how to reach node A directly, it contacts it; this is all XML-RPC. From there on, node A transmits the bulk data directly to B whenever there is something new it wants to tell its subscribers about. So as you can see, the ROS core only gets involved in setting up the connections; it's not a central bottleneck in the network. Not all information is pushed through the ROS core, which is quite fortunate, because if you have a couple of Kinect cameras on your robot, for example, each one saturates a gigabit link, so you really do not want to add artificial bottlenecks. And the nice thing about this architecture is that on the same topic, any number of publishers can add information, and any number of subscribers will be informed whenever new information is available. On a technical level, this is done at the moment with unicast, so you have multiple copies on your network; but in May of next year it will also be possible — or at least that's the plan — to use IPv6 multicast, and then what you're doing here is even efficient.

Topics I already introduced: those are the names under which information is published — just a way to define where to find the information. They are built up in a hierarchical structure, as you'll see shortly. You can imagine it like URLs or your file system: your file, whatever, funny_cat_video.avi, is located in some directory, and you can refer to it under that name.
These names in ROS are not local; they are used within the whole ROS network through the ROS core. A topic is a unidirectional connection: you publish information and others subscribe to it. It's not bidirectional — you do not get feedback on who has subscribed, and you don't care; you just publish, and if there is at least someone interested, that node will get the information.

There's another inter-process communication mechanism, called services, which are your classical remote procedure calls. You call a name and pass some parameters; they are serialized over the network; the destination node does some kind of computation and answers with a reply; and afterwards this communication is over. So it's not intended for streaming data — it's more for "please reinitialize yourself", "please do this calculation for me", or "give me your current set of parameters".

The good thing about these communication mechanisms is that they're complementary. You have topics for streaming data — sensors, think of a camera, or your robot, which publishes the current position of all of its joints. The caveat is that messages may be dropped due to overload, without notice. But if you think about it: if a camera publishes a frame every 33 milliseconds, at 30 frames per second, you really want frames to be dropped rather than a queue building up when your Wi-Fi is too slow, with you falling further and further into the past and accumulating latency. That is why it was built with loss in mind. Services, on the other hand, are blocking calls: you call a service, the call is transferred to the remote node, and you are blocked in the current node until the reply is received — which is nice for certain kinds of tasks. And then there are actions, which build on the first two mechanisms and allow you to deal with long-running tasks. For example, if you tell your robot "move from here to there", you really do not want that to be a plain function call in your programming language, because it takes time for the robot to go from one point to another, and you might want feedback in between as well. With actions you tell your robot "go there", you are unblocked as soon as the request has been received by the other node, you can get intermediate feedback, and in the end you get the result — for example "OK, I reached the position", or "I collided with your cat", or whatever. All of these mechanisms are used in what I'm going to show further on.

The last terminology I have to introduce is parameters. The ROS core also acts as a parameter server. It's a bit similar to a REST API: you can just add names there with some data attached, and with various kinds of data — not only strings, but also arrays, whatever you can serialize in YAML. This is usually used for distributed parametrization of your nodes. Imagine the classical solution of configuration files: if you want to run your node on a different machine today, you would have to make sure the configuration file on that machine is up to date. With the parameter server, whoever in the network starts the node pushes the current set of parameters to the server, and all nodes, no matter which machine they're running on, use these parameters to set up their algorithms, for example.
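A minimal sketch of this pattern with rospy — the parameter names and values are made up:

```python
# Minimal sketch of the parameter server: whoever launches a node can push
# parameters; any node on any machine can read them back. Names are made up.
import rospy

rospy.init_node('param_demo')
rospy.set_param('/camera/fps', 30)                 # push a parameter
rospy.set_param('/camera/roi', [0, 0, 640, 480])   # YAML-serializable types work too
fps = rospy.get_param('/camera/fps', 15)           # read it back; 15 is a fallback
```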
OK, I talked about names. For everyone who has ever used URLs, or perhaps a Unix OS, it will be quite familiar that there are absolute paths, which begin with a slash, and relative paths, which are not anchored at the root of the system. This is quite nice, because at runtime you can tell a node: don't publish your topics at the root of the namespace, push them one level down — like moving something into a subfolder. If you use relative addresses, everything continues to work even after being pushed down. For example, if you have two webcams, you simply start your driver node twice in separate namespaces, say camera_left and camera_right, and the software does not need to know that it is currently being used together with another camera; that is just a runtime decision when you start your nodes.

A small example: here we have three nodes in our network, plus the ROS core. We have the uvc_camera node publishing images from some webcam; we have some processing node, for example edge filtering on the images; and we have our rqt GUI, which displays these images so the user can look at what the robot is currently perceiving. So we have our parameters, we have our nodes, we have our services that can be used as RPCs, and we have our topics where the streaming data moves.

Now, there are tools for all of these things I talked about. There's a command line tool to look at what is currently running in your network: if you want to find out all the nodes you have started on your various machines, you just type rosnode list on the command line and you get the list, and you can get additional information on each node. The same goes for topics, services, messages — the different data types being used — and for the parameter server. And there are GUI and 3D tools you can use to view the current state of your robot; you'll see them in a bit.

Here, just to glance over, are two code snippets, one for Python and one for C++, to publish something on the ROS network. It's quite simple: you initialize your node, you create the publisher, you fill your data type with information, and then you call publish — and you don't have to do anything else; it is immediately available. If you type rostopic list, you already see it as a new topic. And on the next slide you can write your subscriber in a few lines: you create the subscriber, tell it the name of the topic it should subscribe to and what data is being sent on it, and provide it with a callback. So this is event-driven, and the callback will be called whenever some node on the network publishes something under that topic name. And that's about all you have to know to use most parts of the ROS middleware.
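The slides themselves are not reproduced here, but a minimal rospy version of both sides might look like this; the topic name and message type are illustrative:

```python
# Minimal sketch of the publisher and subscriber described on the slides.
# These are two separate processes, possibly on different machines.

# --- publisher node (one process) ---
import rospy
from std_msgs.msg import String

rospy.init_node('talker')
pub = rospy.Publisher('some_topic', String, queue_size=10)
rate = rospy.Rate(10)                       # publish at 10 Hz
while not rospy.is_shutdown():
    pub.publish(String(data='hello'))       # now visible via `rostopic list`
    rate.sleep()

# --- subscriber node (another process) ---
import rospy
from std_msgs.msg import String

def callback(msg):                          # event-driven: called per message
    rospy.loginfo('I heard: %s', msg.data)

rospy.init_node('listener')
rospy.Subscriber('some_topic', String, callback)
rospy.spin()
```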
OK, yes — ROS has its own build system, as perhaps all large software frameworks have come up with. The good thing is that it's only a very thin wrapper around CMake, so if you have ever worked with CMake, you will be fine; otherwise, learn CMake. But don't worry, this is really no game changer: you just have to look at what is written on this slide, this slide, and this slide, and then you are in CMake world again. It takes care of resolving the dependencies between the ROS packages you are using, so that they get built in the right order — so it actually helps to work with it. Of course it's also annoying, but I've never seen a build system that's not annoying, so no change here.

OK, now I've talked about what ROS is in technical terms — so what do you need to use ROS? Do you always need an i7 PC? You might not have one on your quadcopter, or on the International Space Station — both places where ROS is currently running. No, you simply need anything that can run Debian, Linux, or preferably Ubuntu. I'm saying Ubuntu 14.04 here because that's the easiest way: you just type sudo apt-get install ros-..., and it's installed. You can of course build it from source on all other platforms, but I don't think you want to — it has a couple thousand dependencies. So what kind of machine do you need? Preferably some powerful desktop or laptop machine to get started. A good solution, if you are power constrained, is to have a small machine on board the robot, like a BeagleBone or a Raspberry Pi or an Intel NUC, and a faster machine off the robot that does the heavy processing. But you can always, as I said, run everything locally, and you don't pay a large performance penalty. It depends on the application, of course: if you have four Kinects streaming data and you want to do point cloud processing on them, you can put a desktop machine into your rack for each Kinect, because that is just computationally intensive. If you just want a mobile robot like a Roomba moving around, it's probably enough to strap a BeagleBone to it.

OK, now let's come to the part with more images, which is perhaps more interesting if you are not yet sure you want to use ROS. For all the usual kinds of camera sensors you find on a robot — mono cameras, for example a single webcam; stereo cameras, for example two webcams; and RGB-D cameras, for example a Kinect or an ASUS Xtion, however you want to spell it — you need drivers, you need to calibrate them intrinsically and extrinsically, you want to visualize what is currently happening, and you need processing so that your raw data becomes something useful for your application. I want to go through all of these steps for all three camera types very quickly. I'm not going to talk about object recognition here; there are some ROS-specific frameworks to facilitate object recognition, but basically they come down to using OpenCV and the Point Cloud Library. Still, if you want to do object recognition, it might be easier to look at what is already there in ROS — what packages have been built by others — instead of rolling your own from the outset.

So let's define a launch file to start my webcam. Launch files are just a convenient way to do deployment and orchestration. A launch file is an XML file starting with the top-level tag launch. Inside it you can add any number of node tags, where you simply tell ROS which package to load from and which binary to run; you can give the node a name and define some parameters. For example, here I set the namespace to camera, so everything is pushed down into the camera namespace — like a subfolder — and I don't have a whole mess of topics at the top-level namespace. Then you add some parameters that are, of course, specific to your node — the resolution of the camera, the frame rate, disabling autofocus, and so on — and that's about it.
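A minimal sketch of such a launch file — the package, node type, and parameter names follow the uvc_camera package as I remember it and should be treated as assumptions:

```xml
<!-- Minimal sketch of a webcam launch file; package and parameter names
     are assumptions in the spirit of the uvc_camera package. -->
<launch>
  <group ns="camera">
    <node pkg="uvc_camera" type="uvc_camera_node" name="webcam">
      <param name="device" value="/dev/video0" />
      <param name="width"  value="640" />
      <param name="height" value="480" />
      <param name="fps"    value="30" />
      <!-- autofocus and friends would be disabled with further <param> tags -->
    </node>
  </group>
</launch>
```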
Then you type roslaunch. I prepared screen capture videos for this, because live demos, as we saw in the previous talk, are always too dangerous. So let's have a look at this — I hope it works. I started my ROS core; this is all on localhost. I start the launch file you just saw. Then I run the rqt GUI — no, wait. First I want to have a look at the command line; perhaps I don't like GUIs. So let's look on the command line at which nodes I have started and which topics they created. That's all good and nice, and you can also get more information about a topic: the number of publishers, what type the topic has, and so on. But I just started a camera node, so I might want to see the camera images. That is where I start the rqt GUI. Here I select the image visualization plugin, and I get a drop-down menu with all the topics that carry sensor_msgs/Image as their type — these are the normal two-dimensional images in ROS. I can just select one and have a live view of my camera data. And this is at runtime: I did not have to recompile my camera driver node to add debug output, for example an OpenCV imshow. I can just start my GUI at runtime, and it will connect as a subscriber to my data stream and get a copy of the information, while all other nodes keep running. And this would have worked exactly the same way whether I started the camera and the rqt GUI locally, or one of them remote with the ROS core on a third computer.

Don't ask about security — ROS was not built with security in mind. So the first thing you should do when you start to use ROS is sudo iptables --flush, because otherwise you will have very weird connection problems within your ROS network. You might not want to be connected to the internet during that time.

OK, now we have a camera. Who here has ever worked with image processing? Raise your hand if you've done some kind of image processing. OK, then a short reminder. If you have a camera, you need a mapping between the pixels and the outside world. With just the image, you do not really know the relation between pixel values and, yeah, SI units: for example, you do not know how large an object five meters away is in pixels if you do not have an intrinsic calibration of the camera. Intrinsic calibration gives you, in a very simplified manner, a mapping between the outside world in terms of meters and what you get from the camera in terms of pixels. There's a utility for that. It uses OpenCV — but if you have ever written your own OpenCV camera calibration software, you know there are many weird corner cases, and it's really nice to simply have a GUI for it. I'll show you the video. Basically we have everything running from the previous screen capture. Now I additionally start this calibration node, which gives me a GUI. And on the left-hand side I update the plot of my ROS graph, which is another very nice introspection utility that comes with ROS: you can ask ROS to plot itself — to tell you which nodes are running on which machine, who is connected to whom, what amount of data is being transferred in between, and so on.
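For reference, the calibration node seen in the video is started with something like the following; the checkerboard dimensions and square size are placeholders you adapt to your own calibration target:

```
$ rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.025 \
    image:=/camera/image_raw camera:=/camera
```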
So now that I've started this calibration node and updated the graph, it appears here as a node subscribing, of course, to the camera image, and publishing to rosout. Then you do the usual thing: run around in front of your camera with a checkerboard — we don't have to watch all of that. In the end you get a screen that tells you whether you have captured enough poses of your checkerboard, which OpenCV does not tell you by default; so it's again a nice tool to have available instead of writing your own. And finally, if everything is green, you hit calibrate. It does the intrinsic calibration using OpenCV, and you get the camera matrix, the distortion coefficients, the projection matrix, and so on. For those of you who know what these are: fine. For those who don't: it doesn't matter — your camera is intrinsically calibrated and you can use that downstream. It also automatically publishes a topic called camera_info, where you can get the parameters of your camera, and this is used in all the different parts of ROS wherever you need the mapping from camera pixels to external values.

What is also written down here on the command line, with the colon-equals sign, is runtime name remapping. The camera calibrator — if you look into the source code — subscribes to a topic named image. Normally I would have to go and change this string in the code to camera/image_raw, because that is what the topic carrying the image stream is called in the ROS network on my robot. But the nice thing about ROS is that you can do these remappings at startup or even at runtime: I just tell this node, "whatever is called image in the ROS network is for you now camera/image_raw", and everything starts working. That is a really nice kind of flexibility in your software, and it's how you can write quite modular — meaning loosely coupled — nodes, because they don't have to know about each other up front. They just talk to each other using standardized formats, and the names of the topics on which they talk can be specified at orchestration time.

OK, once you have your camera calibrated, you might want to do some post-processing. For example, an industrial camera usually does not output color images; it outputs raw Bayer-pattern images, and you have to reverse the mosaic process, and so on. For all those low-level processing steps there are nodes available: you just start them, they take the raw data as input and output the processed data, and you can connect your higher-level algorithms to these data streams.

Now suppose we want to write our own processing node for ROS. We want to take an image as input, apply some OpenCV filter — in this case an edge filter — and output the result to our ROS network, perhaps to be used as input by a higher-level processing node. Furthermore, we want to be able, at runtime and without restarting anything, to change the parameters of the filter we are running on the input images; this is called dynamic reconfigure in ROS. I'll show you a video of how all of this works together.
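As a point of reference — not the actual node from the video, which was C++ and included the dynamic reconfigure part — a minimal Python sketch of the subscribe, filter, publish core might look like this; the topic names are assumptions, and the hard-coded thresholds stand in for the reconfigurable parameters:

```python
# Minimal sketch of an image-processing node: subscribe, filter, republish.
# Topic names are assumptions; the Canny thresholds would normally come
# from dynamic_reconfigure instead of being hard-coded.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def callback(msg):
    img = bridge.imgmsg_to_cv2(msg, desired_encoding='mono8')
    edges = cv2.Canny(img, 50, 150)                  # the OpenCV edge filter
    pub.publish(bridge.cv2_to_imgmsg(edges, encoding='mono8'))

rospy.init_node('edge_filter')
pub = rospy.Publisher('image_edges', Image, queue_size=1)
rospy.Subscriber('image', Image, callback)           # 'image' gets remapped at launch
rospy.spin()
```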
So I start the node I programmed. If I update my ROS graph again, it shows up as a new node subscribing to the image topic, and now a new topic is available with the edge filter applied to it. Then I can start the dynamic reconfigure plugin in the rqt GUI, where I get all the parameters I declared as dynamically reconfigurable in my algorithm, and I can play with them at runtime. And all of this is available out of the box — you don't have to sit down and write yet another library that can do something like this. It is really nice for quickly getting to something that does something interesting from the point of view of the application. All of this takes just 36 lines of C++ code, which I think is quite little for something that is network transparent, can be dynamically reconfigured, can be remapped at runtime, and so on — it might be better than linking libraries to each other to change something.

OK, so we had monoscopic cameras; now let's do stereo. Like we humans: we have two eyes, so we can process the disparity — for example, I see this microphone more to the right with my right eye than with my left eye, and from that I can triangulate the three-dimensional position of the microphone. You can do the same with robots, and you usually want to perceive the world in three dimensions in order to figure out where the robot can move, to do object detection in three dimensions, and so on. So again we write a launch file. The node this time is the stereo version of the uvc camera driver — these are all readily available — and we again add some parameters to it. This time we have two devices, a left camera and a right camera, and we define which webcam is attached to which Linux video device. Then we can start it; the node publishes two topics under those two names, and we can again start the rqt GUI and have a look at them. Then there is stereo calibration, which not only does the intrinsic calibration but also correlates what the right camera sees with what the left camera perceives at the same moment, and also handles synchronization and some other algorithmic transformations of the images. There's a calibration node for this, too, and it looks pretty much the same as for the mono camera, only with two images.

OK, once we have our stereo camera pair calibrated, we can use stereo_image_proc to reconstruct the three-dimensional world from our two cameras, and all this takes is a launch file with about five lines of XML in addition to what we wrote to start our cameras. This node also provides a dynamic reconfigure GUI to change the stereo reconstruction parameters — if you have done stereo reconstruction, you know there are a whole lot of parameters to set — and you can do that at runtime without writing a single line of code. The result is point clouds, and point clouds are a really nice format, because the PCL library is available, which does for point clouds what OpenCV did for image processing: a really large, close-to-state-of-the-art library with all kinds of algorithms that let you get, with only a few lines of code, to something applicable to what you actually want to do, instead of writing low-level "take this point, compare it with the one to the right, and do something".

OK, the last kind of cameras are the Kinect-style cameras. Unfortunately, OpenNI was bought by Apple and they closed it down, which is a bit unfortunate, because those devices, for I think 150 euros, replace devices in academic research that used to cost about 20,000 euros, and they're good enough compared to laser range finders and so on. To start such a device and have the point cloud available, you just type a single line at the command prompt — you start a launch file someone else has written for you — and then you have a point cloud you can process in your network.
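With the OpenNI 2 based drivers, that single line is something like the following; the package and argument names are from the openni2_launch package as I know it, so treat them as an assumption:

```
$ roslaunch openni2_launch openni2.launch depth_registration:=true
```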
So let's have a look at this. This is the RGB image from the ASUS Xtion, and this is the depth image, which is all quite nice — but if you get a point cloud out of something, what you really want is to look at the point cloud itself, interactively. And this is what you can do with RViz: just start RViz, add the point cloud plugin, tell it which topic to listen on and how to do the color transform, and then you have three-dimensional point clouds, introspectable by the human developer who needs to debug what's going wrong at the moment. Writing a viewer like this by hand, I can tell you, takes more time than you planned to spend on your whole hobby-robotics unload-my-printer project.

So perhaps I have a few minutes to try this live. Let's start our ROS core, let's roslaunch — I'm not sure it will work, because my laptop does not really have enough power for this, but we'll see. OK. So now I add this plugin that's specifically for point clouds, and of course it's perhaps not working — I won't spend too much time on it. Let me have one look at the driver side of things — ah, OK, I forgot to check "register depth images" — there we go. Perhaps I should look at something that's not out of range of the camera — there we go. So now we have a three-dimensional view of our wall. And this was without writing a single line of code, and you can apply all the other nifty things I talked about to it as well. (OK, that was the wrong key — obviously PowerPoint and connecting beamers to laptops is more complicated than robotics these days. Where did we leave off? I think it was here.)

And the nice thing about it is: if you have a stereo camera pair and a Kinect on your robot, which is a quite common setup, they output the same kind of data format, so you can switch between your Kinect-style device and your stereo camera pair. This is highly valuable, because Kinects are much better at reconstructing what is in the range of about half a meter to four meters, even in the dark, while stereo cameras obviously need light — like ourselves — to see anything; on the other hand, they're much better at long-distance reconstruction, or when the sun is shining into the room. I know that doesn't happen at hacker spaces, but Kinect-style cameras do not work well then. So if your middleware is set up in a way that it does not matter where the input comes from, you can switch at runtime between the two devices that feed you the data, and all your higher-level algorithms, for example collision avoidance, will continue to work the way they did.
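This device independence is visible in code, too: a consumer subscribes to a sensor_msgs/PointCloud2 topic and never learns where the cloud came from. A minimal sketch, with an assumed topic name:

```python
# Minimal sketch of a source-agnostic point-cloud consumer: it does not care
# whether 'points' comes from stereo_image_proc or from an RGB-D driver.
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

def callback(cloud):
    # Iterate over XYZ points, skipping invalid measurements.
    for x, y, z in pc2.read_points(cloud, field_names=('x', 'y', 'z'),
                                   skip_nans=True):
        pass  # higher-level processing, e.g. collision checking, goes here

rospy.init_node('cloud_consumer')
rospy.Subscriber('points', PointCloud2, callback)  # remap at launch time
rospy.spin()
```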
OK, now I was talking a whole lot about sensors, and cameras are obviously only one example of the sensors you might find on your robot. There are drivers for laser scanners, laser line scanners, distance sensors — I don't even know everything that's out there these days. There's a listing on the ROS wiki page; the link is at the end of the slides. Have a look, perhaps your device is already supported; otherwise, ask me. We implemented quite a few drivers ourselves, for example for the RGB LEDs, the WS2812s, those LEDs you find a lot of in the basement: we have a ROS driver for that, so just connect them to a BeagleBone and you can drive a display with ROS. So ROS does not only work nicely for robots. It is of course targeted towards robots, but if you are looking for a flexible middleware and you do not need hard real-time constraints — and if you don't know what hard real-time constraints are, you probably don't need them; that is, if best effort is good enough, if you can just check whether your data is being processed fast enough, and if it is, it's fine — then ROS is quite nice for a whole lot of hacker and maker projects.

OK, but now let's get into robots. If you want to model your robot, there's an XML-based description format for this called URDF. You start out with the top-level tag robot and give it a name. Then you add links: links are the rigid bodies that are connected by joints. For example, if I wanted to model my arm, I could model it as two links, with joints in between at the shoulder and the elbow. Then you give the links a visual appearance; this can be either geometric primitives or meshes. Meshes are nice because you can create them with Blender, or export them from your CAD application, and just say: here, use this mesh for this link. This way you can easily model your robot before you build it — which is quite nice, because you can explore its kinematics before you find out they are wrong — or you can model your existing robot to do all the things I'm going to show in the next, oh, ten minutes. The links are connected by joints, and there are different joint types. If you have an electrical motor, for example a servo motor, that is a revolute joint; it has joint limits, since you cannot move it endlessly in one direction, and you specify all of that in here.
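A minimal sketch of such a description — one base link, one arm link, one revolute joint; all names, dimensions, and limits are made up:

```xml
<!-- Minimal URDF sketch in the spirit of the simple robot from the talk.
     All names and numbers are illustrative. -->
<robot name="simple_arm">
  <link name="base">
    <visual>
      <geometry><cylinder radius="0.05" length="0.1"/></geometry>
    </visual>
  </link>
  <link name="upper_arm">
    <visual>
      <geometry><box size="0.05 0.05 0.3"/></geometry>
    </visual>
  </link>
  <joint name="shoulder" type="revolute">
    <parent link="base"/>
    <child link="upper_arm"/>
    <axis xyz="0 0 1"/>
    <!-- a servo cannot turn endlessly, hence the limits -->
    <limit lower="-1.57" upper="1.57" effort="10" velocity="1.0"/>
  </joint>
</robot>
```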
Once you're done with that, you might want to have a look at it, and again, the nice thing is that other people have already built software so that you can focus on what you're interested in. I wrote down, I think, about a hundred lines of description, and here is my very simple robot, consisting of three links — a base cylinder and two arm segments — connected by two joints. I can immediately move them around, without writing any code, through this GUI. This way you can also explore how you want to build your real robot, for example if you go the usual hobbyist way and build it from servo motors and aluminum profiles.

OK, now comes something really cool. If you have a robot with a couple of joints, it gets really complicated to determine which position each joint has to move to in order for the robot to reach a certain point. This is the problem of inverse kinematics, and usually you solve it by calculating something called the inverse Jacobian matrix. That already sounds like a lot of math, and it's much more than that — you might not want to get too deep into this if you just want to unload your 3D printer, right? So since, I think, the beginning of this year, there's the MoveIt setup assistant. You just feed your robot description into it; it gives you a nice GUI where you can load your robot, and it's immediately visualized. Writing such a GUI would again have taken a lot of time — I'm very grateful to the people who did it. Then you do all kinds of configuration, but it only takes about two to three minutes, and you can more or less use the default parameters; if you have a robot with multiple arms, you define which kinematic chains are planning groups, and so on. In the end you store all of this. It's very well described — this talk is obviously not a tutorial — and you can just look at the website; it's really easy to figure out.

OK, afterwards you have the kinematic description of your robot, and you can use it: for each robot you have exported from the GUI I just showed, a demo.launch file is provided, and with this you can do something amazing out of the box. You start the demo launch file, it starts RViz for you, there's your robot, and you can immediately do motion planning with it, with various state-of-the-art sampling-based motion planners. For example, I just told it to go to some valid position, and it planned a path to get there. Of course this seems trivial with such a robot, but if you have a robot with six or even seven degrees of freedom, you really can't do that in your head; it's very, very counterintuitive what these joints have to do for the robot to simply move along a straight line in our Cartesian three-dimensional space, and here you get it out of the box. And the nice thing is: this is just a dummy robot now, but if you connect your servo motors to a controller and do some low-level setup — so that you can specify which number has to be sent where for a servo to move into a given position — you can immediately use this GUI to move your real robot, to do path planning with your real robot. You can even put a Kinect next to your robot and do collision-free path planning using the MoveIt framework. It is really great that this is available nowadays: I know of many projects in academia that took about a year just to get this basic motion planning running stably and reliably on their robots, and now you get it in 10 to 20 minutes — well, a whole day if you include the debugging — but that's really, really a large improvement. And if you hate GUIs, that's fine: everything I'm showing is well separated between the GUI components and the code. So if you want to do path planning not through a GUI but from your code — because, say, you recognized an object in your image — there is of course a Python and a C++ API to tell the robot to go to a point; you don't need to go through the GUI.
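A minimal sketch of that code path, using the moveit_commander Python API; the planning group name 'arm' and the target pose are assumptions:

```python
# Minimal sketch: command a motion from code instead of the GUI.
# The group name 'arm' and the pose values are made up.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('go_to_point')
group = moveit_commander.MoveGroupCommander('arm')

target = Pose()
target.position.x = 0.3
target.position.y = 0.0
target.position.z = 0.4
target.orientation.w = 1.0

group.set_pose_target(target)
group.go(wait=True)   # plan with the configured motion planner, then execute
```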
OK, if you want to go even further and simulate your real robot — or a robot you might want to build some day — there's Gazebo, a physics-based simulator that's integrated quite tightly with ROS these days. There are a couple of details I'll leave out due to time: there are different file formats, but you can convert between them in the end. And since ROS is built the way I told you about in the beginning — with topics and dynamic renaming — there's a mechanism for ROS nodes to use not the real progress of time, what we usually call wall time, but simulated time. So if your computer is, for example, too slow to run the simulation in real time, so it can only progress at 30 percent of the speed at which real-world time progresses, all of your ROS nodes will automatically adapt to that: if you say "sleep 10 milliseconds" through the ROS API, it will sleep either 10 real milliseconds or 10 simulated milliseconds, all by itself. You don't have to recompile anything to run your nodes against the simulation or on your real robot. And of course you can do all kinds of software engineering — test-driven development, continuous integration; there are people who have integrated this with a Jenkins server, and so on. You can do distributed development in your hacker space, because perhaps not everyone has the same robot available: you can still develop algorithms, test them on the simulated robot, and then, once you're back in the hacker space, test them on the real one — staying close to the emergency button in that moment.

OK, let's assume we attached a webcam and an RGB-D camera to the robot we modeled earlier and converted the formats into each other. Now I've started the Gazebo simulator and added this robot. There's a physics engine in the background, so you can throw objects around, and if they hit each other, what happens is more or less physically plausible — they're using different game physics engines like ODE and Bullet in the background. So I could flip over this bookshelf if I hit it hard enough in the simulation; you can really do some kinds of advanced manipulation. And, as I said, we just added a simulated camera, and you get images from the simulated camera in the same manner you get them from the real camera. This gives you the opportunity to do what I have started to call robot unit testing: you can take your software one-to-one as you use it on your real robot and run it against a simulated robot within a simulated environment, and thereby ensure that when you change the code, nothing breaks — not only on the level of "it still compiles", but on the level of "the robot still does the correct thing". So, OK, here's the simulated point cloud.

Again, I have to skip ahead. Debugging tools: I showed you the command line tools, a tiny bit of the rqt GUI plugins, and the RViz 3D visualization. Now we are at the outlook. I talked very, very briefly about Gazebo and MoveIt — simulation and motion planning. There's a whole framework for low-level control — for example control loops, PID control, and so on — called ros_control, that is already integrated with ROS: if you use ros_control on your microcontroller to make your motor hold the position you command, it can automatically interface with MoveIt, with the visualizations, and so on. This is really nice, so perhaps have a look at it in the beginning and then start implementing, instead of the other way around. And, yeah, of course OpenCV and the PCL are each worth a talk of their own. There's also the possibility of transparently using shared memory within the same host, instead of the local loopback, for efficiency reasons; there's GPU processing; and you can do SLAM — simultaneous localization and mapping — which means you put your robot somewhere, it drives around, and while it drives around it builds a map and localizes itself within the map it has just built. This has been a really huge scientific research question for the last ten years, and now we have open source software that does it reliably for indoor scenarios. Basically, you take your Roomba, stick a Kinect on it, configure the navigation stack, and you have SLAM — something many, many PhD students worked hard on for the last decade. And of course you can integrate beamers for augmented reality, like projecting something onto the robot.

OK, now I'm really out of time. There's KnowRob: if you want to do knowledge processing, artificial intelligence on ontologies, all kinds of world perception and world understanding with ROS, the people from TUM have worked on that for quite a while. Have a look at these two pages for what people are already using ROS for. ROS 2.0 is coming up next year; I hope it will be great. The negative thing about it is that it will deprecate some things, and as you know, if something gets deprecated, the person who once wrote it might not be there anymore to fix it.
So there might be some breakage, but they're basing ROS 2 on a much more solid middleware. Everything will stay the same from an API point of view, from a user point of view, but at the lowest level they will be using DDS, the Data Distribution Service, where you can add real-time guarantees, channel resources, and all other kinds of things. And the transparency between network-based communication of nodes and shared-memory-based communication of nodes will be complete: you don't have to know whether your nodes run on the same host — if they do, it will use shared memory; if you run them on multiple hosts, it will do network serialization. So this will be great. And if some of you start working with ROS and help make it great, that would be even better.

With that, I thank you very much for your attention. I hope someone will read the line at the very bottom: if you use ROS for any of your hobby projects or in your hacker space, plan to use it, or just got interested — find me at the congress and talk to me. I'm really interested to hear what other people are using ROS for. Or just send me an email if you don't catch me here. Thank you.

OK, thank you, Andreas. We still have maybe one or two minutes left for a few quick questions; if anyone has a question, just come to one of the microphones. OK, that doesn't seem to be the case — then thank you so much, Andreas, and thank you all for listening and attending.