My name is Melissa and I'm a software engineer at Microsoft. So first up today, we have Marco Gutierrez, who is a developer at Glidex. Marco will be sharing with us the Learnbot 2.0, which is a do-it-yourself robot meant to teach robotics and help students around the world take their first steps in the field. So without further ado, I'll hand it over to Marco. Thank you. So yeah, my name is Marco. I'm here to present the Learnbot, which is actually work I was doing with my university back in Spain while I was doing my PhD. I just finished my PhD and I'm now part of the Glidex team, where we work in a different field, but this is a project that we have there. We use our own open-source robotics framework, and we were working on this. There's still work being done on this robot. We call it the second version, the 2.0, although there have been more versions, and it's still being improved right now. So I will explain a bit what the purpose of this little robot is, what the design is, how we got to this point, and the software that we run on it, which is our main contribution here: LearnBlock, a new tool that we just developed for this 2.0 Learnbot version to make the robot easier to use. As it's meant for learning, we developed this tool so students are able to program without having to know any specific language. It's a block-based programming platform, basically like a Scratch kind of thing. And then I'll show some examples of usage. So this is actually not how the Learnbot looks. This is more like my development version, with no external covers, so I can attach or change whatever I need at any point. This is the actual look of the robot right now.
And the purpose of this robot, as I said, is to build an open platform meant for students, mainly high school students without a lot of knowledge in the field, to help them reinforce different concepts, learn programming, and have fun while doing it. We made it easy to build, so if the students want to, they can actually build the robot themselves. And we selected the different parts so it's cheaper to use and cheaper to make, and all the pieces are 3D printed, so you can actually do it yourself at home. And then the design — wait, what happened here? Okay, something's going off. Oh, I think this is not working. Yes. Right, that should be good now. Alright, regarding the design. This is actually the fourth version of the robot, if we count everything. This is the first one that we did. It was basically an Odroid — which is like a Raspberry Pi from a Korean company — and we were using a Kinect on top of it. But we realized that the price of the Kinect is really high compared to the rest of the robot, and we decided that if you have a camera and then some kind of laser or sonar thing to avoid obstacles, you can pretty much do a lot of stuff with that. So we removed the PrimeSense device from the first version. This is the second prototype version, all messy and stuff. And this is what we consider the first proper version, which is the one I presented last year here. There were a few problems with this version. The design didn't allow us to have a motor here to move the camera. The sonars weren't working very well — the information is okay, but it's a bit limited. So we decided to make a new version. It's not perfect, it's far from perfect, we have some problems with it, and we hope to fix them along the year.
But it's got a screen and it's got some laser sensors attached here so it can actually avoid obstacles. So what's inside? We actually use a Raspberry Pi. We used to use the Odroid, but the new Raspberry Pi is quite good, especially because you get Wi-Fi built in, so you don't have to carry anything extra around. Then you get the camera interface that we use, and the GPIO that we use to control the motors and that kind of thing. So yeah, we just install Raspbian here, and then on top of that we put our robotics framework, which we can use to manage the robot. For the wheels, we designed a basic differential base. It's basically just one of these do-it-yourself motor kits, with regular motors and wheels that you put on the sides. And then we use this Pololu driver that is meant for the Raspberry Pi. You just connect it to the GPIO and then you can control the motors very easily. And then there are two ball casters here that help the robot rotate: when you move the wheels at different speeds, it will turn. More things: sensors. The ultrasonic sensors, we removed. The PrimeSense sensor, we also removed because of the price. And now we have a camera. It used to be a USB webcam that we stripped down, but now we have the actual Raspberry Pi camera here. Not on this one, but on the real one we have laser distance sensors, so it can avoid obstacles. And then we added a 3.5-inch LCD touchscreen to the robot, which we use as a face, so it can express different kinds of emotions here. As I said, it's fully 3D printed. All the pieces, even the bottom and everything, are modeled in 3D. You can find the models on the website, so anyone can download them and print them at home. And this is actually how I power mine.
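The differential base works by driving the two wheels at different speeds, as described above. Here is a minimal sketch of that idea — the function name, units, and `wheel_base` parameter are illustrative, not taken from the Learnbot's actual motor component:

```python
def wheel_speeds(v, w, wheel_base):
    """Convert a body command into per-wheel speeds for a differential base.

    v: forward speed (m/s), w: turn rate (rad/s, counter-clockwise),
    wheel_base: distance between the wheels (m).
    Returns (left, right) wheel surface speeds in m/s.
    """
    left = v - w * wheel_base / 2.0
    right = v + w * wheel_base / 2.0
    return left, right

# Equal speeds drive straight; equal and opposite speeds spin in place,
# which is what the ball casters make possible.
```

For example, `wheel_speeds(0.2, 0.0, 0.1)` drives straight with both wheels at 0.2 m/s, while `wheel_speeds(0.0, 2.0, 0.1)` turns in place with the wheels at opposite speeds.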
But in the real one we usually use an actual battery, and they actually built the circuits for the charging part and everything. Myself, I just go with the power bank because it's easier for me to use. That's actually something we're not so sure about — we might change it in the future, because it's a bit of a pain. Both options have their drawbacks and pros, so we're still discussing it. So yeah, this is basically it: the Raspberry Pi, you connect the camera to the camera port here, and then you have the driver that gets connected to the GPIO. The driver handles the different motors of the differential base, and everything is powered by the power bank here — you have to feed some power here and to the driver. So now we get to the software that we run inside, which is the RoboComp framework. If anyone has some robotics knowledge, it's pretty much like any other framework around. We have differences with them — the main one being ROS — but we have our own, this RoboComp thing, that we started way before ROS became popular, and we decided to keep developing it. RoboComp is actually powering a lot of different robots and we have a lot of components. For those of you that don't know, in robotics we use what is called component-oriented programming. We have different components, and for a camera, for example, you would run a component that grabs the image, and what you do is connect these components to each other. For example, if you have a face recognition application, you would have a component running, grabbing images from the camera, and then you would have a component that does the face recognition. You connect the face recognition component to the camera to grab the images, and then you produce whatever output — and someone might use that recognition for something else, like another component. That's how you create a network of components, and that makes the whole system more robust to failure.
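The camera-plus-face-recognition example above can be sketched in plain Python. This is just an illustration of the wiring idea, with made-up class and method names; in the real framework the connection between components goes through the middleware rather than a direct object reference:

```python
class CameraComponent:
    """Stand-in for a component that grabs frames from a camera."""
    def __init__(self):
        self._frame_id = 0

    def getImage(self):
        # A real component would return pixel data from the device.
        self._frame_id += 1
        return {"id": self._frame_id, "pixels": b"\x00" * 16}


class FaceRecognitionComponent:
    """Stand-in for a component that polls the camera it is connected to."""
    def __init__(self, camera):
        self.camera = camera  # the "connection" between the two components

    def step(self):
        image = self.camera.getImage()
        # A real component would run a detector here; we fake an empty result.
        return {"image_id": image["id"], "faces": []}


# Wiring the network: recognition pulls images from the camera, and its
# output could in turn be consumed by yet another component.
camera = CameraComponent()
recognizer = FaceRecognitionComponent(camera)
```

Because each component only knows the interface of its neighbor, one failing component can be restarted or replaced without tearing down the rest of the network.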
So RoboComp is an open-source robotics framework. As I said, it uses component-oriented programming, which is pretty much what everyone does in robotics nowadays. We use the Ice middleware from ZeroC, which is a basic middleware that takes care of all the networking and the communication between the components. We define different interfaces that Ice will connect. So you just have to tell the components where to connect, and then you can basically call a function that lives in a different component, get that information, and use it in your own component. As a component-oriented framework, we support different languages. We currently support mainly C++ and Python — we've done some Java for Android and things like that — and we have domain-specific languages, too. We have these DSL-based tools that allow you to manage components. Each component has different parts: a generic part and a specific part. You can generate the generic part using these tools, so you don't actually have to code the whole networking part and the Ice connections and all that. You basically focus on whatever function you need your component to do. For that, we have these tools. We have some other tools meant for robotics in the framework: we have a 3D simulator, so you can run your simulations there with models of different robots — we mainly have the models of our robots, but you can actually build your own. We have some tools for testing components, so you can check if the camera is giving you an image, or what the robot is getting out of the laser, things like that. And also for recording and replaying behavior, which is another cool tool. You can actually record the outputs of different components. So let's say I have this robot that is running four components.
I can just record the output of those and then replay it on my computer, so I can test later without having the robot. If I'm here today and I need to do some tests, I can record the environment, get that recording, and actually use it later for testing with the same data. This is an example of a robot — actually the robot I was using for my PhD. It's an omnidirectional robot with a manipulator. And this is an example of one of the tools that we have, a component management tool. These are the different components that we have running to make this robot grab the can of chips. You can see it gets pretty messy, and it's really, really helpful to have this kind of tool to manage the whole thing, because otherwise it would be a pain. It's still a pain with the configurations and everything — we haven't figured out how to make that easy to use. So this is a bit of a look at what you get if you actually download RoboComp and put it on your computer. This is the main repository for the framework. It gets you the core, and then we have different repositories for different component sets. This would be the set of components related to the Learnbot. Basically you get these directories, and then you put whatever components you want to use into the components directory. So you have a components directory with the components, and you have the interfaces here, which are what Ice uses to see what functions each component is offering to the others. We have different files like 3D models, textures, images, and such here. We also have classes and libraries to help development — we have the InnerModel library for robotics, and a library for displaying stuff, different kinds of libraries. The tools that I mentioned are also there. Some documentation; and we support Debian packaging.
You can generate Debian packages to install it on your computer, and we used to have scripts for the compilation of the whole thing. Basic requirements are things like Python — we use a lot of Python — and C++, which I didn't put there but is also a requirement. You need to install RoboComp on every device where you're going to run components. So let's say right now I'm running some components here and some components on my laptop: I need to install it on both of them. You need Ice, which is the middleware, and you need some kind of Linux distribution. It doesn't have to be Debian-based, honestly, but it's easier if it is, because that's the main one we have been using — although right now I'm not using Debian — and it will be easier if you follow the documentation. There's also something you need if you want to run LearnBlock: a library that you have to install that comes in the repository. Then you need some other tools and C++ libraries. So this is an example of the component network that we would run for a test here with the Learnbot. All these components will be running on the Learnbot, and then this component will be running on my computer. The Learnbot sets up a Wi-Fi access point, we connect to that Wi-Fi, and that's how we get the information from the different components. We run a component getting images from the camera, one for the servos — the servo for the camera — the base component to manage the differential base, then we have a component for the display, and then we have an emotional component: if you call this component with the emotion that you want to express, it will send the display the information to show on the screen. If you had some other way of expressing emotions, you could connect that too — if you had an arm or something, you could connect this one to the arm.
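The emotional component described above essentially fans one command out to whatever effectors are connected. Here is a minimal sketch of that decoupling, with invented names — in the real robot the display connection goes over the component network, not a Python callback:

```python
class EmotionComponent:
    """Takes an emotion name and forwards it to every connected effector."""
    def __init__(self):
        self.effectors = []

    def connect(self, effector):
        # Adding an effector is all it takes to react to emotions:
        # a display, an arm, or anything else.
        self.effectors.append(effector)

    def express(self, emotion):
        for effector in self.effectors:
            effector(emotion)


# A fake "display" effector that records what it would draw.
shown = []
emotions = EmotionComponent()
emotions.connect(lambda e: shown.append("display draws a '" + e + "' face"))
emotions.express("joy")
```

Connecting an arm would just mean adding a second effector; the caller saying "I'm happy" never has to know which devices end up reacting.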
So if you say "I'm happy," you can actually trigger different actions, right? Then on my computer — before, we used to have whatever program you wanted to run and then do whatever you wanted with these components, but right now, with the new tool, we use LearnBlock, and that takes care of connecting to the different components and managing the information on the client side. This is a basic example of how it looks using the blocks. This is a main program that will make the robot express joy. Now let's have a look at the tool, LearnBlock. As I said, it's pretty close to Scratch. We basically have different blocks here that you can use, classified here, and you just have to put them onto the main screen and fit them together to make your main program. You have your main block, you put whatever functions you want into the main, you can create your own functions, and then you can just click here and run it on your computer and on the robot. You're supposed to be able to run it in the simulator too — I don't think that works right now because we're still working on it, but this is another option, so if you don't have the physical robot at some point, you can just run it in the simulator. And you can save whatever you're coding and then bring it back. I'll show a bit of an example later so we can see how it's done. So yeah, this is a bit of an overview of the blocks. These are the control blocks: there's main, if, for, else if, while — basic control. Some expressions, so the robot can show different kinds of feelings: you get anger, sadness. Then blocks to control the different motors so you can move around: go straight, turn right, turn left, slow down, stop, and for the camera, up and down. For perception — the main test that we used was following lines.
We started doing some black-line following, as most people do with learning robots, small robots. In the latest version we actually had some problems with the shadow of the robot, so we switched to a red line. We have different functions to get images, to stop the robot; you can detect obstacles at different distances; you can get distances — that's actually something I can't demo right now because I haven't mounted the laser sensor, I haven't had time to put it on, but it is in the actual version. There are also blocks to get the movement of the robot. Some regular operations: sum, multiply, divide, and, or, true, false, that kind of thing. Some other stuff: you can create your own variables. If you create a variable, you get these four blocks, which are for different kinds of use: if you want to attach the variable to something — say, I want something to be equal to this variable, or I want this variable to be equal to something — if you want to put the variable in the middle of two things, or if you want to set it to some value. Functions: if you create your own function, you get this kind of block, and you just put whatever instructions you want your function to execute inside it. And then this one is the call to the function, so you can put it in your main or in some other function that will call it. And then some other stuff like print, or wait, to sleep for a while. So how do we use this? This is how it looked in the previous version — the same line-following kind of thing. It might not be hard for someone that knows how to code, but for high school students it depends on the level, right? And this is how it looks now. This is basically a follow-the-red-line kind of program.
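A line-following program like this one roughly translates to Python along these lines. This is a hand-written sketch, not LearnBlock's actual generated code; `RecordingRobot` and the method names (`go_straight`, `express`, and so on) are hypothetical stand-ins for what the motor and expression blocks do, and the specific emotion pairings are just an example:

```python
class RecordingRobot:
    """Fake robot that logs the commands it receives, for testing offline."""
    def __init__(self):
        self.log = []

    def __getattr__(self, name):
        # Any method call becomes a logged command.
        return lambda *args: self.log.append((name,) + args)


def control_step(robot, line_position):
    """One iteration of the while-true loop: steer toward the red line
    and show a matching expression."""
    if line_position == "center":
        robot.go_straight()
        robot.express("joy")
    elif line_position == "left":
        robot.turn_left()
        robot.express("surprise")
    elif line_position == "right":
        robot.turn_right()
        robot.express("scared")
    else:  # line lost
        robot.slow_down()
        robot.express("anger")
```

On the real robot this would run inside a while-true block, polling the camera each iteration for where the line is.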
So now I'm going to try to show a small demo. I was actually testing it before, but since we're following a red line and this room is pretty much all red, it's not being very helpful. But yeah. So basically I power up the Pi and it should show up soon, and then it will set up an access point so I can connect to the Raspberry Pi. I actually bought this wireless keyboard — it's pretty handy for that. Yeah, so it's booted there. So now, I don't know if you can see, I can connect to the Raspberry Pi Wi-Fi. Now it's connected, and now I should have access. Let me make this bigger. Yeah, so now I just SSH in and I get access to the Raspberry Pi. We actually have a script that you can run — you can even set it to run at boot, so you don't have to run it manually, but we still run it manually because we're doing development and all that kind of stuff. If you run this script, it will start every component. But I'll run them manually so we have some actual control over each one. So I'm going to run the display component first, which is the one that takes control over whatever is shown on the display. Uh-huh. Oh no, I'm not in — I need to SSH to the Pi, with the pi user. So here I tell it to use the local display, and now, yeah, I should be able to run the display component. As you can see, the display is now blank: it took control over it. I don't think this next one is a requirement, but I was getting some error because we used to have ultrasound sensors, so I'll just run the ultrasound component even though there's no information for it. And this is the emotional component that takes care of the emotions. Everything I'm doing manually right now, you can actually do with the tool we have for component management — I'm just doing it this way so we can see each component. So this is the base component; it controls the differential base.
And then we have the joint motor component for — let me make this bigger — the motor of the camera, the servo. And then we have the camera. And actually, I have to be honest here, I lied a bit, because right now we are not actually using a RoboComp component for the camera. The student working on this found out that the component we were using was only giving us a few frames per second, and we figured out that using mjpg-streamer was going to be faster. So right now we're just using that; it basically streams the image of the camera over — I think it's over TCP. Yeah, I need to run this. Something wrong with the camera — the video device failed. We didn't load the module, right? Yeah, that was it. So now we've loaded the module, and you can actually see here the streaming of the camera — this is how that library works: you basically get the stream of the camera over the network. And now I'm going to run the LearnBlock tool. So this is how it looks. It's in Spanish; I'll switch to English. There you go. So let's say I want to build a program: I'm going to make a while-true — where is it? Operators, true. That's a while-true. And I'm going to make my robot express joy. You get that, right? Then if you have the loop running, you should see — yeah, this component is saying that it's getting the express-joy action. Then you can change it: okay, I don't want this one anymore, I want it to be disgust. Yeah, you can change it to whatever. Surprise. Yeah. So now I'm going to try to build this red-line-following application. It probably won't work properly because there's too much red all around, but let's see how it goes. I'm going to again make a while-true here, and now I'm going to attach this if inside the while. And I'm going to do...
If there's a red line in the center, I'm going to go straight, and I'm going to add something here — maybe I'm going to make my robot happy. And if not, if there's a line that goes off to the side: if there's a left red line, I'm going to move to the left — there you go, moving left — and I'm going to make it surprised. Then I'm going to add another one for the right case, right here: if there's a right red line, I will move right, and I'm going to make my robot scared. Now finally, if I don't see anything — do we have scared? Oh, we already used that one; let me put something else. Angry. It's going to be angry, and I'm going to slow the robot down. Right? Yeah. So this is how it looks. Now I'm going to try to run it. It's probably going to get a bit crazy — I put some tape here, but this room is all red, so the camera sees red everywhere. But yeah. If I press start here, on the physical robot, then... wait, something's happening. You can see the display. Ultrasounds. Joy. Okay, something didn't connect, so I'm just going to stop it, exit here, and rerun. I'm going to load it, put it in English, and then if I start... Yeah, I think it's moving — but it's not. For some reason it's not seeing the line. It gets stuck here. That's actually a bit of a shame, because I was supposed to buy these Pololu ball casters, but I couldn't, so I just printed some. But you can see, if I put it on the red line on one side, it will move one wheel. Then if I put it in the middle, it will move both wheels. And if I move it to the other side, it will... huh? No expression? Oh, yeah — because this one doesn't have an expression assigned. If I put some expression there, it should be changing at some point. I don't know if it's detecting...
No, it's not detecting the central line. So that's it — now it's happy, because it saw the line there. Yeah. So I'm just going to stop it and run this. So that's more or less how you manage this robot. And we also have — I'm going to show you, it's probably not working, but — where is it? The simulator. This is the regular simulator we use for every robot. And this... it cannot open that; I don't know why it's not there, then. The simulator. I did run it before. Anyway, I'll try to run it later if you want — you can come by and I can show you how it works. This one is not there. What is here? Learnbot. I can probably load this world. Oh, this is the robot. So this is basically the small version of the Learnbot. It's all white, so you can't see much. Let me see what else we have here. Yeah, this is a world — an empty world with the Learnbot here and an obstacle. This is the 3D version of the Learnbot, and this is the box. So you can run whatever you want to run on it. It actually offers the same interfaces as the components that we have on the real robot, so you can run your program either on this or on the real robot. So it's actually easy to understand. Yeah, I have like five minutes more, so I'll finish up. I'm going to go through the previous to-do list that I showed in my last presentation here. We wanted to do some Scratch-to-Python kind of thing, so it would be easier for students. We actually did some work along this line — there's a tool there; I don't know how stable it is, because I haven't tried it, but you can give it a try if you want. And we actually built LearnBlock, which is pretty easy to use, and we think it's a good thing for students.
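As mentioned during the demo, the camera is currently streamed with mjpg-streamer instead of a RoboComp component. A client reading such a stream only has to find JPEG frame boundaries in the incoming bytes: every JPEG begins with the SOI marker `ff d8` and ends with the EOI marker `ff d9`. This is a minimal sketch of that idea — `extract_jpeg` is an illustrative name, and a real client would of course be feeding it bytes read from the network socket:

```python
def extract_jpeg(buf):
    """Pull the first complete JPEG frame out of a byte buffer.

    Returns (frame, remaining_bytes); frame is None if no complete
    JPEG has arrived yet, so the caller keeps reading the stream.
    """
    start = buf.find(b"\xff\xd8")            # SOI: start of image
    if start == -1:
        return None, buf
    end = buf.find(b"\xff\xd9", start + 2)   # EOI: end of image
    if end == -1:
        return None, buf
    return buf[start:end + 2], buf[end + 2:]
```

In a loop, you append each chunk received from the stream to the buffer, call `extract_jpeg`, and decode whatever complete frame comes out.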
We also added some more sensors — we have the lasers. We wanted to integrate PrimeSense, but because of the cost we haven't done it, and I don't think we will, because we can do most of what it does with the lasers. Design: it looks pretty cool right now. We're still working on it; we're not happy with the current design, so we might want to change it again, even though we already changed it. People are welcome to do their own designs and submit them. We can have different designs — that's actually pretty cool, since it's all 3D printed, right? So we can have different designs and you can just build your own, whatever way you want. Easy-to-plug sensors: that's something we always wanted. I actually think this would be really cool — some way to plug and play different sensors, different modules. Like, I want to use a laser for this demo, and then I want to use different sensors for these other ones. And we want to build new applications, because we always do the follow-a-line thing. We added some obstacles now — you can drive around and detect obstacles and play with that — but it's always cool to have new applications. So we still have a lot of things to do. There's a lot of bug fixing and improvement to be done; to be honest, it's not the best in the world, so there's a lot of work there. We want to add more sensors — maybe add the PrimeSense, we don't know about that. A more efficient external design: this one is not very good for the camera. The camera is here, so we had some problems with the shadow of the robot's case. Make the sensors easy to plug, as I said. And have new applications. One of the ideas that we have now is to do some emotion recognition with the robot, and some responses the robot can give to the emotions it recognizes — they're doing some research on that. So we're actually in the Summer of Code this year.
We've been in the Summer of Code for, I think, five years. There are a lot of ideas there that will go toward the Learnbot, so if anyone is interested, just give it a try, have a look. And yeah, that's pretty much it. Thank you very much. If you have any questions, come up. How does RoboComp compare with ROS? I would say the main feature that RoboComp has that ROS doesn't have is the domain-specific languages. I've used ROS a couple of times, but I'm pretty sure that when you write a component you have to write everything from scratch. What RoboComp does — we have different domain-specific languages, but for component design we have one — is that you write a recipe for the structure, and that gets converted into either C++ or Python. So you don't have to build the whole thing from scratch; you only have to fill in whatever functions your interface is going to offer to the other components. That's the main feature. There's different stuff, too: ROS is more topic-oriented — publish and subscribe — even though you can do some function calling as well, and we actually have both things too, but we usually use function calls more. So if you want to get, say, the camera, you have to poll the camera and get the image each time; it's not like a topic. There are advantages and disadvantages to both ways of doing things. You do get more stuff with ROS, that's true — yeah, it's more popular, right? But yeah. We actually have ROS support: you can connect components from RoboComp with ROS. I think that's actually one of the ideas, because we have it, but it's not integrated into the domain-specific language. So one of the ideas this year...
is to integrate that into the domain-specific language. So if you specify that you want a component to connect to a ROS component for some reason, it will generate whatever code you need to connect there, so you can just access the data. Yep. You just initialize all the components on the Raspberry Pi, and the main — you run that from your PC, right? Yeah, well, the main component — whatever it's doing — is just running here. The slide that I showed with the component network — wait, where is it? Can I skip to it? Yeah. So this is the main one. Oh yeah, but all the subcomponents are on the... Yeah, all these are running on the Pi. That's why, when I was going from one shell to the other, I was basically starting these different components. The connection is over Wi-Fi — yeah, this is all Wi-Fi. But I could also put my main on there, right? You could do it, but the good thing about components is that you don't have to, right? Because let's say — we don't have this in the LearnBlock thing, but say you want to do recognition — maybe you don't want to run the algorithm on the Pi, right? Like, say you want to run some deep neural network. But for the autonomous version, I don't have a PC, so I cannot do complicated stuff, right? Well, I don't think robotics is supposed to work that way — you have wireless communication, why would you want to do it? No, in school — what you showed is what we practice, but in some situations we have to upload everything onto the robot at school. Oh, because you're doing some contests or something, right? Yeah, yeah. Oh, I see what you mean — so you lose the connection, for the autonomous, independent... I see what you mean. Yeah, you can do it. You can do it anyway, but it's probably...
so they make sure that you're not cheating or something, right? No remote controlling or something. Yeah. I went to some robotics contests where what they do is they let you use a computer, you press something on the computer — like, you press start — and then you have to let it run. Okay. Thank you. It's actually pretty cheap. I would say it's less than a hundred — but I mean, the Pi is like 25. Oh, really? Okay, I'm talking USD then — it should be, because I actually bought it in euros. That's including the Pi. The camera of the Pi is actually expensive; I mean, it's like five or something USD, so you can actually buy a cheaper one — we actually did that with the previous version. And then you just need the power bank; the display is like $20 more. So... yeah, $150, $200, something like that. Then there's this, which is like a $10 thing on AliExpress. It's actually pretty handy: if you mess something up and the Wi-Fi is not working, you can go into the tiny screen and do some fixing. Yeah.