So, hello everyone, please take a seat. My name is Marco, I'm a Ph.D. student from the robotics lab at the University of Extremadura in Spain, and I'll be talking about the LearnBot, which is an open-source robot meant for education, this guy right here. I'll basically talk about the purpose of building a robot like this, what the design is, what software runs on it, and what software we run outside of it, on our computer, to use the robot.

So why would we build a small robot for learning? We wanted to build an open robotics platform that can be used by students. The purpose is to help them reinforce concepts, learn how to program, and even have fun learning and making new stuff. For this, there were a few challenges we had to face. It had to be easy to build and use, because we want people to be able to create their own robot. And of course the parts have to be easy to get, and they have to be fairly cheap. Everything in robotics gets quite expensive, so we looked for specific parts that are cheap and accessible to everybody.

I'll get into the details of the parts that make this robot work. First, the brain of the robot: we use these ODROIDs, which are basically small computers. It's like a Raspberry Pi, a Korean Raspberry Pi, with a bit more power than a regular one. It's got an ARM processor at about 1.5 GHz, one gigabyte of RAM, Ethernet, GPIOs that we use, and USB ports that we use to connect and manage the sensors. Basically, we install an operating system, we use Debian, and then we install a robotics framework on top. That's the main core that handles the robot.

Then on the bottom of the robot, if you look at it, there you go, there's the differential base. For those of you who are not familiar with robotics, it's a base with two wheels. If you want to go forward, you move the two wheels at the same positive speed; if you want to go backwards, negative. If you want to turn, you move one positive and one negative, so one forward and one backward, and the other way around to turn the other way. The parts of this differential base are just one of those do-it-yourself motor and wheel sets that you can find easily on AliExpress or DealExtreme, one of those Chinese websites. We use a Pololu driver to connect the motors; it's meant for the Raspberry Pi, but it works perfectly with the ODROID. And then we have these ball casters, which are small free-rolling wheels that stabilize the base and allow it to turn left and right.

For the sensors, we have a small camera. We looked into cameras and found out that just buying a regular webcam was the cheapest way to get one. So we bought a USB webcam that you can find on any Chinese website for a very low price, took the plastic off, and mounted it on the robot. Then we have some ultrasonic sensors, which are actually very cheap. You have them here, you can have a look at them later if you want; I brought the parts so you can have a look. And then we also have the PrimeSense sensor. It's like a Kinect, but a very small one; we use those in robotics. It's not on this version of the robot, but there's another version that we built with that sensor, and we have the components running perfectly on the ODROID, so you can use it.
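To make the differential base idea concrete, here is a minimal sketch of the kinematics; the wheel separation and the units are made-up illustrative values, not the LearnBot's actual parameters:

```python
# Differential drive in a nutshell: equal wheel speeds drive the base straight,
# opposite speeds spin it in place. L is the distance between the two wheels.
def differential_drive(v_left, v_right, L=0.15):
    linear = (v_right + v_left) / 2.0   # both positive -> forward, both negative -> backward
    angular = (v_right - v_left) / L    # one positive, one negative -> turn in place
    return linear, angular
```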
With it you can get an image, but you can also get depth. It's the same kind of device as the Kinect, just smaller. That increases the price a lot, because those are like 200 or something like that, so the basic version just comes with the camera, of course. Here you have the three versions that we did. This is the first version, with the PrimeSense here. The second version had very messy wiring; it was very hard to debug, the wiring kept getting disconnected and things stopped working. So we built these PCBs, which help a lot and keep it clean: we just place the PCB inside, do the wiring on it, and it gets that clean look. The outside is fully 3D printed. We have the models on the website, but if not, just drop me an email and I'll send you the models if you want. So we encourage everybody who has some design skills, which we don't have much of in the lab, actually, to improve the design with new cases if you want. And then it's all powered by one of these regular cell phone power banks connected to the ODROID. So it's all pretty simple. And this is how it's all connected: you have the wheels here on the base, connected to the driver, connected to the main ODROID, which is powered by the power bank, and the sensors and the camera are connected over USB. That's basically the hardware side.

On the software side, what runs inside is RoboComp. That's our robotics framework. We use it on all these robots; there are tons of robots, even more than are shown here, and they all run RoboComp and reuse the same components. So if this robot has a differential base, it runs the same component as that one, and as any other robot that also has a differential base. That way we can reuse the code.

So what is RoboComp, actually? It's a robotics framework. It uses component-oriented programming, which is the main style of programming we use in robotics. The way that works is that we run a component for each of the specific needs the robot has. Let's say we have a camera and we want to get the image from it and then do some processing on it. We'll have one component that grabs the image from the camera, and another component that does the processing. I'll just draw something here. This will be the camera component, and then there is the image processing component. The camera component accesses the hardware, the actual camera. The processing component talks to it through an interface, asking for the image through an RPC, and the camera component sends the image back so the processing component can do its work. Why do we do this? We can reuse the camera component on every single robot that has a camera, and we can also reuse the image processing component on every robot where we need to process images. On top of that, we can run the camera component on the robot itself and run the heavy load on our computer, which has more power than one of these boards. That's why we use component-oriented programming. For the communications, the ICE middleware is used within the framework. We have domain-specific language-based tools to manage these components, which generate the generic parts, so you only have to write the specific part of your component.
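Just to make that picture concrete, here is a rough sketch in Python of how one component can ask another for an image over an ICE RPC, assuming the ZeroC Ice Python bindings. The module, interface, method, and endpoint names here (Example, CameraSimple, getImage, the address and port) are made up for illustration and are not the actual RoboComp interfaces:

```python
import Ice
import Example  # hypothetical module generated from a Slice interface definition

with Ice.initialize() as communicator:
    # The camera component is assumed to be listening on this (made-up) endpoint.
    proxy = communicator.stringToProxy("camerasimple:tcp -h 192.168.1.10 -p 10005")
    camera = Example.CameraSimplePrx.checkedCast(proxy)

    # RPC call: the processing component asks the camera component for a frame,
    # then works on it locally (this is where the heavy processing would run).
    frame = camera.getImage()
```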
We also have some other tools for robotics, like simulation, and tools for testing components, for example checking whether the camera is working, stuff like that. There's also recording and replaying of behavior; it's very important to record a component's behavior for future testing. This is actually the work I was recently doing with the robot: I was trying to get a spray bottle from the table, and this is how it looks with the network of components running. All those components are needed so I can ask the robot to get me the bottle from the table. It gets pretty complex, but it's kind of simple here, as you will see.

This is the layout of the framework. We have the components; we have the interfaces that help us connect the components; we have a few files for the models, deep learning models; we have some other assets, like the 3D models we offer for the simulations; we have classes and libraries, which is reusable code that you can use in the components for maths and other things; the tools that help us develop and, especially, test, because there's a lot of testing in robotics; documentation; Debian packaging, to help install the framework; and CMake for compiling.

So this is how it looks on this guy. This is what runs inside the LearnBot: we have a component for the camera, a component for the base, the differential base, to move it around, and then another component for the ultrasonic sensors. And then there's the client library on the outside, because we tried to make it easy to use, so in the end you only write Python code. You just run a script, import the library, and then everything is taken care of by RoboComp. The library takes care of the communications with the different components running inside the LearnBot. The components run inside the LearnBot, and you run the script and the library on your own computer; they communicate over Wi-Fi. When you turn it on, a Wi-Fi network appears, you just connect to it and the communication is up.

Okay, so I'm going to try to explain how to use this library a bit. We need Python on the computer. We need RoboComp installed, mainly because we need the interfaces with ICE, so we also need ICE installed. We run it on regular Linux computers; we use Debian-based systems, basically, but there are people running it on other distributions. And we need the client library, which we use to interact with the components running inside the robot. I'm going to explain this; it's not the whole code, just a bit of it, but you can get an idea of how we build the typical follow-the-line application. Basically, you import the library, which is not shown here, then you inherit from it, define this function, and then in a loop you start getting the frames. This is how you get a frame: you do this, and then you already have the image here. Then we use some OpenCV to convert the image. OpenCV is an image processing library, for those who don't know; it's widely used in robotics. We use it to convert the image to binary, so it's only black and white. Once we have the black and white image, we take the bottom part and divide it into three rectangles, and then we sum up the black pixels in each of the rectangles. Once we have that, we just follow whichever rectangle has more black in it.
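Since the code from the slides isn't reproduced here, this is a rough sketch of that image step with OpenCV, assuming a hypothetical call that returns the camera frame as a NumPy array; the threshold value and the size of the bottom strip are made-up numbers:

```python
import cv2
import numpy as np

def black_counts(frame):
    # Convert to grayscale and threshold: with THRESH_BINARY_INV, dark line
    # pixels come out as 255 and the bright background as 0.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)

    # Keep only the bottom strip of the image and split it into three
    # equal rectangles: left, center, right.
    height, _ = binary.shape
    strip = binary[int(0.7 * height):, :]
    left, center, right = np.array_split(strip, 3, axis=1)

    # Count the "line" pixels in each rectangle.
    return [int(np.count_nonzero(r)) for r in (left, center, right)]
```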
If it's in the first one, we go left; if it's in the center, we go straight; and if it's on the right, we just turn right. Here's the CV code, or part of it, that gets the rectangles and sums up the black in those bottom rectangles. We also read the sonars, just in case; that's how you get the sonar readings. It's also pretty straightforward: we check the distance on each of them so the robot won't smash into a wall, and stop it when the distance goes under a certain threshold. This is how you move the robot. This call is actually stopping it, because these are the speed and the rotation: how fast you go straight and how much you want to rotate. This one stops the robot, and this is how you move it. Once we have the numbers from the previous sums, if the area with more black is the first one, this is one turn, this is going straight, and this is the other turn. That's basically the code.

I'm going to try to show a bit of it. I hope it works; I was debating whether to skip it, but I'll try. I'm connecting to the Wi-Fi that the LearnBot brings up; hope it works, maybe it's still booting. We built this small mount that allows the camera to look down, because we have no motor on the camera, so it only looks straight ahead; we use this for the follow-the-line demo. It didn't connect to the LearnBot. Let me try again. All right, it connected. Let's go to the code. Right now we run the components inside manually, there's a script for that, but we plan to make it automatic in the future. It's still at a very early stage. There you go. And now... where is this? Yeah. So, it's doing the follow-the-line thing, as you can see. Here's the binary image, in black. There's a bit of shadow that shows up as a lot of black; it sometimes gets confused by that. But you can see the image up there, and the rectangles we build on the bottom. I don't know if you can see the... no, you cannot see them. But I can try, actually. Wait. I can show you the panel we have to control the devices: we built a component that allows you to control the different parts. So this is a basic panel where you can see the sonars; they are not working properly, because they have a bit of... well, there you go. This is the ultrasound sensor, this is the camera, you can move this, you can see, and then you can move it with this thing. Yeah, so that's it.

Sorry. I actually don't know; it's just a cheap camera, a webcam, yeah. The whole thing is quite cheap, actually. The most expensive part would be this one; I think it's like 50 USD, something like that. Maybe this one too, it's also among the most expensive parts. This is a few dollars, and this is also a few dollars. The printed case, I don't know how much the material you need to print it costs. But yeah, it's actually pretty cheap to build.

And I'll go back to the... oh, it's here. Finally, we still have a lot of stuff to do. We started doing a Scratch-to-Python conversion, so we can use Scratch on this thing for kids. We've already tested it with some young people, but they were almost university students, so we want to build some easier interfaces. We want to add more sensor support, maybe integrate the PrimeSense in this version, maybe with another top case that you can easily swap or something.
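Again as a sketch rather than the code from the slides: here is roughly how the decision logic and the sonar check could look, with hypothetical get_sonars(), get_frame(), and set_speed(advance, rotation) calls standing in for the real LearnBot library, and made-up speeds, signs, and thresholds:

```python
STOP_DISTANCE = 150  # made-up threshold, e.g. millimetres to the nearest obstacle

def step(robot):
    # Safety check: if any sonar sees an obstacle closer than the threshold,
    # stop the robot (zero advance speed, zero rotation).
    if min(robot.get_sonars()) < STOP_DISTANCE:
        robot.set_speed(0, 0)
        return

    left, center, right = black_counts(robot.get_frame())

    # Follow whichever bottom rectangle has more black pixels.
    if left > center and left > right:
        robot.set_speed(50, 0.4)    # slow advance while turning one way
    elif right > center and right > left:
        robot.set_speed(50, -0.4)   # slow advance while turning the other way
    else:
        robot.set_speed(150, 0.0)   # line is centered, go straight
```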
So that's where the new, more efficient external design comes in, to plug and play with different sensors, and also to build new applications, cool stuff. You can do a lot with just the camera, like face recognition; it's very easy, OpenCV makes it easy. We're in Google Summer of Code, actually, with the RoboComp project and with this other project, which is actually a child of it: it was one of the libraries we had in RoboComp, but it's now its own project, a computer vision library. And we have a lot of ideas, so if you want to participate in Google Summer of Code, please go to the websites or drop me an email and apply. It's cool: there's an idea for the LearnBot, so you can actually build your own robot, have fun with it, and even get a few thousand dollars, which is not bad. So yeah, please have a look at the websites, the ideas are there. If you have any other ideas, just feel free to drop me an email. Thank you very much. Any questions?

You have all the information here; you can download the code, and the examples are there. We do sell it, you can drop me an email if you want, but it was meant for you to build it yourself. The code is at a very early stage; that's why we're still in Google Summer of Code for development. We want to improve especially the stability of the code for future versions, so it can be sold properly.

Can you decide what other functions it has, can you build something like a vacuum or whatever? You can do that, but the more stuff you want to add, the more the price increases, right? Basically, just with these few sensors you can already do a lot of stuff that will be quite challenging for students, so I think it's pretty decent. That's actually why we removed the PrimeSense sensor, because it was too expensive; it's double the price of the whole robot. If you want to build an arm for it, then the complexity also gets high; once you start dealing with a robotic arm or something like that, it gets pretty expensive and complicated. You can do whatever you want once you get an image: because we use components, you can run the processing on your own computer, on a server, you can use GPU processing or whatever you want.

No, it doesn't. There's very little robotics running on Windows; pretty much everything is free software.

Yeah, yeah, I'll try it. I didn't make that decision, but I think it was mostly about the power. We were looking for something cheap, but we needed power, especially for the PrimeSense, because the library for these PrimeSense devices is OpenNI and it consumes a lot of CPU. When we run the PrimeSense, it takes a whole core for itself and sometimes it can freeze the robot. I think that was the main reason for the decision.

So what's the future, what's the aim of the project? Well, we plan to make it more stable, and once we add Scratch, we have some local schools that are interested. It's kind of an open version of, let's say, the Lego Mindstorms, so you don't have to spend so much money on Lego and all the stuff that you're paying for. And if you don't want to build it, you can buy it from us, but you can also print it at home or build it yourself. So it's actually quite good for, let's say, developing countries that want to do it; it's kept at a very low price, right?

How is the community so far?
Well, the RoboComp community is getting big, because we're in our third or fourth year of Google Summer of Code, and that's helping us a lot. We currently have quite a few universities in Spain that contribute regularly, and there are a few other students around Europe also contributing to the project. So it's not very big, but it's getting there; it's enough. The big challenge with robotics is that in the end you need the robotics hardware, so this is a means to have robotics hardware that is not extremely expensive. Because once you go to the big robots, like the one in the picture, it was, let's say, I don't know, like $3,000 or something like that. $300,000. So, yeah. Actually, the gentleman was showing some of the devices that we have now for Altrae. Yeah.

Can you actually use a Raspberry Pi as well? You can probably use one, yeah. You can probably use a Raspberry Pi, or I think there's a new version of this one, not the C1. You can use some of the others: as long as you have the GPIOs and the USB ports, and you find a driver that is compatible, you're good to go. And you can power it with one of these. So... Sure, sure. Yeah. All right. Thank you.