So, this talk will be on the experimental robot project. We had some problems, but here are the slides. I'm going to start with a little introduction and tell you why this project might eventually become interesting and why we are doing it. Then the talk is separated into two main parts. In the first part I am going to talk about walking in simulation, that is, we take a physics engine, build a model of the robot, and try to get it to walk in a simplified physics with a few simplifying assumptions. In the second part of the talk, Felix will tell you how we might eventually move from simulation to reality. So this project is about building a life-size humanoid robot, and for the next few years at least we are going to focus on the legs. Arms might come eventually, but only much later. As was mentioned already, there are quite a few projects working on this, but none of them are really fully open. So we are trying to make a fully free project, both open source software and open hardware. We also try to completely document the development process. The goal is to have state-of-the-art software, as far as that's possible, because software of course is easy to copy; for the hardware we focus on something that is manufacturable with moderate resources. Well, why would one want a humanoid robot at all? Wheels are of course superior in dedicated environments: on the street, you are not going to beat a wheeled robot no matter what you do. On the other hand, human environments such as this room are really limiting if you are on wheels. For example, there are stairs over there; you couldn't get onto the stage without climbing stairs. So there are many reasons why wheels might not be so ideal. If service robots eventually do become commonplace, these environments may change, but at the very least, if everything goes wrong and there is a disaster, you really need to have some alternative to wheels.
But of course, for us the real reason is that we all saw these videos from Boston Dynamics and thought: that's really cool; it's unfortunate that the military is doing it; let's try to do something similar ourselves. It seems that progress on humanoid robots, after several decades, is finally heating up. On the other hand, the big company players like Boston Dynamics and Schaft, which are now both owned by Google, are really secretive. They publish YouTube videos that are really exciting but don't actually tell you anything, and there seem to be no scientific papers or anything. There are also university projects that do publish papers, but they usually don't publish source code, they don't publish CAD drawings, and their hardware is not really intended for you to copy. They are scientists, they write papers, and that's about it. The other problem, of course, is that existing robots cost on the order of several hundred thousand dollars, which is completely unaffordable for a hobbyist. Our dream at the moment is to get this down to maybe a few thousand euros, which is still a lot, but possibly affordable for a small group of people, a hackerspace, or so. I would also like to mention that what we do in this first part of the talk, physics-based character animation, is actually a big topic in computer animation, but that field doesn't seem to focus on putting its results on real robots; they use physics as a tool to get realistic animation. So what are we going to do in this first part? The idea is to take a simplified physics model, simulate the robot, and then develop a controller that keeps it from falling down. The hope is that, besides actually producing a controller, this work will inform design choices for the later project.
So the simulation will ideally tell us what kind of motors we need, how fast they have to be, what kind of gears we need, how well we can cope with sensor uncertainty, and so on. We are using a dedicated dynamics toolkit that we wrote ourselves from published algorithms, but we also use the Open Dynamics Engine, which was already briefly mentioned in the last talk, just to make sure that we don't exploit any implementation bugs in our own engine to our advantage. So we have an external engine that kind of lets us know that this works without, you know, any assumptions we might unconsciously make. So how do we simulate a robot at this point? This is really the basics of all the game physics engines you may know, such as ODE or Bullet and so on. The central concept here is that of a rigid body, which is a physical body that is assumed to be completely undeformable. It can't bend, it can't flex. As a result of this, the mass distribution is completely condensed into 10 parameters, and the body has six degrees of freedom. Except for those 10 parameters, the simulation engine doesn't care how the mass is distributed; the shape only matters for collision detection, not for the actual dynamics. The next step up in realism would be the so-called soft body, shown here in a picture from Wikipedia, probably the most commonplace example being a flexing beam. Then you suddenly need the complete details of the mass distribution and so on, which essentially means you have to make a CAD drawing: you have to completely design your robot, decide what materials to use, and so on. And even then, this thing has infinitely many degrees of freedom, which still become a lot after discretization, so it is numerically much more expensive to simulate; the method would be the so-called finite element method. We can't, and we don't want to, do this at the moment, so our robot is simulated as a collection of rigid bodies.
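To make those "10 parameters" concrete, here is a minimal sketch (not our actual library; the torso numbers are invented for illustration): for rigid-body dynamics, a body is fully described by its mass (1 parameter), its centre of mass (3), and the six independent entries of its symmetric 3×3 inertia tensor.

```python
import numpy as np

class RigidBody:
    """A rigid body as a dynamics engine sees it: the mass distribution
    is condensed into exactly 10 numbers.  The detailed shape only
    matters for collision detection, not for the dynamics."""

    def __init__(self, mass, com, inertia):
        self.mass = float(mass)                     # 1 parameter
        self.com = np.asarray(com, float)           # 3 parameters (body frame)
        self.inertia = np.asarray(inertia, float)   # symmetric 3x3 tensor
        assert np.allclose(self.inertia, self.inertia.T)

    @property
    def parameter_count(self):
        # mass (1) + centre of mass (3) + upper triangle of inertia (6)
        return 1 + 3 + 6

# a made-up torso, just to show the shape of the data
torso = RigidBody(20.0, [0.0, 0.0, 0.1], np.diag([1.2, 1.0, 0.4]))
```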
This is the model we use at the moment. It consists of six rigid bodies: the torso, which is just one piece, and two legs. We have six degrees of freedom per leg, three in the hip, one in the knee, and two in the ankle, for six degrees of freedom total per leg. The advantage of this is that the last rigid body, that is the foot, also has six degrees of freedom. So in this configuration we can essentially control the position and the orientation of both feet, but not more; at the moment this is sufficient for what we want to do. Now let's come to controller design. Why is this even a hard problem? If you've seen 3D animated films, you might think that this is, at least reasonably, easy. Here's an industrial robot and a biped, and there's really a lot of work on industrial robots; the last talk was on this. What's the main difference? Well, the main difference, as you see here, is that the industrial robot is bolted to the ground with rather large bolts, and this is obviously impossible for a biped, because we can't screw its foot to the ground every time it wants to take a step. And this is what makes the problem hard. The thing is that for the industrial robot, you control all degrees of freedom. This is called fully actuated, and it means that any trajectory can be followed. This is, of course, not completely true for industrial robots; there you start to worry about things like collision avoidance and so on. But for a walking robot on even ground, collision avoidance is usually not a big problem, so if the robot were fully actuated, walking would seem to be trivial. It's actually made complicated by the fact that the stance foot is not fixed to the ground. Therefore, the intrinsic dynamics begin to matter: it actually matters how fast you execute a trajectory, and you can no longer follow just any trajectory.
And I'm going to show you a little demo of this, which is, I think, here. So this is what we might like to get. Here's a model of a robot with no physics engine, and we might want to get this: it just walks. I mean, this doesn't look particularly good, but it seems OK. Now if we take this and connect an actual physics engine, then we see something disappointing, namely that it fails. It just falls over. The thing is, the trajectory tracking actually works. You can see that the joint angles of the robot more or less track the trajectories they had in the last simulation. But the big problem is that the foot, which we cannot control, just loses contact with the ground. And this is exactly the kind of problem we need to worry about if we want to design a controller for a walking robot. See, it just falls over. So let's do a little bit of physics here and talk about contact forces. Contact is in principle a complicated microscopic phenomenon, but there's one thing about it that's actually quite simple, namely that contacts are usually non-sticky. I'll illustrate this with this cartoon of picking up a box. If I place a box or something — I don't have a box, but this pullover will do — on the table, it is pulled towards the center of the Earth by gravity, the reaction force from the table just compensates this force, and the resulting force is zero, so by Newton it doesn't move. If I try to lift it up and don't quite succeed, then, well, the lifting force compensates a bit of gravity and the rest is compensated by the reaction force, or contact force, and the resulting force is still zero. So this is a so-called constraint force: it just takes the value it needs to stop the box from accelerating. But finally, if I pull upward with a large lifting force, then the contact force would have to be negative in this picture.
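The box example can be written down in a few lines. This is just the cartoon from the talk as code, with made-up numbers: the contact force is a constraint force clamped at zero, which is exactly what "non-sticky" means.

```python
def contact_response(mass, gravity, lift_force):
    """Unilateral contact: the normal force takes whatever value is
    needed to keep the box from accelerating into the table, but it
    can never become negative (the table cannot pull the box down).
    Returns (normal_force, upward_acceleration)."""
    normal = max(0.0, mass * gravity - lift_force)   # clamped constraint force
    accel = (lift_force + normal - mass * gravity) / mass
    return normal, accel

# resting on the table: contact carries the full weight, nothing moves
n, a = contact_response(1.0, 9.81, 0.0)
# pulling harder than gravity: contact force vanishes, the box lifts off
n2, a2 = contact_response(1.0, 9.81, 20.0)
```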
That would happen if the box were glued to the table, but a normal contact can't pull, so the box just accelerates upwards and I pick it up. This is really the central concept, and it is what happened to the poor robot in the last simulation. Now let's consider multiple contact points. Of course, this is again a cartoon; in reality you could have very, very many contact points, but let's suppose we have four, and this is supposed to be a cartoon of the foot. At each contact point there is a contact force acting, and we can now define the so-called center of pressure as the weighted average of all the contact points, where we weight each one by its contribution to the normal component of the total contact force. So if all the contact force comes from x2, then the center of pressure is going to move to x2, and so on. We can write this with these weighting factors alpha, and then the fact that the normal forces are non-negative implies that the alphas have to lie between zero and one. Mathematically, this is called a convex sum, but even if you don't know what that is, it's really quite obvious that if we take a weighted average of these four points with weighting factors between zero and one, then the average can't move outside of the box. That's really all there is to it, at least for the case of a box. Now this seems to depend on the microscopic details, and that would make it completely useless, because we can't hope to model those; we don't actually know the microscopic structure of the ground. But there's a magic trick, in a way: if you sum all the contact forces into a total contact force and a total contact torque, there's the equation shown on this slide that lets us calculate the center of pressure just from the total contact force and the total contact torque. This is really nice, because we can now look at walking, where we want the stance foot to remain stationary.
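As a sketch (hypothetical numbers, not from the simulation), the convex-sum definition of the centre of pressure looks like this; the point is that with non-negative normal forces the CoP cannot leave the support area.

```python
import numpy as np

def center_of_pressure(points, normal_forces):
    """Weighted average of the contact points, each weighted by its
    share of the total normal force.  The weights (the alphas) lie in
    [0, 1] and sum to 1 -- a convex sum -- so the result stays inside
    the convex hull of the contact points."""
    f = np.asarray(normal_forces, float)
    assert (f >= 0).all(), "non-sticky contact: normal forces >= 0"
    alphas = f / f.sum()
    return alphas @ np.asarray(points, float)

# four corners of a hypothetical 20 cm x 10 cm foot
foot = [(0.0, 0.0), (0.2, 0.0), (0.2, 0.1), (0.0, 0.1)]
# all load on the two front corners -> the CoP lands on the front edge
cop = center_of_pressure(foot, [50.0, 50.0, 0.0, 0.0])
```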
So in this case, the contact forces have to compensate the reaction forces from the robot body. We can then just calculate those reaction forces and obtain necessary conditions for the stance foot to have a chance of remaining stationary. These conditions are: first of all, the center of pressure has to remain inside the foot; and second of all, well, the robot can't just pull up its stance foot like so, so the total normal component has to be positive, with my choice of sign. It turns out that together with no-slip constraints these conditions are even sufficient; if the foot can't actually slip on the ground like this, then they are all you need, and usually that is the case in practice. If you fail, you fail in the way you saw in the last demo. Now we can actually say quantitatively why the last demo fails, because this is the center of pressure that would be required, and you see that this kind of walking would require feet that are huge, almost 70 centimeters. This is not only unrealistic, it simply can't work: the feet would collide. What really happens, of course, is that the center of pressure goes to the dotted line and then stays there while the foot starts to rotate. So this can't work, and our new control strategy should focus on contact forces. Here's an idea. Imagine the robot were floating in space. You know from school that total linear and angular momentum are conserved, and this implies that the center-of-gravity trajectory can't be influenced. If you're an astronaut floating away from your space station, there's nothing you can do about it, because the center of gravity — or center of mass, in this case — is moving away, and no matter what you do with your arms, you can't change that. On the ground, fortunately, you can do something about it, but only through contact forces.
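The "magic trick" equation for flat ground can be sketched as follows. One common sign convention is assumed here (the slide's version may differ by signs): for contact points in the z = 0 plane, the torque about the origin satisfies τ = p × f, which can be solved for the centre of pressure p.

```python
def cop_from_wrench(force, torque):
    """Centre of pressure on flat ground (z = 0), computed only from
    the TOTAL contact force (fx, fy, fz) and the TOTAL contact torque
    (tx, ty, tz) about the origin -- no microscopic details needed.
    From tau = p x f with p = (px, py, 0) and contacts at z = 0:
    tx = py * fz  and  ty = -px * fz."""
    fx, fy, fz = force
    tx, ty, tz = torque
    assert fz > 0, "the foot must actually press on the ground"
    return (-ty / fz, tx / fz)

# sanity check: a single 100 N normal force acting at (0.05, 0.02)
# produces the torque (0.02*100, -0.05*100, 0) about the origin
cop = cop_from_wrench((0.0, 0.0, 100.0), (2.0, -5.0, 0.0))
```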
So there's obviously — or not obviously, but there is — a relationship between the change of linear and angular momentum on the one hand and the contact forces transmitted through the foot or feet on the other. So we make a simplifying restriction: we assume that the total angular momentum of the robot is just zero. This simplifies the equations, and then we're left with just the center-of-mass trajectory, and we can completely determine the contact forces from the center-of-mass trajectory. In particular, joint angle trajectories only matter insofar as they influence the center-of-gravity trajectory. We still have six contact force components, that is, three forces and three torques, and we have six equations, namely that the angular momentum must not change, the center of pressure, and the height of the center of mass. Then we can solve a so-called boundary value problem to find the center-of-mass trajectory. This idea is from a PhD thesis from TU Munich. It is illustrated here. We assume that the center of mass stays at a constant height. Now we specify a center of pressure, the blue line; we specify it so that it stays inside a legal region. Solving the boundary value problem then gives us the green curve, and if we track this center of mass, or center of gravity, it will generate these contact forces, which are, as we chose them, legal. There's one slight problem, though: we have three boundary conditions but only a second-order differential equation, because we are already walking, so we have the center-of-mass position and velocity at the beginning, and we would also like to specify the center-of-mass position at the end so that it doesn't go completely off. We fix this — we get the third degree of freedom — by allowing our chosen center-of-pressure trajectory to be slightly modified. The problem then is that the center-of-pressure constraint may become violated.
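This boundary value problem can be sketched numerically. Under the stated assumptions (zero angular momentum, constant CoM height h), the horizontal CoM obeys x'' = (g/h)(x − p), the linear inverted pendulum equation. The sketch below is our own finite-difference toy, not the thesis code, and h = 0.9 m is an assumption; it also shows the trick of adding one extra unknown — a constant shift of the chosen CoP trajectory — to satisfy the third boundary condition.

```python
import numpy as np

G, H = 9.81, 0.9    # gravity; assumed constant CoM height
W2 = G / H          # omega^2 of the linear inverted pendulum

def solve_com_bvp(cop, dt, x0, v0, xT):
    """Solve x'' = W2 * (x - (cop + c)) with x(0) = x0, x'(0) = v0 and
    x(T) = xT.  Three boundary values on a second-order ODE need one
    extra unknown: the constant CoP shift c (the 'slightly modified
    CoP' from the talk).  Finite differences; unknowns x[0..N-1], c."""
    N = len(cop)
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    for i in range(1, N - 1):                # interior: central differences
        A[i, i - 1] = A[i, i + 1] = 1.0 / dt**2
        A[i, i] = -2.0 / dt**2 - W2
        A[i, N] = W2                         # coefficient of the CoP shift c
        b[i] = -W2 * cop[i]
    A[0, 0] = 1.0;         b[0] = x0        # x(0)
    A[N - 1, N - 1] = 1.0; b[N - 1] = xT    # x(T)
    A[N, 0], A[N, 1] = -1.0 / dt, 1.0 / dt  # forward-difference x'(0)
    b[N] = v0
    sol = np.linalg.solve(A, b)
    return sol[:N], sol[N]                  # CoM trajectory, CoP shift

# standing balanced over a constant CoP needs no CoP modification
x, shift = solve_com_bvp(np.full(50, 0.1), dt=0.02, x0=0.1, v0=0.0, xT=0.1)
```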
What this is really saying is that sometimes you need a side step. If you're walking and someone pushes you, you can't just keep placing your feet where you want them; you have to take a side step. And that is really what the mathematics is trying to tell you here. However, usually — if you're not pushed, for example — this approach works quite well. In the last step, we take our full robot with its 12 joint-space degrees of freedom and use so-called inverse dynamics to control the contact forces, and thereby track the center of gravity. Now two cases, just for completeness — the next slide will be a demo, don't worry. If there's only one leg on the ground, then we have six contact force components, again three forces and three torques, and we also have the swing leg, which is in the air and has six generalized accelerations, again three translational and three rotational. If you have two legs on the ground, then you have two contact forces, one for each leg. In each case, we have 12 equations for 12 joint-space degrees of freedom. Now, this approach is actually implemented here, and I just need to fiddle with my computer slightly. Sorry, it's the first demo. This is now actually a physics simulation with contact enabled, so as far as the physics engine is concerned, this is physically plausible. There are still some assumptions, but at least the contact is actually respected. We need torques on the order of 100 newton meters, which is really a lot, and this is going to be the main problem in the realization part of the talk. We also need quite a lot of power, on the order of 200 watts per joint. Well, yeah, this will be a topic in the second part. So let's just summarize what we've seen. We've tried to design a control strategy based on contact force management. The performance is quite okay — I mean, there are better results in simulation elsewhere, but it's a start.
One thing is that the foot positions for this controller are actually fixed in advance. This could be used by a higher-level controller to make it walk upstairs, for example, but it also limits the options we have for push recovery: the idea of taking a smart sidestep is just not implemented in this kind of controller. Also, the simplifying restriction that the total angular momentum has to be zero causes the weird torso motion that you've seen in the video. One could try to fix this by adding arms, which would give the stereotypical arm-swinging walking style that keeps the angular momentum at zero. But at the moment it doesn't do so much harm, so we can just leave it like this. Regarding the last inverse dynamics part: we could of course just take the physics engine, treat it as a black box, and use black-box function inversion to get our inverse dynamics, but that is highly inefficient and numerically a nightmare. So what we do is take this very nice book, look up some dynamics algorithms that are actually intended for use in robotics, and implement them in a small library that will be released really soon now, together with the controller, so that everyone can play with it. One nice feature of the legs of our robot is that there is an analytical inverse kinematics, so you can calculate, without using iterative methods, the joint angles you need to position the foot in a certain way. This is just a nice feature that somewhat simplifies the controller. In the longer term, these kinds of handcrafted controllers are okay for walking, but if you push really hard, or if the terrain gets really rough, or if you want to do acrobatics or whatever, then these simplifying assumptions eventually break down, and there is actually a large body of results where people just use large-scale simulation. Basically, you tell the computer: this is your starting state, this is your goal state, get me there.
This works quite well offline; it doesn't work so well if you have real-time constraints, because, I mean, if you're pushed you can't just stop and think. Still, there are many interesting results, and eventually we are going to look into that as well, but since the goal for now is just to walk on even ground, it's probably easier to stay with these handcrafted approaches, not least because they're easier to debug: you can look at them and check whether your assumptions are still valid and everything works. So that's it from my side. I've summarized this on a crowded slide again, and these are now the requirements that Felix will have to deal with. Yeah. Yeah, as Norbert showed you, the simulation gives us requirements for torque — our peak torque is on the order of 100 newton meters, which is quite large — a velocity requirement of about 20 radians per second, and a power requirement of 250 watts. If you do the math, that won't match; that is because the peak velocity and the peak torque won't show up at the same time. On the motor side, the mainstream option is brushless motors. Because we cannot afford industrial-grade motors, we went to our favorite RC sites, cheap shiny stuff, and found this two-kilowatt brushless motor for $30. It is quite heavy but manageable, five hundred grams. A bit overpowered, but slow: 270 kV — kV stands for, basically, how fast the motor would turn in idle per volt. If you do the math again, that's 850 radians per second at 30 volts, and approximately three newton meters at 90 amps. So that gives us a gear ratio of 1:50; we get to the gears later on. A brushless motor from the RC environment usually powers a plane, so you can control direction and maybe speed with the usual affordable controllers, but not position — and we want position control here. So we implemented our own brushless controller, which is encoder-based.
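These numbers can be checked with a few lines of arithmetic. The kV-to-torque-constant relation kt ≈ 60/(2π·kV) (SI units, ideal motor, losses ignored) is a standard back-of-the-envelope rule, not a datasheet value:

```python
import math

KV = 270        # rpm per volt, from the cheap RC motor
V_BUS = 30.0    # supply voltage, volts
I_MAX = 90.0    # phase current, amps
GEAR = 50       # chosen reduction, 1:50

w_noload = KV * V_BUS * 2 * math.pi / 60   # no-load speed in rad/s (~848)
kt = 60 / (2 * math.pi * KV)               # torque constant, Nm/A (~0.035)
t_motor = kt * I_MAX                       # motor torque (~3 Nm)

w_joint = w_noload / GEAR                  # ~17 rad/s, near the 20 rad/s goal
t_joint = t_motor * GEAR                   # well above the 100 Nm goal
```

Peak speed and peak torque never coincide, which is why the 250 W power budget is much smaller than their product.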
So there is a sensor on the rotation axis, and the controller uses a thing called space vector modulation, which is, like the slide says, basically three-phase AC phase-locked to the motor rotation. Think of it like microstepping for stepper motors. And as we are switching high currents there, 90 amps, we are using RS-485 for communication, because, yeah, it's differential. The power stage is actually from a cheap Chinese brushless controller. They were kind enough to put headers on it — two-millimeter pitch, but headers — so you can just unplug it from the cheap stock microcontroller, use an STM32, and build a nice new base board for it. And if it ever breaks or burns, just pay 20 bucks, get a new one, and hope they don't change the design. As we're trying to control the torque, and the torque is proportional to the current, we need to measure the current of all phases — or at least two phases, because you can calculate the current of the third phase from them. So we use 150-amp Hall effect sensors, a nice model from Allegro MicroSystems, the ACS759. If you ever need current sensors, use them. This graph shows how fast the position tracking actually works: it shows position in sensor units on the Y axis and time on the X axis, and the tracking is quite nice. The blue line is the real position, the green line is the desired position, and there are no major overshoots even though the desired position is a square wave. Speaking of rotation sensors: Austria Microsystems builds nice Hall effect sensors. They come in two flavors, absolute and relative. A 12-bit absolute resolution per revolution is quite nice, but not enough. So they build magnet rings with 128 poles, and you interpolate between two poles and lose the — yeah, it's not absolute anymore, but pretty, pretty accurate. And if you combine both, so on one axis one absolute sensor to basically index the relative sensor, you get a 17-bit absolute sensor, which is — at least in theory, I mean, this is calculated, not measured — really, really accurate.
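A sketch of the commutation idea (our illustration, not their firmware): space vector modulation is equivalent to three sinusoidal phase voltages locked to the rotor angle plus a "min-max" zero-sequence shift. The shift cancels in the line-to-line voltages, which are all the motor sees, but gains roughly 15 % of usable amplitude.

```python
import math

def svm_duties(rotor_angle, amplitude):
    """Per-phase PWM duty cycles in [0, 1] for a three-phase bridge.
    Three sinusoids 120 degrees apart, phase-locked to the rotor angle
    (the 'microstepping for brushless' idea), shifted by the common
    mode -(max+min)/2 -- the min-max-injection equivalent of space
    vector modulation."""
    assert 0.0 <= amplitude <= 1.0
    phases = [amplitude * math.cos(rotor_angle - k * 2 * math.pi / 3)
              for k in range(3)]
    shift = -(max(phases) + min(phases)) / 2   # zero-sequence injection
    return [0.5 + 0.5 * (p + shift) for p in phases]

duties = svm_duties(0.0, 1.0)   # full amplitude, rotor at angle 0
```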
And each sensor is about $10 and the magnets are $5 each. Another nice thing about those sensors is that they have direct quadrature output, and all modern microcontrollers have hardware quadrature inputs, or decoders. So you can just read a register and see the value of your sensor, without any decoding, protocols, whatever. A problem, though, is that we don't know how much of a problem non-linearity will be with the high-current switching, because of the magnetic fields and the EMI it will produce. But that shouldn't matter too much, because the end effector — the leg position — won't depend on the error of the rotation sensor so much as on mechanical errors like manufacturing errors, bending and flexing, because it's not a rigid body; it's a real thing. Another thing is that the sampling time of the last measurement isn't clear if you do a digital readout, which you can do. With quadrature output that is okay, because it's a fixed delay — the quadrature output has a fixed delay from the last sample time — but the quadrature output stops working at higher velocities. So velocity calculation with those sensors is a bit of a problem; if anyone knows a solution for that, tell us after the talk. Now back to gears. 100 newton meters is really a lot: your usual cordless screwdriver claims it can do like 50 newton meters, and you will probably never reach that — you need like 30 newton meters to screw into really hard wood. Our motor torque is around two newton meters — we have only calculated the motor torque so far — and we need a reduction of maybe 1:50. That leaves us with not too many gearing options. One option is a fancy thing called the harmonic drive, also called strain wave gearing. It's quite a nice thing, but it's darn expensive: one probably starts at 1,000, and they stay really expensive. Planetary gears, on the other hand, are cheaply available.
For example, in your cordless screwdriver — but they usually have backlash. So if you turn the motor in the other direction, there will be play in the gears, and that can break things, because if the gear first spins freely at these torques, usually teeth just break off the gear. That isn't a problem at all with harmonic drives; harmonic drives are free of play. But we'll try to get away with planetary gears. And then there is one more actuator system: the linear actuator. It's basically like hydraulics, but without the hydraulics — you use a screw. The most common things are ball screws, and then there's a fancy thing called the planetary roller screw, which has a planetary gear built into the screw. We won't go into details here — that would be a talk of its own — ask us if you're really interested. To test all this, we've built a motor test bed, which is basically a giant pendulum: a one-meter-long pole onto which we bolted 10 kilograms of barbell weights. If you do the math again, the static torque goes up to 100 newton meters, enough to test everything we want, and the dynamic torque can obviously be much higher. It has two mounts, one for axial motors, like the harmonic drives and the planetary gears, and one for linear actuators, like ball screw systems. And yeah, as soon as we have the first tests, we will probably post video material of breaking gears, motors and controllers — everything will burn. So obviously we had a look at what exists in the academic field right now, and two sample projects are Tulip and Lola.
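The pendulum arithmetic is quick to check (gravity and a sine are the only physics involved; the numbers are the ones from the talk):

```python
import math

MASS, ARM, G = 10.0, 1.0, 9.81   # 10 kg of barbell weights on a 1 m pole

def static_torque(theta):
    """Holding torque the actuator under test must deliver when the
    pendulum is deflected theta radians from hanging straight down."""
    return MASS * G * ARM * math.sin(theta)

t_max = static_torque(math.pi / 2)   # horizontal pole: worst case, ~98 Nm
```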
Tulip is from the Netherlands, a joint project of the universities in Eindhoven, Delft and Twente, and it's rather small: 120 centimeters, 15 kilograms. It uses an interesting concept named series elastic actuation in the knee, which basically means they couple the motor with a spring to conserve energy — basically loading the spring during foot contact — but they lose bandwidth with this, so the controller is only five to 10 hertz fast. They get away with brushed motors, because the robot is light enough, and planetary gears, and they have a predecessor named Flame, of which I will show a video shortly. So back to the kinematics. Norbert showed the kinematics of our robot project earlier; it's comparable to this — basically the same three degrees of freedom in the hip, so this, this, and — yeah, I can't demonstrate that too well, but you can also turn your hip — one in the knee, and two in the ankle, pitch and roll, where the ankle roll is passive: they just attached springs to it, and no motor. So now Norbert will play the video of Flame. As you can see, Flame moves in a slight curve. That's because, as the predecessor of Tulip, it misses one degree of freedom in the hip. It has only two degrees of freedom in the hip, which means that it can't control its rotation around the Z axis; as a result, it moves in a curve because it can't control that. Next up is Lola. Lola is a full-sized humanoid, 180 centimeters, 55 kilograms, built at TU Munich. It has really many degrees of freedom, 25 in total, but the arms and the head are actuated as well, and it has seven degrees of freedom per leg. We borrowed the nice linear actuation concept from here: as you might see in the picture, number five is the ball screw, and they use it to bend the knees and the ankles. Think of it, at least for the knee, like on your standard excavator.
Yeah, like you tilt the excavator arm by changing the piston length — or here, the length of the ball screw. They use harmonic drives all over; everything that is not done by linear actuators uses harmonic drives and industrial-grade brushless motors. The actuation concept with all degrees of freedom is shown here. Basically it's comparable, at least for the legs, to Tulip and to our approach, with the exception that they have an active toe joint. They claim that this is useful to walk faster — not an actual target for us at the moment — and to climb stairs and steps, which would be nice but should be doable without it as well. And, as a special feature, the hip Z axis is tilted against the XY plane; you can see that in the kinematics. We have a video of this as well. So this is Lola, and as you can see, as opposed to Flame, Lola never has straight legs but always bent knees. That is because of Buschmann, the guy who wrote the controller for Lola, whose work we based our controller work on. And it looks — yeah, they have arms, to conserve the angular momentum like Norbert told you earlier, so it looks way better, but yeah. Yeah. Sorry. We're only starting at this point, so we'll do that later on. So the current status is: the simulation is well on its way, we have studied existing designs, and we have built, in our workshop, a model of the Z axis; we set the workshop up last year with a milling machine, a small lathe, and a fully-fledged electronics workshop. The biggest challenge right now is to get an actuation concept that you can pay for. So the next steps, next year, will be — like I said before — to burn motors, to burn drivers, and to test gears. And after the gear question is solved, we will start constructing our first pair of legs. Additionally, in the last demo Norbert showed you, the terrain was known to the robot, and therefore we began building a camera system. This is the camera, with a C-mount, for whoever knows what a C-mount is.
And it's a scientific camera based on the Apertus project, which built a fully open source film camera with 4K resolution, which is quite amazing. It's based on a CMOS sensor with 2K resolution, so full HD, up to 340 FPS, 12-bit dynamic color depth, and it has a global shutter. So, not like your favorite GoPro video, where you see the prop of the plane flexing all over the screen — it takes the image of all pixels at the same time. Additionally, all of the design files are available on GitHub; if you know KiCad, you can just open them. We have it with us and will hack on the firmware for the rest of the event. And that's it. Okay, thank you very much. As always, we have microphones in the room if you have questions, so please line up in front of the four microphones. We also have a signal angel in the room who can relay your questions from the internet — ask them on IRC or on Twitter and they will be relayed here into the room. So, are there any questions? None? Any questions? Well, can you please go to the microphone? Yeah, please line up behind the microphones. Right, thanks. Okay, so yeah, please. Yeah, a question about the gears, actually. How about the — I'm not sure of the English term — is it snail gear, Schneckengetriebe? Say again? A worm gear, I think. A worm gear, yes, thanks. The big problem with them is that they have very large reduction ratios, but they're inefficient, because there is friction: the worm is actually sliding against the gear. And as you saw — I mean, Wikipedia claims they are, I don't know, 50% efficient or so — if you have 250 watts, this thing is going to get really warm. This is the reason; they would otherwise be ideal. They are used, for example, in windshield wipers for cars, but I think our power requirements are too large. And additionally, they are self-locking.
So if anything collides with your leg and your leg isn't in the stance you thought it was, it will break, because the gear won't move. Thanks.

Okay, from Mr. Björn. Yeah, I was just wondering: all of the robots are essentially humanoid robots. I was wondering if other bipedal creatures were considered, like maybe an ostrich, which, evolutionarily speaking, is actually better at bipedal walking than the human being is. Just curiosity. And can you give me tips for avoiding the robot apocalypse? Well, as the famous XKCD puts it, I don't think this is a problem at the moment. So just climb up a tree and pour down some water. But regarding the ostrich: I think from the point of view of the rather simplified physics, it's much the same, because it also has two legs and can similarly move its feet. So I guess the physics at this very abstract level is probably quite similar, and writing a controller would also probably be similar. I'm not sure it's much different from a human from a purely mechanical point of view, other than that the human has arms where the ostrich has wings. And as you saw, we just think of the torso as one big rigid body and don't model the details. Does this answer the question? Okay.

Another question from... Yeah, it's a question from the IRC. Okay. Are there any plans for straight-legged robots, or are they going to stay with bent legs? Well, this depends on the controller. The problem is that if you have straight legs, then you lose one degree of freedom, because essentially you can't independently control the foot position anymore; you just have one degree of freedom left. Which is clear, of course: if your leg is straight, then the distance between the hip and the ankle has to remain constant. And this kind of complicates controller design. So at some point we might get around this, but right now it's probably simpler to stay with bent legs. And nothing of this is cast in stone, right?
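The degree-of-freedom loss with straight legs can be seen in a planar two-link hip-knee model (a hypothetical sketch, not the project's actual kinematics): the Jacobian of the foot position with respect to the two joint angles loses rank exactly when the knee is fully extended.

```python
import math

def foot_jacobian_det(knee_angle: float,
                      thigh: float = 0.45,
                      shank: float = 0.45) -> float:
    """Determinant of the 2x2 foot-position Jacobian of a planar leg.

    For a two-link planar chain the determinant reduces to
    thigh * shank * sin(knee_angle). It vanishes when the leg is
    straight (knee_angle = 0), so near that pose the foot position
    can no longer be controlled independently in both directions --
    the singularity mentioned in the answer.
    """
    return thigh * shank * math.sin(knee_angle)

print(foot_jacobian_det(0.5))  # well-conditioned with a bent knee
print(foot_jacobian_det(0.0))  # 0.0: singular with a straight leg
```

The link lengths here are made-up placeholders; only the structure of the determinant matters.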
So this controller is not really optimized for building an actual robot yet. At the moment we're experimenting. So this might be considered in the future, but at the moment this so-called singularity puts us off, even though it looks non-human.

Okay, and a question right here from the front. Hi, first of all, thank you for the talk. Then I have a question. You said that you implemented the algorithms from Roy Featherstone's rigid body dynamics algorithms. And I think there is a library that exists, RBDL, the Rigid Body Dynamics Library. Do you know of it, or did you have a look at it? I know of it, I had a quick look at it. It seemed to me that it was quite slow. The reason why we implemented this ourselves is that we only consider kinematic chains at the moment, not trees. And I think this makes the low-level algorithm faster, because you can just put your bodies in a linear list. RBDL, I think, really allocates a tree structure, because it can deal with the more generic case of kinematic trees. I did not actually profile it; I built it and ran it on the example, and it turned out to be relatively slow. I don't want to insult anyone, maybe that's my fault, but I suppose this is because in the innermost loops it actually does a lot of unordered memory access, whereas if you do kinematic chains, you can just go through the rigid bodies one after the other. So ours is far less generic. It's really just for this robot and not much else. I don't know. Thanks.

Okay, do we have any other questions from the internet? Yes, one more from the IRC. Is there or will there be ROS integration? Eventually, yes, I guess. One thing about this: all these motion planning algorithms that we heard about in the last talk are usually focused on static trajectories. So for example: imagine I am an industrial robot, my arm is here, and I want to get it over there. This is a typical collision avoidance problem, because I can't just move straight; I have to move around like this.
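As an aside, the chain-versus-tree argument from the RBDL answer can be illustrated with a minimal sketch (hypothetical types, nothing from the project's actual code): with an unbranched chain, the bodies live in a flat list and a forward pass visits them in storage order, so memory is touched sequentially.

```python
import math
from dataclasses import dataclass

@dataclass
class Body:
    """One link of an unbranched planar kinematic chain (illustrative only)."""
    joint_angle: float  # angle relative to the previous link, in radians
    length: float       # link length

def forward_positions(chain):
    """Planar forward kinematics over a flat list of bodies.

    Because a chain has no branches, a single in-order pass suffices:
    each body only needs the running state from its predecessor.
    A tree-capable library must follow parent indices instead, which
    scatters its memory accesses across the structure.
    """
    x = y = theta = 0.0
    tips = []
    for body in chain:  # bodies visited in storage order
        theta += body.joint_angle
        x += body.length * math.cos(theta)
        y += body.length * math.sin(theta)
        tips.append((x, y))
    return tips

# Two straight unit links: link tips land at x = 1 and x = 2.
print(forward_positions([Body(0.0, 1.0), Body(0.0, 1.0)]))
```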
And collision avoidance is the kind of problem that ROS, or not ROS itself actually, but the motion planning libraries that were mentioned, focus on. The thing about this kind of trajectory is that it doesn't matter how fast you execute it. You can go slow or fast, as long as you stay on the trajectory. And when I looked at it, it seemed that it didn't have algorithms for our kind of problem, where it actually matters how fast you execute your trajectory. Just because some walk cycle works doesn't mean it will continue to work if you go half as fast, because the reaction forces change. You actually have to consider the dynamics. And all these sampling-based motion planning algorithms, at least as far as I know, don't typically deal with this. That being said, of course, it's still a very nice option to use ROS, for example for this camera project, because you can feed it with raw image data, at least that is what the last talk claimed. And the whole image pipeline is within ROS, and that sounds really convenient. There it really makes sense to avoid this kind of duplication. Okay, are there any other questions? It doesn't look like it. So I ask you again for one round of applause for Felix and Norbert.