We have Jorge Aparicio, he's going to be presenting on Rusty Robots. I've been through the slides, it's going to be very cool. So please give him a warm welcome. Thank you. We don't have as much time as I would like, so we'll dive right in. Can you hear me OK? Yes? No. Should I shout? Yeah. Okay, I will shout. Okay, so this is the robot. This is the front view, and you can see it's a wheeled robot that only has two wheels, and the mess of wires on top is the electronics that control it. There's a Cortex-M microcontroller in there, among other things. This is the side view; you can see it only has two wheels, no extra support point, and the robot is clearly unstable: it doesn't matter which position you put it in, it will always fall to either side. This is similar to a common problem in control theory, the inverted pendulum problem. In that problem we have an inverted pendulum on top of a moving cart, which you can move using this force F. Now, the pendulum will fall due to gravity, and to compensate for that falling action, if the pendulum is falling to the left, then you also have to move the cart to the left. So the problem boils down to picking the force F according to the tilt angle theta that you see there. Okay, so we need to find that tilt angle, and for that we have an accelerometer on the robot. As the name implies, an accelerometer measures proper acceleration, and the sensor I'm using is the MPU-9250. Now, how do I get the tilt angle from the acceleration? Accelerometers, even when they are not moving, always sense the acceleration of gravity. In the picture on the left, you see that the sensor is horizontal, and in that case the accelerometer is going to indicate that the acceleration across the X and Y axes is zero, but across the Z axis it's one times the gravity, or one g, and that corresponds to a tilt angle of zero degrees.
Now on the right, you see that I have tilted the sensor by some angle, and in that case the readings across the Y and Z axes are going to be non-zero. And if you do some trigonometry, you'll find that the angle is the arctangent of the ratio of the Y and Z components. Let's see how that works out in practice. This is data collected from the accelerometer when it's horizontal and not moving. On the top, you see the acceleration across the Y and Z axes, and the data is noisy. At the bottom you see the arctangent formula from before, which gives the tilt angle. The angle is around 2.9 degrees, which is roughly what you'd expect, since the sensor is horizontal and the angle should be near zero. Now, what happens if I start moving the accelerometer? This data is from moving the accelerometer on a horizontal table, so the tilt angle should still be zero, because I'm not tilting the sensor. But what we see here is that once you compute the angle using the formula from before, you get a lot of oscillations, and this is clearly wrong. The problem is that the formula assumed the only acceleration measured by the sensor was gravity, and that's not going to be the case as soon as you start moving the sensor. So the accelerometer alone is not enough to get the angle, which is why we also have a gyroscope on the robot. A gyroscope measures the angular rate, or the speed at which the sensor is rotating. This is perfect, because with it we can measure exactly how the angle is changing. And the same sensor from before has both an accelerometer and a gyroscope. Again, this is data from the sensor, horizontal and not moving. At the top you have the angular rate; at the bottom you have the tilt angle, which you get by integrating the top signal. And you see there that the angle appears to keep increasing as time goes by. That's wrong, because the sensor is horizontal.
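As a sketch, the arctangent computation boils down to a single `atan2` call. This is illustrative code, not the robot's actual firmware; the function name and units are my own choices:

```rust
/// Tilt angle in degrees from the accelerometer readings, assuming the
/// only acceleration the sensor sees is gravity. `ay` and `az` are the
/// accelerations along the Y and Z axes (the units cancel in the ratio).
fn tilt_angle_degrees(ay: f32, az: f32) -> f32 {
    // atan2 instead of a plain arctangent: it handles az == 0 and
    // picks the correct quadrant for the angle.
    ay.atan2(az).to_degrees()
}
```

A horizontal sensor (ay = 0, az = 1 g) gives zero degrees, and tilting until gravity lies fully along the Y axis gives 90.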
The problem here is that the gyroscope says the angular rate is non-zero. That's common in this kind of sensor: the constant offset in the measurement is called bias, and what you have to do is calibrate the sensor by removing the bias. This is the data once the sensor has been calibrated: now the angular rate is around zero, and once you compute the integral, at the bottom, the angle is also around zero, which is the correct result. So the accelerometer and the gyroscope each have problems on their own, but what you can do is combine both measurements, using a technique called sensor fusion, to get a better estimate of the angle. There are many ways to do sensor fusion, but a Kalman filter is appropriate in this case. I won't go into the details of the math behind Kalman filters, but I should say that they are not actually filters; they are system state estimators. For this Kalman filter I have chosen a state consisting of the tilt angle and the gyroscope bias. Here we see a simplified interface to the Kalman filter; I have omitted some tuning parameters. The filter has to start with some initial state, which is the angle and the gyroscope bias. Then, every time we have a new measurement, we update the filter. What the filter does is try to predict the next state using its previous state, compare that to the measurements from the gyroscope and the accelerometer, and use that information to get a better estimate of the angle. And this is what the Kalman filter looks like in action. Again, this is data from the sensor, horizontal and not moving. The blue dots are the tilt angle computed using only accelerometer data, and the green line is the Kalman filter. Both say the same thing, that the angle is around 2.9 degrees, but the difference is that the Kalman filter has much less noise, about an order of magnitude less.
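To make the idea concrete, here is a minimal sketch of such a filter, with the state being the tilt angle and the gyroscope bias. The structure follows the standard predict/correct form; the tuning constants are illustrative placeholders, not the values used on the robot:

```rust
/// Minimal Kalman-filter sketch. State: tilt angle and gyroscope bias.
/// The tuning constants below are illustrative, not the robot's values.
struct KalmanFilter {
    angle: f32,       // estimated tilt angle (degrees)
    bias: f32,        // estimated gyroscope bias (degrees/second)
    p: [[f32; 2]; 2], // error covariance matrix
}

impl KalmanFilter {
    const Q_ANGLE: f32 = 0.001; // process noise for the angle
    const Q_BIAS: f32 = 0.003;  // process noise for the bias
    const R_MEASURE: f32 = 0.03; // accelerometer measurement noise

    fn new(angle: f32, bias: f32) -> Self {
        KalmanFilter { angle, bias, p: [[0.0; 2]; 2] }
    }

    /// `rate`: gyroscope reading; `measured_angle`: accelerometer angle;
    /// `dt`: time step in seconds. Returns the new angle estimate.
    fn update(&mut self, rate: f32, measured_angle: f32, dt: f32) -> f32 {
        // Predict: integrate the bias-corrected angular rate.
        self.angle += dt * (rate - self.bias);
        self.p[0][0] +=
            dt * (dt * self.p[1][1] - self.p[0][1] - self.p[1][0] + Self::Q_ANGLE);
        self.p[0][1] -= dt * self.p[1][1];
        self.p[1][0] -= dt * self.p[1][1];
        self.p[1][1] += Self::Q_BIAS * dt;

        // Correct: blend in the accelerometer measurement.
        let s = self.p[0][0] + Self::R_MEASURE;
        let k = [self.p[0][0] / s, self.p[1][0] / s]; // Kalman gain
        let y = measured_angle - self.angle; // innovation
        self.angle += k[0] * y;
        self.bias += k[1] * y;

        let (p00, p01) = (self.p[0][0], self.p[0][1]);
        self.p[0][0] -= k[0] * p00;
        self.p[0][1] -= k[0] * p01;
        self.p[1][0] -= k[1] * p00;
        self.p[1][1] -= k[1] * p01;

        self.angle
    }
}
```

Fed a constant accelerometer angle and a zero angular rate, the estimate converges to the measured angle while the noise terms trade off trust between the two sensors.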
Now we have another example, where I move the sensor from a position of zero degrees to 90. The blue line is still the angle from only the accelerometer, and the green line is the Kalman filter. As you can see, the Kalman filter has a smooth transition from zero to 90, but the accelerometer has oscillations along the whole transition, which is why we're using the Kalman filter here. Okay, now we have the angle, and we have to move the motors to be able to stabilize the robot. For that we use this piece of electronics called an H-bridge; this module has two H-bridges, so we can use it to control the two motors on the robot. With an H-bridge we can control the direction of the motor. An H-bridge is basically just four switches arranged as you see on the screen. On the left we have one of the possible states of the H-bridge; in that state the power supply is connected to the motor, which applies some voltage and makes the motor move. In the state on the right, the voltage is also applied to the motor, but with reversed polarity, and that makes it spin in the other direction. Since there are four switches, you could get 16 different possible states, but in practice we only use four: the two you see there; another is when everything is open and the motor is disconnected from the power supply; and the last is when you short-circuit the motor, by, say, closing the two switches at the bottom, which makes the motor brake, so it stops almost immediately. With an H-bridge you can also control the speed of the motor, which we are going to need in this robot, and we can do that using a technique called Pulse Width Modulation. The main idea is that instead of having the motor connected to the power supply the whole time, we connect it for, say, just 75% of the time, and the other 25% we leave it disconnected. This transfers less power into the motor, which makes it spin slower.
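The four useful H-bridge states can be written down explicitly. This is just an illustrative model of which switches are closed in each state, not driver code for a real H-bridge module:

```rust
/// The four H-bridge states the robot actually uses, modeled as which
/// of the four switches are closed. Illustrative only.
#[derive(Debug, PartialEq)]
enum HBridgeState {
    Forward, // top-left and bottom-right closed: supply across the motor
    Reverse, // top-right and bottom-left closed: reversed polarity
    Coast,   // all switches open: motor disconnected, spins freely
    Brake,   // both bottom switches closed: motor shorted, stops quickly
}

/// Returns which switches are closed:
/// (high-side left, high-side right, low-side left, low-side right).
fn switches(state: &HBridgeState) -> (bool, bool, bool, bool) {
    match state {
        HBridgeState::Forward => (true, false, false, true),
        HBridgeState::Reverse => (false, true, true, false),
        HBridgeState::Coast => (false, false, false, false),
        HBridgeState::Brake => (false, false, true, true),
    }
}
```

Note that the remaining twelve switch combinations are either redundant or short the power supply, which is why only these four are used.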
The ratio between the on time and the total time is called the duty cycle, and it can go from zero to 100%, where 100% makes the motor spin at full speed and zero makes it stop. Now we have the two pieces: we have the angle, and we can control the motors. So how can we pick a duty cycle to stabilize the robot? We can use a PID controller. In this diagram, the process on the right is the robot, and the variable Y is the measured tilt angle. On the left we have this variable R, which is the set point: the angle we want the robot to be at, so picking something like zero will make the robot stay upright. The difference between those two is the error, and that error is scaled by these three PID gains and turned into this control variable U, which is the duty cycle we apply to the motors. Everything here is computed at runtime except for the PID gains, which have to be selected before running the PID controller. If you pick the right gains for the PID controller, then you get something like this: a stable system, where the robot no longer falls. Here we have data from that previous video. At the top we have the measured tilt angle in the blue line, and the green line is the set point we chose, in this case 10 degrees. The action of the PID controller is to try to stabilize the tilt angle so that it matches the set point. At the bottom you can see the duty cycle as chosen by the PID controller, where a negative value means that the motors reverse their direction. But that's not the only possible outcome when you are trying to guess the correct PID gains. If you don't choose them correctly, you get something like this: an unstable system. What you get there is oscillatory behavior, so instead of having the tilt angle converge to the set point, you get this oscillation around the set point. And, well, that's something you don't want.
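A textbook PID update is only a few lines. This is a sketch of the idea, not the robot's controller; the names and the clamping range are my own assumptions:

```rust
/// Minimal PID controller sketch. `update` maps the tilt-angle error to
/// a duty cycle in [-100, 100]; a negative output reverses the motors.
struct Pid {
    kp: f32, // proportional gain
    ki: f32, // integral gain
    kd: f32, // derivative gain
    integral: f32,
    prev_error: f32,
}

impl Pid {
    fn new(kp: f32, ki: f32, kd: f32) -> Self {
        Pid { kp, ki, kd, integral: 0.0, prev_error: 0.0 }
    }

    fn update(&mut self, set_point: f32, measured: f32, dt: f32) -> f32 {
        let error = set_point - measured;
        self.integral += error * dt;
        let derivative = (error - self.prev_error) / dt;
        self.prev_error = error;
        // Clamp the output to the valid duty cycle range.
        (self.kp * error + self.ki * self.integral + self.kd * derivative)
            .clamp(-100.0, 100.0)
    }
}
```

With, say, `Pid::new(2.0, 0.0, 0.0)`, an error of 10 degrees yields a 20% duty cycle; the real gains, as the talk says, had to be tuned by hand.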
So let's continue with the stable PID gains. Before, we had a stable system, but the robot didn't move, and in the end we want to be able to move the robot. So what should we do to move the robot, like this, for example? The only thing we have to do is change the set point. Before, it was 10 degrees, and that gave us almost no motion, but choosing a value of four, in this case, makes the robot move. At the top we again see the tilt angle stabilize around the set point, and at the bottom we have the PID output, the duty cycle, which also stabilizes, but this time to a non-zero value, and that gives us a speed on the motors. A value less than 10 gives us forward motion, and if we chose a value larger than 10, that would give us backward motion. Okay, so at this point of the talk you may be wondering: this is cool and all, but wasn't this talk about Rust? So now let's talk about how Rust helped build this kind of application. In this diagram you see the microcontroller in the center, and the other components are the external components the microcontroller is connected to. Each edge in this graph is one of the microcontroller pins, and the direction of the edge indicates whether the pin is configured as an input or as an output. The label indicates what the functionality of that pin is. The bottom line here is that you have to configure everything correctly, otherwise your system will not work, and you will have a hard time figuring out what is not working. And Rust can help here, if you design your API something like this. In this program we are going to set up a serial interface, and for that we need to use two pins: a transmission pin, TX, and a reception pin, RX.
In the first line we move all the peripherals of the microcontroller into the current scope, and on the second line we take just one peripheral, GPIOA, which is in charge of configuring the pins of the microcontroller, and we split it into 16 independent pins. The next line changes the pin PA9: it puts it into alternate push-pull output mode, and that's the pin we are going to use as the TX pin. The important thing to note here is the type of the TX variable: you can see the pin name, PA9, but you also have this type parameter inside which says alternate push-pull, and that's the state the pin is in. This technique of putting the state of some value into its type is called type state. On line number four we simply assign the pin PA10 to the RX variable, and we again have the name of a pin in the type, PA10, but the state is different: it's in input mode. The next thing we do is create the serial interface, and to create it we pass both TX and RX by value. Now, this constructor is written in such a way that if you haven't configured your pins correctly, in the right modes, then this won't compile, because they have to have specific types to be passed here. That means you cannot do the configuration incorrectly, because then your program won't compile. Another thing you get here: for example, the last line tries to change the mode of the RX pin into output mode, and that would break the serial abstraction, because the serial abstraction expects that pin to be in input mode. But you cannot do that with this API, because when you constructed the serial abstraction you passed TX and RX by value, so now the serial abstraction owns both pins, and you cannot configure them to be anything else. One other thing Rust helps with is generic drivers. The microcontroller has to interface with this external component, which is the accelerometer and gyroscope.
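The type-state pattern described here can be sketched in a few dozen lines of plain Rust. All the names (`Pin`, `Serial`, the mode types) are illustrative stand-ins for the real HAL's API, but the mechanism is the same: the mode lives in a type parameter, and the constructor takes the pins by value:

```rust
use std::marker::PhantomData;

// Zero-sized types representing pin modes.
struct Input;
struct AltPushPull;

/// A pin whose current mode is carried in the type parameter.
struct Pin<MODE> {
    number: u8,
    _mode: PhantomData<MODE>,
}

impl<MODE> Pin<MODE> {
    /// Consumes the pin and returns it in a new mode (a new type).
    fn into_alternate_push_pull(self) -> Pin<AltPushPull> {
        Pin { number: self.number, _mode: PhantomData }
    }
}

struct Serial {
    tx: Pin<AltPushPull>,
    rx: Pin<Input>,
}

impl Serial {
    /// Taking the pins by value means the serial abstraction now owns
    /// them; nobody can reconfigure them behind its back. Passing a pin
    /// in the wrong mode is a type error, so it won't compile.
    fn new(tx: Pin<AltPushPull>, rx: Pin<Input>) -> Serial {
        Serial { tx, rx }
    }
}

fn demo() -> Serial {
    let pa9 = Pin::<Input> { number: 9, _mode: PhantomData };
    let pa10 = Pin::<Input> { number: 10, _mode: PhantomData };
    let tx = pa9.into_alternate_push_pull();
    let rx = pa10; // already an input pin
    Serial::new(tx, rx)
    // Reconfiguring `rx` here would fail: it was moved into `Serial`.
}
```

Trying `Serial::new(pa9, pa10)` without the mode conversion, or touching `rx` after construction, is rejected at compile time, which is exactly the guarantee the talk describes.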
So instead of writing a driver that only lets my microcontroller interface with that component, I chose to write the driver using generic programming, so that it can work with different platforms. The key to generic programming here is traits, which you can see in this SPI type parameter. It is bound by these two traits, and these traits are interfaces. They basically say: you can construct this driver as long as you pass me a type which implements this SPI interface. And that means that as long as I provide a type that implements the interface, it could be implemented for a microcontroller or for a Raspberry Pi; the driver doesn't care about those details. So this driver can be reused across different devices or platforms. The community is putting together this embedded-hal crate, which is just a bunch of traits that represent the different abstractions you have in embedded systems, like a serial interface, the SPI bus, the I2C bus. The ultimate goal here is code reuse. As I mentioned before, driver authors simply write their drivers using these traits, and they will support any platform that implements those traits, without having to write any platform-specific code. And the benefit for developers targeting some platform is that once they implement these embedded-hal traits, they get for free all the generic drivers built upon them. Right now we don't have many drivers published on crates.io, but the community is working together to get a lot of them out this year. The communication module is one place where Rust really shined. I'm using this module for communicating wirelessly between the robot and my laptop or my phone. This module exposes a serial interface to the microcontroller, which is a simplified interface, so the microcontroller doesn't have to implement the Bluetooth stack. I use this mainly to log data from the robot so that I can analyze it later on.
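The shape of such a generic driver can be sketched as follows. The `SpiBus` trait here is a simplified stand-in for the embedded-hal SPI traits, and `Mpu9250` only models the idea, not the sensor's real register map; the mock bus plays the role a microcontroller's SPI peripheral would:

```rust
/// Simplified stand-in for an embedded-hal-style SPI interface.
trait SpiBus {
    /// Full-duplex transfer: send `out`, return the byte clocked back in.
    fn transfer(&mut self, out: u8) -> u8;
}

/// A driver generic over the bus: any platform that implements `SpiBus`
/// can use it, whether a microcontroller or a Raspberry Pi.
struct Mpu9250<SPI> {
    spi: SPI,
}

impl<SPI: SpiBus> Mpu9250<SPI> {
    fn new(spi: SPI) -> Self {
        Mpu9250 { spi }
    }

    fn read_register(&mut self, addr: u8) -> u8 {
        self.spi.transfer(addr | 0x80); // illustrative "read" flag
        self.spi.transfer(0x00) // clock out the register contents
    }
}

/// A mock bus standing in for a real SPI peripheral, for demonstration.
struct MockSpi {
    last_addr: u8,
}

impl SpiBus for MockSpi {
    fn transfer(&mut self, out: u8) -> u8 {
        if out & 0x80 != 0 {
            self.last_addr = out & 0x7F;
            0
        } else {
            self.last_addr.wrapping_mul(2) // fake register contents
        }
    }
}
```

The driver never names a concrete platform; swapping `MockSpi` for a real HAL's SPI type is the whole point of the trait bound.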
This module is limited to a communication speed of around 10 kilobytes per second, and I want to log data really fast, so I had to choose a format that would let me do that. I chose a binary format that just directly translates all the types into binary. But in my application I didn't have to write any binary serialization functionality; instead I simply grabbed the byteorder crate from crates.io and used it to serialize the data into an array. Now this works, and I can send data in binary format to my laptop. But there is a problem: because this is a wireless link, data might be dropped if the robot gets too far away. To solve that problem, I can add frame delimiters to my data before sending it. For that I'm going to use this COBS framing crate, which is also on crates.io. This algorithm basically adds a frame delimiter, which is usually the zero byte, and then transforms the rest of the data so it doesn't contain any zeros. And it provides a way to both encode and decode the frames. Once I had that, my data was properly frame-delimited, but then I started wondering: could it be that I lose some bytes and still get a valid frame on the laptop, and thus get junk data out of it? So I added a checksum to my data to verify that the frame is actually what I wanted to send. And again, I didn't implement that in my application; I simply grabbed a checksum crate from crates.io, in this case CRC16. Here you can see the full code: I serialize my data, then compute the checksum, append it, then turn all of that into a COBS frame and put it on the wire. One of the last parts is concurrency. I have to do multitasking on the microcontroller, and I only have these two tasks. For that I use this Real Time For the Masses, or RTFM, framework. It lets me build tasks on top of interrupt handlers, so it's basically a hardware-based scheduler, which makes it really fast and efficient. And since it's Rust, I don't have to worry about data races. So I have these two tasks.
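The COBS algorithm itself is compact enough to sketch. This is a from-scratch illustration of the encoding, not the API of the crate used in the talk (and it uses `Vec`, which the real no-std firmware wouldn't):

```rust
/// Consistent Overhead Byte Stuffing: encode `data` so the output
/// contains no zero bytes except the trailing frame delimiter. Each
/// "code" byte records how far away the next zero (or frame end) is.
fn cobs_encode(data: &[u8]) -> Vec<u8> {
    let mut out = vec![0u8]; // placeholder for the first code byte
    let mut code_idx = 0; // where the current code byte lives
    let mut code = 1u8; // distance counter

    for &byte in data {
        if byte == 0 {
            // Close the current block: the code byte replaces the zero.
            out[code_idx] = code;
            code_idx = out.len();
            out.push(0); // new placeholder
            code = 1;
        } else {
            out.push(byte);
            code += 1;
            if code == 0xFF {
                // Maximum block length reached; start a new block.
                out[code_idx] = code;
                code_idx = out.len();
                out.push(0);
                code = 1;
            }
        }
    }
    out[code_idx] = code;
    out.push(0); // the frame delimiter itself
    out
}
```

Because zeros only ever appear as the delimiter, the receiver can resynchronize after a dropped byte by scanning for the next zero, which is exactly why the framing step matters on a lossy wireless link.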
One is periodic: I read the sensors, update my Kalman filter, update the PID controller, and log the data. And then I have this other, asynchronous kind of task, where my laptop sends some data to the microcontroller, which I use to change the PID gains, because I was manually tuning the gains. Now, when you use interrupt handlers and you need to share data between them, you have to use static variables. And static variables are troublesome: one, because you might run into data races, but the major problem, from my point of view, is that they make the code less readable, because anything can modify the variables, so you don't know who has access to which variable. With this framework, we have this declarative app macro where you define all your resources, which are nothing else than static variables, and then you declare your tasks and assign the resources to the tasks. Then, when you're writing a task body, which is at the bottom, each task only has access to the resources that were declared for it in the app macro. If you try to access any other resource that wasn't declared for that task, you get a compile error. The framework also takes care of shared resources: if a resource is shared between two tasks, the framework ensures that access to it is free of data races. For example, for the task at the bottom to access the PID resource, which is shared, it has to use a lock to achieve data-race freedom. OK, some other random stats. The CPU usage of the microcontroller was around 21%; the CPU was running at 64 megahertz, and it had no FPU. The control loop was running around 500 times per second. As for size, my application was around 400 lines of code, excluding the code from the dependencies; everything that ended up in the binary came from Rust source code. The binary size was around 8.5 kilobytes of flash, and actually two kilobytes of that are due to software emulation of float arithmetic, because I don't have an FPU.
RAM usage was 140 bytes, and I didn't use any dynamic memory allocation. At the bottom, you can see the biggest symbols, and among them you will find the software emulation of IEEE floats. Finally, this is the dependency graph of the project. It has around 20 dependencies, excluding build dependencies, so there's a lot of code being reused there. One thing I found scary is that most of those crates have been written by me, except for like three or four. But if you were writing all of this from scratch, you'd have to do all the work of writing those dependencies yourself. OK, in conclusion: Rust is small enough that it can fit in a microcontroller. It's also performant enough that you can implement this kind of time-sensitive control system on a resource-constrained device. It's also memory-safe: you can do multitasking without having to worry about data races or anything like that. It also lets you write more correct code; as we saw with the pin configuration, you cannot get it wrong with that API. You can also easily use third-party code, which we did a lot in the communication module. And it's also good for code reuse: we have this generic driver, which can be used on many different platforms. And that's all I have. Thank you. I think we have time for questions. Yes? How easy was it to cross-compile the code for that target? So the question was, how was it to cross-compile to that target? The thing is that the Rust compiler is already a cross-compiler by default, so I didn't need to do anything special; I could already generate machine code for the ARM Cortex-M microcontroller. The only external tool that I needed was a linker, because rustc doesn't include a linker, so I used arm-none-eabi-ld, and that was the only external dependency I had. Over here. What was your background before you made this? The question is, what's my background for doing this? I have a background in mechatronics engineering.
Yeah, basically, I took several semesters of control theory, so this is the kind of stuff we do there. Yes, at the back? Yes, you. OK. The question is: this was using a Cortex-M microcontroller; how hard would it be to support other architectures and also other microcontrollers, right? OK, I can only speak about Cortex-M microcontrollers. To get more device support, we have a tool called svd2rust. Vendors give us a description of all the registers in a microcontroller in this SVD format, which is basically an XML file, and we can translate that into Rust code that lets us use all the registers on the microcontroller. That gets like 90% of the work done: if you have the SVD file for a microcontroller, then you can basically already do I/O and use the registers. On top of that, you will want to build something slightly higher-level, because manipulating the registers directly can be error-prone, and for that we have these embedded-hal traits. If you implement a HAL that exposes that interface, then there's like 10% of the job left. And once you do that, you get access to these generic drivers. Right now, people are mainly using STM32 microcontrollers; I have seen some people using NXP LPC, and a bit of SAM from Atmel. But the boards that have the most support are the Blue Pill, which is the one I used here, and a few of the Discovery boards. Is there another question? One last thing: we're going to have a robotics gathering in about 30 minutes, so if you're a roboticist or just love robots, see you there.