Hello. Welcome to 10C. A couple of reminders that you've probably heard everywhere else. If you're looking for an exciting career in tending a bar, then you could start tonight, because we need volunteers in the bar tonight. Also, amplified music has to be off by 11 tonight, otherwise we may not have EMF Camp again. Licences and what have you. Anyway, I'm pleased to say we actually have the name of the person speaking, which wasn't on the wiki, so we have Mick Amos from MathWorks here to talk about building robots with MATLAB and Simulink. Thanks so much.

Yes, so hello. My name is Rick. I'm a software developer from MathWorks and a fellow hobbyist. If you were wondering who MathWorks were earlier on, or when you look at your badge, we are the people who do MATLAB and Simulink. So you might have come across MATLAB or Simulink in, say, a university or industry setting. Or, if you're struggling to think of where you've come across the word but you've not actually come across our software, you might have been a victim of some of my late-night marketing: I was the guy walking around with an LED dot matrix on his back, and every so often I would sneak in an advert here or there, just to get the name out there.

Okay. What I want to do for the next 20 minutes is show you how you can use MATLAB and Simulink to build a basic demo, and essentially show you what advantages those two tools can give you in doing this. Let me first explain, for anyone who hasn't seen them before, what MATLAB and Simulink are. MATLAB is a mathematically based scripting language. It was originally designed around matrix algebra, which is where the name comes from: it's short for matrix laboratory. But it's since ballooned out into all sorts of data analytics and other processing needs. Simulink is its sister product, and this is the tool you go to if you want to model a system over time. So, for example, the software in your car or the software on a commercial plane was likely programmed in collaboration with Simulink, or on top of a Simulink model. Both of these tools are available for home use, so you can purchase them for hobby projects, and that's why I want to show you one example of something that you can do with them.

That example is this. You've got a robot and you've got a tennis ball, and all the example is, is that the robot likes the tennis ball: it forever follows wherever the tennis ball goes. That's the demo, that's all it is, and what I'm going to show you is how you can get to something like this using MATLAB and Simulink. The hardware is pretty simple: it's a Raspberry Pi with a Raspberry Pi camera and a standard rover kit, a pre-made version that you can purchase for about £30. The idea is pretty simple too. You take the video feed that's coming from the Raspberry Pi camera, you do some analysis on that to figure out where in the robot's vision the tennis ball is, and then based on that information you decide what the two motors need to do to get the robot to where the tennis ball is.

This is a Simulink model. This is something that you can compile and run on a Raspberry Pi, or an Arduino, or any other hardware that you want to target, and these are the four stages of what it actually does. There's a bit more detail behind the visual stage and a bit more detail behind the controller, and I'm going to show you how to solve those two problems.
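To make that four-stage structure concrete before diving into the details, here is a rough plain-MATLAB sketch of the loop that the Simulink model implements. It assumes the MATLAB Support Package for Raspberry Pi Hardware for raspi, cameraboard and snapshot; findBall, computeMotorCommands and setMotorSpeeds are hypothetical helpers standing in for the visual-analysis and controller stages covered below, which in the real demo are Simulink blocks.

```matlab
% Rough sketch of the sense-analyse-act loop the Simulink model implements.
% raspi, cameraboard and snapshot come from the MATLAB Support Package for
% Raspberry Pi Hardware; findBall, computeMotorCommands and setMotorSpeeds
% are hypothetical helpers standing in for the stages described in the talk.
rpi = raspi();                                   % connect to the Pi over the network
cam = cameraboard(rpi, 'Resolution', '640x480');

while true
    img = snapshot(cam);                         % 1. grab a camera frame
    [found, xBall] = findBall(img);              % 2. where is the ball? (x as a fraction of width)
    if found
        err = xBall - 0.5;                       % 3. signed offset from the centre of view
        [left, right] = computeMotorCommands(err);
    else
        left = 0; right = 0;                     % ball not visible: stop and wait
    end
    setMotorSpeeds(rpi, left, right);            % 4. drive the rover's two motors
end
```

The first problem, then, is visual analysis: how do you find a tennis ball in a video feed?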
If you do a Google search you'll find a couple of algorithms, and there are various different ways to cut it, but the simplest of them is this. You start off with a picture of a tennis ball; in this case I've given you two, one under different lighting from the other. You can bring this into MATLAB using the snapshot function from the Raspberry Pi. You then figure out, okay, the top one makes the tennis ball look orange, so you want to get rid of the illumination effect if you can, and what you're really asking is: which are the yellow pixels in the image? So you convert the image to hue, saturation and value, and you filter the pixels so that what you're left with are the pixels of yellow hue. And there's one final step, which is, well, pixels aren't enough. You want to get rid of detail like the lines on a tennis ball, or, when you've got a bad image, where you only see texture rather than a solid blob. So you can do image morphological operations, operations where each output pixel is the result of some operation on the corresponding input pixel and its neighbouring region. You get to something like this, where each object is now just a solid blob, and finally you just pick out the biggest, yellowest thing, or the biggest, yellowest circular thing, in your vision.

So this is something you can bring into MATLAB: you can figure out what steps you need to take to get to where the tennis ball is, and then you can either take this MATLAB code and insert it into the Simulink model, or take this idea and build the Simulink equivalent. Both are paths you can take.

What do you do with it? Well, one of the things you probably want to do is prove that it works, separate from actually trying to get the robot to behave. So one of the other things you can do, and here the visual analysis and the camera are the same blocks as in the previous model, but I've just inserted the right-hand side, is build a diagnostic video: a video of what the robot sees and where it thinks things are, and then stream that video from the Raspberry Pi to something that can show the stream. So stream the video from the Raspberry Pi to my laptop, say, over Wi-Fi, and you get something that looks a bit like this. It sees where the ball is; this is the accumulation of all those four steps, and the Raspberry Pi can see where the tennis ball is.

Now, this is quite well tuned; I've picked the hue values pretty nicely. Let's say I got that wrong and I wanted to play around with it a little bit more. Because this is Simulink, one of the features it has is that, while the robot is currently running, I can attach my laptop to it in such a way that I can go in and say, well, here's the hue filter, I'm going to change it. In this case, this is a fairly narrow band of hue around yellow. Maybe I want to include some more green colour, so I can increase the upper limit of the hue filter, and when I do that and hit OK, the live video will show the effect of that change. So I can play around with it and see how modifying the particular values has an effect on the robot.
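As a rough sketch of those four steps in plain MATLAB, using Image Processing Toolbox functions, a findBall helper might look like the following. The hue, saturation and value limits here are illustrative guesses that would need tuning to your lighting, exactly as described above.

```matlab
function [found, xBall] = findBall(img)
% Locate the largest yellow blob in an RGB frame and return its horizontal
% position as a fraction of the image width. Threshold values are illustrative.
    hsv  = rgb2hsv(img);                                  % hue/saturation/value, each in [0,1]
    mask = hsv(:,:,1) > 0.10 & hsv(:,:,1) < 0.25 ...      % keep a band of yellow-ish hues
         & hsv(:,:,2) > 0.4  & hsv(:,:,3) > 0.3;          % drop washed-out and dark pixels
    mask = imclose(mask, strel('disk', 5));               % morphology: fill in the seam lines
    mask = imopen(mask, strel('disk', 5));                % and remove small speckle
    stats = regionprops(mask, 'Area', 'Centroid');        % measure the remaining blobs
    found = ~isempty(stats);
    xBall = NaN;
    if found
        [~, idx] = max([stats.Area]);                     % pick the biggest yellow blob
        xBall = stats(idx).Centroid(1) / size(img, 2);
    end
end
```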
That was the visual side. Let me now move on to the second problem, which is: now you know where the tennis ball is, how do you move to it? And to demonstrate that this is not a trivial problem, I'm going to show you a couple of naive controllers.

This first example is: given I can see the tennis ball, if it's to my left, turn the motors on full whack, turning me to the left; if it's to my right, full whack turning me to the right. Sounds simple enough, right? What could go wrong? Well, that isn't exactly what I wanted. As it turns out, turning the motors on full whack probably isn't going to solve it.

So, all right, that was silly. Let's try something a bit more clever. Instead of full whack, why don't I scale the amount of power I give to each motor based on how far the ball is away from the centre? So if it's way off to the left, motors on full whack; if it's a little bit to the left, I put a fraction of the power on. Let's see how this one does. So it converges. It gets close. But it's got this oscillatory motion: it's overshooting to the left, it's overshooting to the right. That's weird.

OK, so the naive approaches aren't working, and you can continue trying things a bit longer, but actually what you want is an understanding of what's going on. So there are a couple of things that you can do. The first one is, instead of trying to do this continually on the robot, you could say: for the sake of prototyping a controller, I'm going to build the entire thing in software. This is the same controller block as before, but instead of having hardware blocks feeding into this controller and taking its output, I now have a software emulation of the robot. This is something that you can now run in software, something that you can iterate on quickly without having to compile each time round. And it drives you towards understanding, because to get a proper emulation of the robot you need to understand all the bits of hardware. So you can delve down into how the camera behaves, how the motors behave, and figure out those individual pieces before going back to the controller you write for this problem.

So, I'm going to give you a hint as to why it gave that oscillatory motion. This is something called delayed response. This is a video feed of what the Raspberry Pi sees, and on the right is a graph of the motor input, so watch the line in red compared to the video feed. Did you see how early the red line came, and how late the video feed responded after the red line? Let me show you it again. So it's doing nothing, it's doing nothing, and now that's the motor going, and that's the response. So it's about a third of a second between when I put some motor input into the robot and when I actually see some response, so I need to calibrate the controller to account for that: it needs to predict where the robot's going to be and slow down in advance of where it wants to go.

You can build this into your simulation. In this case I'm just showing you a bit more detail, and you can ignore the majority of it: when you build a simulation of a robot, you use those motor inputs to figure out a dead-reckoned position and orientation of the robot, and from that you can build up what it sees. But for this delayed response problem, all you need to add to simulate it is these three blocks on the right, which are delay blocks: whatever the signal is at a particular time, output the value from five frames ago.
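To see why that feedback delay produces the oscillation, here is a toy MATLAB simulation in the same spirit as those delay blocks. The turning dynamics and the gain are invented purely to illustrate the effect; they are not measurements of the real rover.

```matlab
% Toy simulation: a proportional controller driving a plant that only
% responds to commands five frames later. The dynamics are made up to
% illustrate the delayed-response problem, not taken from the real robot.
Kp = 3.0;                             % proportional gain
delayFrames = 5;                      % roughly a third of a second at ~15 fps
dt = 1/15; nSteps = 200;

err = 1.0;                            % ball starts well off to one side
cmdHistory = zeros(1, delayFrames);   % commands issued but not yet acted on
errLog = zeros(1, nSteps);

for k = 1:nSteps
    cmd = -Kp * err;                          % naive proportional controller
    applied = cmdHistory(1);                  % robot acts on the command from 5 frames ago
    cmdHistory = [cmdHistory(2:end), cmd];    % queue the new command
    err = err + applied * dt;                 % crude turning dynamics
    errLog(k) = err;
end

plot((1:nSteps) * dt, errLog);                % overshoots left and right before settling
xlabel('time (s)'); ylabel('ball offset from centre');
```

With the delay line shortened to a single frame, the same gain settles smoothly, which is the hint the talk is pointing at: the oscillation comes from the lag, not from the proportional idea itself.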
So we can have this as an emulation, we can try to solve the problem purely in software, and you can try different things out. You can try, I don't know, predicting where the position is going to be by adding some multiple of the derivative, and that works to a certain extent. But if you do a bit of searching around, the thing that most people do is use this thing called a PID controller. It stands for proportional, integral and derivative. So imagine the further away you are from your goal, the more force you want to apply; but if you're not moving, you might need a little bit more force, so you want to be increasing the force over time until it starts to move; and finally you want something that puts the brakes on, so you want something based on the derivative that says the faster it goes, the more force you want to apply in the opposite direction. That's what a PID controller does, and you can take one of these blocks and tune the three parameters for how big an effect each of these three forces has on the problem, and you get to something which, at least in theory, gives you perfect centring of the robot. You can see here that its initial guess as to how much force to apply wasn't quite enough, so it kept increasing and increasing and increasing until it saw a response, and then it appropriately decreased the power until it centred on the tennis ball.

Again, like the visual analysis, this is something you want to try with a real robot. These are parameters that you want to tune, and again, like the visual analysis, I can take this model, I can take this controller, I can run it on the Raspberry Pi connected to my Simulink session, and, well, if it's behaving weirdly I can go in and figure out that I might need to increase, say, the derivative term to give it more braking power, or decrease the proportional term because it's overshooting. So I can do things like this while the robot's actually running and trying to find the tennis ball.

So, putting it all together: once you've solved these two problems, visual and controller, you can put everything together with the camera and the motor outputs, and this is now something, like I said before, that you can run on the Raspberry Pi. You can generate code from it that you can then leave on the Raspberry Pi, and it's now a fully functional robot loving all the tennis balls it can find. So that's pretty much what I had to show you: how MATLAB, an analytical scripting language, and Simulink, an application for modelling systems over time, can help you with a basic hobby project like making a robot that loves tennis balls a bit too much. So thank you for listening.
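To make the PID idea from the controller section concrete, here is the same toy simulation with a discrete PID law in place of the pure proportional one. In Simulink you would reach for the PID Controller block and tune it there; the gains below are illustrative starting points, not the values used in the talk.

```matlab
% Toy simulation again, now with a discrete PID controller. The plant and
% delay are the same invented ones as before; the gains are illustrative.
Kp = 3.0; Ki = 0.5; Kd = 1.0;         % proportional, integral and derivative gains
delayFrames = 5; dt = 1/15; nSteps = 200;

err = 1.0; prevErr = err; integ = 0;
cmdHistory = zeros(1, delayFrames);
errLog = zeros(1, nSteps);

for k = 1:nSteps
    integ = integ + err * dt;                     % integral: keeps pushing if we're stuck
    deriv = (err - prevErr) / dt;                 % derivative: acts as the brake
    cmd   = -(Kp * err + Ki * integ + Kd * deriv);
    prevErr = err;

    applied = cmdHistory(1);                      % same five-frame delayed response as before
    cmdHistory = [cmdHistory(2:end), cmd];
    err = err + applied * dt;                     % crude turning dynamics
    errLog(k) = err;
end

plot((1:nSteps) * dt, errLog);
xlabel('time (s)'); ylabel('ball offset from centre');
```

As the talk says, the interesting part is then the tuning: nudging the derivative gain up if it still overshoots, or the proportional gain down, ideally while connected to the running robot rather than against this made-up plant.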
Great, thanks very much. We have a few minutes for questions, so if anybody has questions... Hang on. Hang on. [Audience question, inaudible.] I'm afraid not, it all compiles down to a native language. It produces C code; it produces C and C++.

You mentioned that it's available for hobbyists. Where can I get it and how much is it? I can't give you exact pricing because I'm not a salesperson, but you can go to our website and there should be links from there to home-use licensing. You're talking of the order of, say, £100 for MATLAB, £100 for Simulink, and a certain amount for each add-on tool that you want as well. That sort of order. Okay, so not the three grand that it costs commercially. Other questions? Cool.

Okay, well, I have just two reminders, the same reminders as before, which are about the bar and also providing feedback. It would be great to get feedback, so you can provide feedback on the EMF Camp website at /feedback. Thanks once again to Rick, and thanks very much for coming.