I tried to come up with a cute title because I saw that a lot of past Python presentations had these cute little plays on words. So hello everyone, I'm Nicholas Nadeau, and today I'll be talking about two of my favorite topics: Python and robotics.

So why robotics? Well, we're currently in the Industry 4.0 era of manufacturing technologies. Data exchange is everywhere; every robot motion, joint angle, and force is being recorded, analyzed, and processed. And with the advent of collaborative robots such as Sawyer and Baxter, humans don't need to be physically separated from robots anymore; robots can work side by side with their human coworkers. That's a whole new paradigm shift in robotics, because they're not caged anymore. And as we all know, Python is pretty good at pushing the boundaries of new concepts.

The cost of robots is also coming down, and that's part of the reason we're seeing more and more robots out in the field in small and medium-sized enterprises. Kickstarter, for example, is hosting a bunch of robots that go for a few hundred to a thousand dollars. That being said, what's the difference between this uArm on SparkFun for about a thousand dollars and this hundred-thousand-dollar KUKA collaborative robot? Well, the KUKA robot is designed to run 24/7 for decades. It's reliable, it has much stiffer joints, and it has very high-quality gears that make it more accurate. And to put things in perspective, today we're talking about industrial robots like this one; unfortunately, not the fun guys in my presentation.

My presentation today is a high-level overview of robotics and calibration mixed in with Python, and hopefully everyone leaves here having learned a little something new. But if you really want to dig deeper into the theory, pick up Craig's Introduction to Robotics, third edition, one of the best books I've ever read.

So who am I? I'm a mechanical engineer and biomedical engineer turned roboticist, and then I discovered software a couple of years ago. I work at the intersection of mechanical engineering and medicine. At Rogue Research, I develop neurosurgical robots for small-animal research, and I'm also doing a PhD at ETS on precision collaborative robots for human-collaborative ultrasound. The upcoming PyCon will be my first, and this is my first Python meetup.

If there's one thing I want everyone to leave here today knowing, it's the difference between accuracy and precision. Precision is another term for repeatability, and robots are really good at that: given a taught pose in space, tell the robot to go back to the same pose over and over again, and for most industrial robots the repeatability is measured on the order of microns. Accuracy, on the other hand, is something robots are less good at out of the box. It's when you tell a robot to go to an arbitrary position in its workspace; for most out-of-the-box robots, this error is measured on the order of millimeters. That might sound small, but for high-accuracy applications it's a big deal. Manufacturers don't usually spec the accuracy because the metric doesn't look good; they'd rather quote the precision numbers. So why is this important? Well, if you're building a surgical robot and it's not accurate, you're going to have a bad time; you'll probably miss that brain region you really want to target with an injection. A quick numeric sketch of this distinction follows below.
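To make the accuracy-versus-precision distinction concrete, here's a minimal NumPy sketch, not from the talk, using made-up illustrative numbers: ten measurements of a single commanded pose, with a small systematic offset (poor accuracy) and a tiny scatter (good precision).

```python
import numpy as np

# Hypothetical example: the robot is commanded to one Cartesian position
# ten times, and an external tracker measures where it actually lands.
commanded = np.array([500.0, 200.0, 300.0])  # mm
measured = commanded + np.random.normal(loc=[2.0, -1.5, 1.0],   # mm bias
                                        scale=0.005,            # mm scatter
                                        size=(10, 3))

# Precision (repeatability): spread of the measurements about their own
# mean. Robots are typically very good at this (micron scale).
mean_pose = measured.mean(axis=0)
precision = np.linalg.norm(measured - mean_pose, axis=1).max()

# Accuracy: distance between the commanded pose and where the robot
# actually went. Often millimeters out of the box.
accuracy = np.linalg.norm(mean_pose - commanded)

print(f"precision: {precision * 1000:.1f} um")  # micron-scale scatter
print(f"accuracy:  {accuracy:.2f} mm")          # millimeter-scale bias
```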
And if you're doing industrial robotics, trying to do pick-and-place or machine tending, and you miss the part in your CNC so it's still left there when the next part goes in, you're going to have plenty of collisions and poor factory-floor performance.

So, to save time and money, we have offline simulation of robots. In the factory, time is money; you don't want to shut down a whole production line to test your latest batch of code. There's no real continuous integration yet on the factory floor. Offline programming uses a model of the robot to simulate the robot's operations and motions. But if your model isn't representative of what happens in real life, isn't representative of the accuracy of your actual robot, you're going to have a bad time.

So this is, I promise you, pretty much the only math, except for a couple of matrix operations in the Python later. How do we model a robot? My personal favorite method is the modified Denavit-Hartenberg (MDH) parameters. It's one of the most common methods; it uses the joints and describes the transforms along the kinematic chain. The model you see here, the alpha, a, theta, d table, is an actual UR10, one of the most common collaborative robots; we'll see it in a bit. This is what the manufacturer intended the robot to be, mathematically. The actual robot in real life, depending on which one you have, can differ from robot to robot; it will have little errors interspersed throughout, and that's where your accuracy errors come from.

Let's take a quick look at robot motions. At the heart of robot motions, everything is a joint motion; everything is angular, around the revolute joints. So making straight lines in Cartesian space isn't trivial, and we see these little curvatures. This brings up the concept of forward and inverse kinematics. Forward kinematics: given a set of joint angles, what is the pose in Cartesian space? Inverse kinematics is much harder: given a Cartesian pose, what joint angles are required to reach it? A lot of robots, the UR10 especially, don't have an easy closed-form inverse kinematic solution, so we have to use iterative methods to solve it. (A minimal sketch of MDH forward kinematics appears just after this section.)

Quick recap of the robot theory: there's a lot of math; modeling and simulation let us save time and money and improve performance; and if accuracy is important to you, you'd better calibrate.

So, on to the Python part. Let me introduce Pybotics, my contribution to the Python community. It's a toolbox for robot modeling and kinematics. It's built on top of NumPy to take advantage of ndarrays and matrix operations, and it's fully typed and takes full advantage of mypy, something I enjoy a lot. Why did I build this? Reason number one: I came from engineering at McGill thinking that Fortran and C were the most important programming languages in the world, and once you get into the real world, you realize you don't have MATLAB toolboxes everywhere. Peter Corke, an Australian roboticist, made an amazing MATLAB robotics toolbox that I could not get access to everywhere I wanted. I also wanted to build something that would integrate with the scientific Python ecosystem, such as NumPy, scikit-learn, SciPy, and TensorFlow, and make use of those modern machine learning and optimization libraries.

So for the fun Python part of today's presentation, here's a UR10 robot.
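Before we dig into the real robot data, here's a minimal sketch of what modified DH forward kinematics looks like in plain NumPy. This is illustrative, not the Pybotics API, and the two-link parameters at the bottom are placeholders, not the real UR10 values.

```python
import numpy as np

def mdh_transform(alpha, a, theta, d):
    """Single-link transform in Craig's modified DH convention:
    T = Rx(alpha) @ Tx(a) @ Rz(theta) @ Tz(d)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa, -d * sa],
        [st * sa,  ct * sa,  ca,  d * ca],
        [0.0,      0.0,      0.0, 1.0],
    ])

def forward_kinematics(mdh, joints):
    """Chain the per-link transforms. `mdh` is an (n, 4) array of
    [alpha, a, theta_offset, d] rows; `joints` are the joint angles."""
    pose = np.eye(4)
    for (alpha, a, theta0, d), q in zip(mdh, joints):
        pose = pose @ mdh_transform(alpha, a, theta0 + q, d)
    return pose  # 4x4 pose of the flange in the base frame

# Placeholder two-link example (NOT the real UR10 parameters):
mdh = np.array([[0.0,       0.0, 0.0, 0.1],
                [np.pi / 2, 0.3, 0.0, 0.0]])
print(forward_kinematics(mdh, [0.2, -0.5]))
```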
I couldn't bring it here today from my lab because it's expensive and a little bulky, but we're going to look at real data from a real robot that I recorded not too long ago. This is one of the most common collaborative robots in the world. The company, Universal Robots, was just purchased by Teradyne for $500 million, and they've only been around 10 or 12 years. They make three robots, essentially the exact same robot in three different sizes; this is the biggest. And they sell about 1,000 robots a month.

This R2-D2-looking device is a FARO ION laser tracker. It's accurate to about 15 microns within a sphere of 110 meters. The sphere I'm playing with there was attached to the flange of the robot. The robot was moved to random locations in its workspace, and I recorded a couple hundred poses, each with an accuracy of 15 microns. We're going to use these poses to calibrate our robot.

So, over to Jupyter. Let's see if I can do this well. Yeah, that worked-ish. All right, imports; we know all this. This will be on GitHub shortly.

The first step in a robot calibration is to define your optimization function. For this step, we're defining a fitness function that takes the measured positions in Cartesian space that we got from the laser tracker and compares them to the forward kinematics computed from the given transforms and the joint configurations we recorded with the robot. Starting with the nominal model of the robot, the mathematical model the manufacturer gives us, we want to converge to a solution where our mathematical model matches real life.

The world frame, a standard four-by-four transform matrix, was very well measured in my lab. A lot of the time, much of your error will actually come from where you placed the robot and how good your measurement of the world frame is, but let's say for the sake of this presentation that my world frame is very good. My tool frame, on the other hand, came from mechanical CAD data, and that's what we're going to quickly calibrate before calibrating the actual robot.

So, using good old pandas to load up the CSV files: everything, including the tool positions, is loaded. Let's, oh, I should probably run the kernel. Yeah, let's just run all cells; live dangerously. All right, there we go, it works. So, positions; I have my joints. Oh, these are the robot ones; we'll get to the sheets at the bottom. All right: joints, laser, joints. The laser values are the actual measured positions in space, and I'm using the frame where the R2-D2-looking laser tracker actually was as my reference frame. A lot of the time you might create a second world reference frame that's more rigid than a robot on a tripod, but for the sake of this, the reference origin was the laser tracker itself.

I measured 151 robot poses. This is the tool transform, the nominal tool transform, because I had to have the robot move and point toward the laser tracker with each motion, and using the nominal one the laser tracker was still able to find the sphere. And this is what the random points in the robot's workspace look like: the hexagon at the bottom is where the robot is positioned, the triangle is the laser tracker, and the blue dots are the randomized joint configurations in its workspace. If this were a research paper, a reviewer would tell me that I'm biased toward one side of the robot. And to prevent overfitting, we want to put aside a set of data; a sketch of what that looks like in code follows below.
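Roughly what the data loading, fitness function, and holdout split look like. This is a simplified sketch, not the actual notebook: the file names are hypothetical, the world and tool transforms are omitted for brevity, and `forward_kinematics` is the helper from the earlier sketch.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file names; the talk loads its recorded data with pandas.
joints = pd.read_csv("joints.csv").to_numpy()     # (151, 6) joint angles
positions = pd.read_csv("laser.csv").to_numpy()   # (151, 3) tracker XYZ, mm

def fitness(params, mdh_nominal, mask, joints, measured):
    """Residuals for calibration: inject the optimized values into the
    nominal MDH model, run forward kinematics (helper from the earlier
    sketch) for each recorded joint set, and compare to the tracker."""
    flat = mdh_nominal.flatten().copy()
    flat[mask] = params                    # touch only unmasked parameters
    mdh = flat.reshape(-1, 4)
    residuals = [forward_kinematics(mdh, q)[:3, 3] - p
                 for q, p in zip(joints, measured)]
    return np.concatenate(residuals)

# Hold out ~30% of the poses so the calibration is validated on data it
# never saw, guarding against overfitting.
j_cal, j_val, p_cal, p_val = train_test_split(
    joints, positions, test_size=0.3, random_state=0)
```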
So we're going to validate the robot's calibration on 30% of our data after the optimization, while calibrating with the remaining 70%. And the nominal model: the initial errors, given a robot out of the box, a UR10 has errors on the order of five millimeters, and this is what that looks like. So even though the robot itself, I'm pretty sure UR quotes about 250 microns of repeatability, if you tell the robot to go to an arbitrary position in space, it might be off by up to five millimeters. That's actually quite dangerous for collisions when you're offline-simulating your production line, or when trying to do surgery; since it's a low-cost robot, a lot of researchers are trying to use it even for neurosurgery these days, in research applications, not at the hospital. Don't worry.

We want to reduce the maximum error below a certain threshold. We're not using the mean error, because you could still have a very high max error while your mean looks really good. With the max error below a threshold, when we're offline programming we can be certain that all our motions will be within that threshold.

First step: a quick tool calibration. I moved the robot through some very basic joint configurations, using just the wrist of the robot, to isolate the tool transform. And Pybotics introduces a concept I came up with called the optimization mask. With SciPy's optimization libraries, you have to pass in an ndarray of just the parameters you want to optimize, but the robot can have up to hundreds of parameters composed of the kinematic chain, the tool transform, the world transform, and any other transforms along your forward kinematics. The mask lets us retain all those parameters but extract only the ones we need when performing the calibration. This one is pretty simple: we have the six dimensions of a world frame; the kinematic chain in MDH parameters, four parameters per link and six links on this robot, so 24 parameters; and the tool is just another four-by-four transform, so six parameters right there.

Time to calibrate. We have our fitness function from before, our tool-calibration robot model, the joints recorded for the tool, and the positions recorded by the laser tracker, and we run a calibration. This one is pretty quick. Let's see the difference: just from the mechanical CAD data, which is usually pretty accurate, but apparently I didn't mount and screw things as accurately as I thought, we had a couple of millimeters of error right there in our tool transform.

That was step one. Step two is the actual robot. Same idea: we're optimizing just the robot now, so we close the optimization mask over the world frame and the tool frame and unmask just the kinematic chain. You'll notice there are a couple of extra falses in the mask; there are redundant parameters in the robot, way outside the scope of this presentation. The maximum number of function evaluations was set really low, to 50, for the sake of computation time in this presentation; that's what the warning is about. And we have our new kinematic model. The diff between the old model and the new model is just little micron-level corrections sprinkled throughout the robot model. In the end, that gives us this kind of calibration: the blue is the robot out of the box, and now all our errors are below one millimeter. And this is from five minutes with a max of 50 function evaluations in SciPy. (A sketch of how the mask and SciPy's optimizer fit together follows below.)
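Here's a hedged sketch of how an optimization mask and SciPy fit together, reusing the `fitness` function and the calibration/validation arrays from the previous sketch and assuming `mdh_nominal` is the (6, 4) nominal MDH array from earlier. The masked indices are illustrative stand-ins, not the actual redundant UR10 parameters.

```python
import numpy as np
from scipy.optimize import least_squares

# The mask keeps the full parameter vector intact but exposes only the
# values the optimizer may change (here: the 6 links x 4 MDH values).
mask = np.ones(mdh_nominal.size, dtype=bool)   # 24 True values
mask[[2, 6]] = False      # illustrative: hold redundant parameters fixed

x0 = mdh_nominal.flatten()[mask]               # start from the nominal model
result = least_squares(
    fitness, x0,
    args=(mdh_nominal, mask, j_cal, p_cal),
    max_nfev=50,          # deliberately low, as in the talk demo
)

# Score the calibrated model on the held-out validation poses.
residuals = fitness(result.x, mdh_nominal, mask, j_val, p_val)
norms = np.linalg.norm(residuals.reshape(-1, 3), axis=1)
print(f"max error:  {norms.max():.3f} mm")
print(f"mean error: {norms.mean():.3f} mm")
```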
I've run simulations that spanned days, and we could go much further, though there comes a point where you hit the asymptote of the optimization. Generally speaking, the more points you have, the better off you are. And just to look at things a little closer, I believe the max error, where was that? Wherever it went, here we go: the max error is now 0.6 millimeters, with a mean of 250 microns. So we turned our $50,000 robot into something that could be used for human surgery.

Back to the fun stuff. Why don't manufacturers calibrate their robots? Well, it would be expensive: every single robot would need its own individual calibration, and most of the time you get the most bang for your buck just by measuring your world frame and your tool frame really well, though doing the kinematic calibration also helps. And we see from this that NumPy, SciPy, the scientific Python community, are fundamental building blocks not only for Python developers and programmers, but for people like me who came from pure engineering and science; it's because of these packages that I was able to integrate into this wonderful ecosystem. Robots aren't very accurate out of the box, but as we saw, with 15 minutes of time and a fun little robot-modeling toolbox, we can fix that.

So if you love robots as much as I do, what's the future of robots for you? Well, I believe there will come a day for hardware continuous integration. One of the talks I proposed at PyCon would have been a whole mechanical-engineering-meets-Python integration thing; I've built packages that try to do continuous integration, very much GitLab-style, but with CAD files. It's a whole mess, because that area of mechanical engineering is a monopoly of vendors, a real pain, and everything's on Windows. You could contribute to robots and devices: make wrappers for all the hardware out there, serial-port communicators, sensors, frameworks, and toolboxes. There are very few standards these days, and we need more open-source standards. Just look at Protobuf; it's been a godsend for robotics. Through ROS, across different manufacturers, we've been able to standardize robot protocol communication, a really big help. And you could do like me and get a graduate degree. I'm going to make a plug for my lab at ETS, the Control and Robotics Laboratory: we have about one of every robot in existence, and fewer grad students than that, so the ratio is prime for having fun with some really expensive toys. And so, thank you very much, and this is where you can find me.

All right, so I'm apparently going to repeat any questions you have for me, so that people in the back can hear them. English or French, I'm good with both, don't worry.

[Question inaudible.] Yes and no. It's funny, because the potential is way out there for doing anything with robotics, and then you watch grad students program robots and you realize they'll break the robots before they get very far. We were doing the Amazon Picking Challenge in our lab, and the number of times I heard the robot just crash into the table, on trivial stuff like picking up packs of gum. You think it's trivial; it's not trivial. So one day, yeah, but I don't know what comes first, AI or robots. We'll find out.

So the question is: if you move the robot, do you need to recalibrate? It depends. The kinematic chain should stay constant if you calibrated it properly. If you drop the robot, then no.
The tool, though, because remember, we're talking about microns here and there, and if you think of the robot as just one long radius, the smallest deviation at the front or the back, over this robot's 800 millimeters, almost a meter long, can create millimeters, even centimeters I've seen, of error at the very end of the kinematic chain. So, rule of thumb: if you unscrew something, do a double check. For the surgical robot, there's some stuff under embargo right now; I'm presenting at the big Neuroscience conference in Washington next week, so at PyCon I might be able to present a little more on the neurosurgical robots. But essentially, we calibrate every single session.

So yeah. Okay, so the question is: given the kinematic model of a robot, how do you know it's right? And how do you know what the model is? Getting the model is pretty easy. I use the right-hand rule and the homogeneous transformations: you have your alpha, a, theta, d transforms. And yes, for those paying attention, there are two parameters missing from that to make it a proper four-by-four transform, but there are certain assumptions behind this, for instance about the in-between joint axes being parallel. There are models that take that into account, and that's an even higher level of calibration. But essentially, for this robot for instance, we start at the base, we have our first joint, we go up and do a rotation, we have our second joint, we go up and do another rotation, third joint, and so on along the kinematic chain. That part is pretty straightforward as long as you follow the step-by-step procedure; Craig's textbook, absolutely amazing, explains it with nice diagrams. How do you know it's right? You need your manufacturer to give you the measurements, or you spend a whole lot of time trying to figure them out yourself. A lot of manufacturers will usually be nice enough to give you the kinematic chain; they definitely won't give you the dynamics, so they won't tell you where the center of mass of each link is, and they won't tell you the friction. What we did here is technically a level-two calibration, just the kinematic calibration. That's not so common in industry unless you're in aerospace and the like, but in academia we often do a level-three calibration, where we take into account temperature, the friction of the joints, gravity, whether the moon is out, all these things, just to go from 250 microns of accuracy down to the tens of microns. And that's when you need a whole ton of points and a whole ton of measurements.

So the question was: how do we know that the error isn't dependent on the previous position of the robot? One important thing, and I skipped over a lot: randomized measurements. It's a bit of a Monte Carlo approach, where you're throwing random measurements out there so that, hopefully, you disperse that dependence into the noise of the system. For sure, where you're coming from is very important, because you might have hysteresis and backlash in your joints. Which is actually funny, because ISO 9283, I believe it is, defines robot accuracy as going to the same five points, from the same direction, only 30 times. And the last revision that was accepted as a change to that standard was in 1998.
And I think a couple of years ago they deemed it still up to date, so they haven't updated it. So maybe push your standards committee for randomized methods. But it's also hard: if you say randomized, then manufacturers will often use the randomization to their advantage. It's nice to have a standard where you know exactly where those five points are, but it's also more mathematically correct to use a randomized method.

All right, not yet, and I've been spoiled. So the question was: have I ever tried calibration on the cheap robots? I've been spoiled, that's for sure, with access to these overly expensive industrial robots. I would imagine calibration would improve their accuracy, but as I said before, the biggest constraint with the cheaper robots is stiffness. And stiffness is a level-three error; it's a non-linear error, where your error also depends on where you are in space, and that would require a whole other level of calibration. I saw this one white robot on Kickstarter where, in the cute little video they made to sell you the thing, they were saying how accurate it was, and they had this pencil-looking thing the robot would go to, and in the video it's off by like three centimeters. But they skip through that portion of the video so quickly that it's like, oh yeah, black magic. So: step one, stiff robot; step two, calibrate.

So the question is: wouldn't it be easier to use machine vision? Visual servoing is the term for that, and it's very easy and very difficult at the same time, because now you run into camera calibration issues and the visibility of your tool. Robotiq, a Quebec company, actually a wonderful company whose co-founder is one of the jury members for my PhD, just added a wrist-mounted camera for their gripper system, so you're able to do visual servoing right from the wrist as you're gripping. That's very good. The biggest constraint I find when I do machine vision applications, especially in the visible light spectrum, is your lighting conditions. In a factory you might have flood lights and you can really standardize that; when I do medical applications, every OR has different lighting conditions, and it drives me crazy. So if you can, use the infrared spectrum: you just blast infrared light, no one sees it, no one's hurt by it, and you use reflective markers or the like. But visual servoing, it's yes and no; it depends on how it's set up. At the lab we have the Finook stereo vision system mounted on a big 80/20 frame. It took about half a day to calibrate. Absolutely wonderful, but only for that one area. And when you have a robot with a reach of almost a meter, say, you want to cover as much of the workspace as possible, so you have to go beyond two cameras, and then you have to start stereo-calibrating and stereo-matching between your multiple cameras; the complexity rises pretty quickly. And if you want to do machine vision, OpenCV is absolutely wonderful and used a lot in industry. Contribute back to those packages. All right, well, thank you very much.