Hi, today I'll be talking about our capstone project, Parrot. Parrot stands for Parallel Asynchronous Robots, Robustly Organizing Trucks. The goal of the project is to automate a situation like an Amazon warehouse, where people currently run around picking up packages from different parts of the warehouse so they can all be packed into one box and shipped out to you. Our approach is to use a swarm of small worker robots to pick up pallets distributed around a warehouse and organize them in a predetermined fashion. In this case we use three robots, and our goal is to have them complete the task twice as fast as a single robot running it alone. The other goals of the project are for each robot to have four hours of battery runtime and for the robots never to collide with each other.

The computer vision system localizes the objects on our field: it figures out the positions of the robots, the pallets, and the goals, and feeds them to the planner, which computes an appropriate path for each robot to take to its pallet and then on to its goal. The original localization design used five NeoPixel LEDs uniquely arranged on each robot; we would run an inverse affine transformation on their detected positions to locate the robots on the field. This turned out not to be robust: lighting varied across the field, and the NeoPixels often overexposed the image captured by the camera. We therefore switched to a simpler ArUco marker setup, of the kind used by companies like Boston Dynamics and Google Wing. These markers let us determine both where the field sits in the camera's view and where the robots are within the field.
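To give a feel for the localization step, here is a minimal sketch of mapping detected marker centers from pixel coordinates into field coordinates. All the numbers and the helper name are illustrative assumptions, not our actual calibration; in practice the marker detection itself would come from something like OpenCV's ArUco module, and here we just assume pixel centers are already available.

```python
# Sketch of field localization from marker detections (hypothetical values).
# Assumes an overhead camera: three field-corner markers define an affine
# map from pixel coordinates to field coordinates (metres). A robot marker
# detected in pixels is then mapped into the field frame.

def affine_from_corners(px_origin, px_x, px_y, field_w, field_h):
    """Build a pixel->field affine map from three corner detections.

    px_origin, px_x, px_y: pixel centres of the markers at field (0, 0),
    (field_w, 0), and (0, field_h).
    """
    ox, oy = px_origin
    # Basis vectors of the field axes, expressed in pixel space.
    ax = ((px_x[0] - ox) / field_w, (px_x[1] - oy) / field_w)
    ay = ((px_y[0] - ox) / field_h, (px_y[1] - oy) / field_h)
    # Invert the 2x2 matrix [ax ay] to go from pixels back to metres.
    det = ax[0] * ay[1] - ay[0] * ax[1]
    inv = ((ay[1] / det, -ay[0] / det), (-ax[1] / det, ax[0] / det))

    def to_field(px):
        dx, dy = px[0] - ox, px[1] - oy
        return (inv[0][0] * dx + inv[0][1] * dy,
                inv[1][0] * dx + inv[1][1] * dy)
    return to_field

# Toy detections: field origin at pixel (100, 80), axes aligned with the
# image, 200 px per metre, on our 3 m x 2 m field.
to_field = affine_from_corners((100, 80), (700, 80), (100, 480), 3.0, 2.0)
print(to_field((400, 280)))  # robot marker centre -> (1.5, 1.0) metres
```

A full version would also recover each robot's Theta from the ordering of its marker's four corners, which is what makes ArUco markers orientation-aware.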
These markers are precise enough to give us the X, Y, and Theta positions of the robots to within about 0.8 millimeters of ground truth. Our sandbox is a 2-by-3-meter foam field, seen here. Goal positions are marked on it, and small pallets made out of various materials are placed around it for the robots to pick up. A single overhead camera tracks the positions of the robots, the pallets, and the goals. These positions are fed into the task and motion planners, which assign robots to the pallet jobs they have to carry out and determine the exact paths the robots have to follow. Finally, a controller makes sure the robots execute those paths: driving to the pallets, picking them up, and dropping them off at their final goal points.

Our planner is divided into two main systems. The first is the task planner, which assigns the robots to pallet drop-off jobs. The second is the motion planner, which figures out the exact path each robot needs to take from its start point to its pallet and then to its goal. Both planners run before the robots start moving; we pre-compute all of the paths to reduce computational load during the actual robot movement and keep the latency of our system low. Once the task planner has handed out assignments, we plan each robot's actual motion. The main goal of this step is to make sure the robots don't collide with each other. We first discretize our continuous state space into a grid of X, Y, and Theta values, and then run a graph search on that grid, from each robot's start position to its goal position, to find an optimal path connecting the two.
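The graph search described above can be sketched as a standard A* over a discretized grid. This is a minimal illustration, not our actual planner: the grid size, the 8-connected move set, and the costs here are all assumptions, and the Theta and time dimensions are left out for brevity.

```python
import heapq
import itertools

# Minimal A* sketch over a discretised (x, y) grid, the kind of graph
# search the motion planner runs. The 8-connected moves, costs, and grid
# bounds are illustrative assumptions.
MOVES = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def astar(start, goal, blocked, width, height):
    def h(p):  # Chebyshev distance: admissible for unit-cost moves
        return max(abs(p[0] - goal[0]), abs(p[1] - goal[1]))

    tie = itertools.count()  # tie-breaker so the heap never compares nodes
    frontier = [(h(start), 0.0, next(tie), start, None)]
    parents, costs = {}, {start: 0.0}
    while frontier:
        _, g, _, node, parent = heapq.heappop(frontier)
        if node in parents:          # already expanded with a better cost
            continue
        parents[node] = parent
        if node == goal:             # reconstruct path by walking parents
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for dx, dy in MOVES:
            nxt = (node[0] + dx, node[1] + dy)
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height) or nxt in blocked:
                continue
            ng = g + (1.4 if dx and dy else 1.0)  # diagonal steps cost more
            if ng < costs.get(nxt, float("inf")):
                costs[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, next(tie), nxt, node))
    return None  # no collision-free path exists

# A wall of static obstacles at x = 2 forces the path to route around it.
path = astar((0, 0), (4, 4), {(2, 1), (2, 2), (2, 3)}, 6, 5)
print(path)
```

The real planner searches in X, Y, Theta, and time, but the structure of the search is the same: a priority queue ordered by cost-so-far plus an admissible heuristic.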
The difference between this planner and most others is that instead of planning only in X, Y, and Theta, we also plan in the time dimension. This allows two robot paths to intersect in position without intersecting in time: two robots can occupy the same place on the grid as long as they aren't there at the same time. Much of this work is done by our collision checker. When it runs, we check not only against static obstacles in the map, like pallets, but also against dynamic obstacles, namely the other robots, and we ensure that no two robots are in the same place at the same time. All of this guarantees that the paths are collision-free and the robots get from one place to another safely.

Now that we have a path for each robot, the next step is sending commands to the robots to make them follow those paths. Since the planner works in a discrete X, Y, Theta, and time space, it outputs paths like these, with angles at multiples of 45 degrees. The controller, however, runs continuously, so those paths need to be converted into a continuous domain. We do this with a cubic Hermite spline interpolation, and then run our feedback and feedforward controllers to make sure the robots are where they should be at each time step. The feedback controller corrects the error between where a robot currently is and where it should be; the feedforward controller drives the robot at the velocity it needs to reach the next waypoint on time. This is just an exaggerated version of what the feedback controller does: whenever a robot slips or loses traction, the feedback controller applies extra effort to move it quickly back to where it needs to be, so the robots hit the waypoints at the exact times the planner specifies.
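The spline-plus-controller step above can be sketched in a few lines. This is a one-dimensional illustration under assumed values: the tangents, the gain, and the `command` helper are hypothetical, and the real system does this per axis with the timings the planner produced.

```python
# Sketch of turning discrete waypoints into a continuous reference with a
# cubic Hermite spline, plus a feedforward + proportional-feedback velocity
# command. The tangents m0/m1 and gain kp are illustrative assumptions.

def hermite(p0, p1, m0, m1, s):
    """Cubic Hermite interpolation between p0 and p1 for s in [0, 1]."""
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

def hermite_derivative(p0, p1, m0, m1, s):
    """d/ds of the cubic above: the feedforward velocity term."""
    return ((6 * s**2 - 6 * s) * p0 + (3 * s**2 - 4 * s + 1) * m0
            + (-6 * s**2 + 6 * s) * p1 + (3 * s**2 - 2 * s) * m1)

def command(p0, p1, m0, m1, s, measured, kp=2.0):
    """Velocity command = feedforward along the spline + P feedback."""
    ref = hermite(p0, p1, m0, m1, s)
    feedforward = hermite_derivative(p0, p1, m0, m1, s)
    feedback = kp * (ref - measured)  # extra effort when the robot slips
    return feedforward + feedback

# One spline segment per axis: from x = 0 to x = 1 with unit tangents.
print(hermite(0.0, 1.0, 1.0, 1.0, 0.5))       # reference at mid-segment: 0.5
print(command(0.0, 1.0, 1.0, 1.0, 0.5, 0.4))  # robot lagging at 0.4 -> 1.2
```

When the measured position matches the reference, the feedback term vanishes and the command is pure feedforward, which is exactly the behavior described above.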
The robots also shake their heads in disapproval when poked. So these are our robots. They are roughly four inches by four inches, built around a custom PCB that provides both the structure of the robot and the electrical interconnect between all of its components. 18650 batteries, along with a voltage regulator, provide all of the power. A NodeMCU handles all of the on-board computation, continuous-rotation servos propel the robot, and an electromagnet picks up the ferrous pallets. Each robot also has a 3D-printed support that gives it three points of contact with the ground, plus weight in the back so it doesn't tip over when picking up a pallet. The robots have a runtime of roughly four hours and a safety timeout: if they don't receive any commands from the main computer, they shut down and won't move.

Here's what parallelization in our system looks like. With only one robot on the field and three pallets, the task takes nine minutes. With two robots, it takes only five minutes, since two of the pallets can be picked up and dropped off at the same time; one of the robots then has to come back for the third pallet, which slows things down a little. Lastly, with three robots, all three pallets are picked up and dropped off in parallel, bringing the task down to three minutes. Across this series of experiments we kept the number and locations of the pallets the same, controlling only for the number of robots, to show what the speedup looks like.
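The shape of that speedup can be illustrated with a toy makespan calculation: assign equal-length pallet jobs to k robots greedily. The three-minute job duration is an illustrative round number, and real runs have variable travel times, which is why the measured 9/5/3-minute results don't match the toy exactly.

```python
import heapq

# Toy makespan illustration of the parallel speedup: greedily give each
# pallet job to whichever robot frees up first. Durations are assumptions.

def makespan(durations, k):
    robots = [0.0] * k            # finish time of each of the k robots
    heapq.heapify(robots)
    for d in sorted(durations, reverse=True):
        earliest = heapq.heappop(robots)   # robot that frees up first
        heapq.heappush(robots, earliest + d)
    return max(robots)            # time when the last robot finishes

jobs = [3.0, 3.0, 3.0]            # three pallets, ~3 minutes each
print([makespan(jobs, k) for k in (1, 2, 3)])  # [9.0, 6.0, 3.0]
```

With two robots, one robot necessarily does two of the three jobs, so the ideal 2x speedup is only reached once there are as many robots as pallets.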
In addition to this hardware parallelization, we also parallelized our software, to avoid slowdowns as more robots are added and to keep our sense-plan-act loop running at the same rate. This was done with separate threads: one for capturing and processing the camera image, another for communicating between the main computer and the robots, and another for planning the controller's next commands, such as what speeds the robots should run at.
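The threading layout described above can be sketched as three pipeline stages decoupled by queues. The stage functions here are stand-ins for the real vision, control, and communication code, and the frame strings are dummy data.

```python
import queue
import threading

# Sketch of the software parallelisation: separate threads for vision
# (capture + processing), control planning, and robot communication,
# decoupled by queues so one slow stage doesn't stall the others.

def vision(poses_out, frames):
    for frame in frames:                     # stand-in for camera capture
        poses_out.put(f"poses({frame})")     # stand-in for marker detection
    poses_out.put(None)                      # sentinel: no more frames

def control(poses_in, cmds_out):
    while (poses := poses_in.get()) is not None:
        cmds_out.put(f"speeds for {poses}")  # plan the next wheel speeds
    cmds_out.put(None)                       # forward the sentinel

def comms(cmds_in, sent):
    while (cmd := cmds_in.get()) is not None:
        sent.append(cmd)                     # stand-in for sending to robots

poses_q, cmds_q, sent = queue.Queue(), queue.Queue(), []
threads = [
    threading.Thread(target=vision, args=(poses_q, ["frame0", "frame1"])),
    threading.Thread(target=control, args=(poses_q, cmds_q)),
    threading.Thread(target=comms, args=(cmds_q, sent)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sent)  # each frame flowed through all three stages in order
```

Because the stages only share data through FIFO queues, adding more robots grows the per-message work in each stage without changing the loop structure.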