And there is really no need to have a general-purpose processor on these. And so one of the things I've come up with in this project is an accelerator-based architecture, which is efficient in energy, weight, and functionality, especially in a real system like this. And finally, the third one is innovations in manufacturing. Previously, when the project started, the way we would build one of these is you would laser-cut the parts in the lab, and Rob Wood, who's the PI of the project, would get under the microscope and hand-assemble them. That would take him half a day, and that's clearly not scalable. So one of his graduate students, Pratik, came up with this technique of pop-up MEMS manufacturing, which is now being popularized. He actually had a startup along these lines that enables mass production of micromechanical devices such as these. The idea here is you shift the complexity to design: at design time, you make these complicated layered structures that can be quickly assembled, and here you can see it gets assembled and released, and you get an actual bee. So they can be mass-produced. The only point I want to make here is that these vehicles are now becoming available.

So let's look at what you can do with a swarm of micro-vehicles such as these. Why use these? I'll suggest four examples. The first one is crop pollination, and it's a good example because it highlights two of the main advantages of such swarms. The first is miniaturization: if you want to pollinate flowers, you want something small, about the size of an insect. The second is massively parallel operation: if you want to pollinate a large farm, it's hard to do with five or ten of these. You want large swarms. So maybe swarms such as these can be used for crop pollination. You can also think of covert operations such as surveillance.
Obviously, given their size, you can use these for such covert operations. Another application is sending them into hazardous areas: instead of sending in first responders right away in the case of an earthquake or a fire, you send in such micro-vehicle swarms and at least let them map out what is going on inside before you actually send in the first responders. And finally, tracking dynamic phenomena such as a chemical plume or an oil spill. Because these are mobile, you can have them localize the source or track the boundary.

So take pollination of flowers, for example. Flowers are very small, and being able to do pollination at that scale is harder with larger aerial vehicles. Think of it as a futuristic application, but I can actually give you the details. The reason the RoboBees in particular are suited for this is the aerodynamics. There are examples from nature where the kinds of maneuvers that honeybees can do are very hard to do with other platforms. Flapping wings are particularly suited for that. Other people in the project are gradually demonstrating those sorts of maneuvers, which are harder to do with regular rotary aerial vehicles. But yes, it's one of the futuristic applications.

Building such systems has many challenges, and I'm going to highlight three of them. The first challenge is energy. Like I mentioned, these aerial vehicles carry the battery that they run off of. Here is the projected power budget of the RoboBee. As you can see, 91% is consumed by the power actuators and 6% by the control actuators, so actuation takes 97% of the total energy budget. That means there is about 3% left for the sensing and computing, so you can only do very simple things. These are the numbers from the E-flite MCX helicopters we use for our testbed.
Again, the processor consumes around 11 mW, the radio consumes around 50, and the actuation consumes somewhere between 1800 and 2700 mW. So it's an order of magnitude higher, and actuation dominates the energy. And even with these numbers, the expected flight times are around 10 minutes. The only point here is that sensing and computing take a very small portion of the budget.

The second challenge is coordination. If you have hundreds of these micro-vehicles, programming or tasking each of them individually is going to be hard. So you want to think of the swarm as one unit, not address them individually. And reasoning about any sort of inter-MAV communication is, again, somewhat harder. This only gets exacerbated by the fact that they can really only do simple things. And finally, there is inherent uncertainty in the deployment. If you have hundreds of these, you can expect individual errors in low-power sensing and actuation. There may be individual failures. And when they go out in the real world, there are dynamic conditions such as wind that affect the deployment.

Building such a micro-vehicle swarm is inherently interdisciplinary. I think of it as drawing from five broad research areas, which are the following: optimizing the systems that actually run these; wireless networking, and how mobility impacts it; perception, meaning what sorts of sensing go on such a vehicle given the weight and energy budgets; given the sensing, what estimation and control algorithms run on them; and finally, if you have many of these, what coordination mechanisms you use. What I'm going to talk mostly about today is the coordination aspect and a system we built to coordinate such micro-vehicle swarms. I'll come back to this picture and tell you about some of my other contributions in these other areas later in the talk.
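Putting the quoted component draws together, a quick back-of-the-envelope check shows why actuation dominates. The battery capacity below is a hypothetical figure chosen only so the arithmetic reproduces the roughly ten-minute flight time; the milliwatt numbers are the ones just quoted.

```python
# Back-of-the-envelope power budget for a small aerial vehicle.
PROCESSOR_MW = 11
RADIO_MW = 50
ACTUATION_MW = 2250          # midpoint of the 1800-2700 mW range quoted
BATTERY_MWH = 400            # assumed battery capacity (mWh), illustrative

total_mw = PROCESSOR_MW + RADIO_MW + ACTUATION_MW
actuation_share = ACTUATION_MW / total_mw
flight_minutes = BATTERY_MWH / total_mw * 60

print(f"actuation share: {actuation_share:.0%}")   # ~97% of the budget
print(f"flight time: {flight_minutes:.0f} min")    # ~10 minutes
```

The point of the sketch is just that the 97% actuation share and the ten-minute flight time quoted in the talk are consistent with each other.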
So for coordinating micro-vehicle swarms, we built this system called Karma. It's a framework to program and coordinate such swarms, and it alleviates each of the challenges I mentioned before. It provides a simplified programming model that lets you break down complicated applications into simple things that these MAVs can do. It provides a centralized coordination mechanism. And finally, it gives you graceful degradation under the environmental conditions and inherent uncertainty that exist.

Throughout my talk, I'll use crop pollination as a driving application. It goes roughly as follows: if you think of this as the world, the application is that the MAVs need to go out and find where the flowers are in bloom, which is roughly that area, and then they need to do the task of pollination.

The first observation I would make is that, like I mentioned, the flight times of these MAVs are very limited. So the mode of operation typically is: they go out into the world, do something, come back, recharge, go out, and do something again. Our first idea is that you can use this central place where they recharge as a place to coordinate. This is something we call the hive-drone model, and it works as follows. There is a central hive, shown here. It acts as the coordinator, and it can recharge the MAVs. The MAVs are simple drones: on every round trip, they do a simple task, come back, and tell the hive what they did. The hive then uses that to adjust future deployments. Here's a cartoon execution. The drone goes out, does a simple task, does some sensing, comes back, and tells the hive what it found. This goes on and on, and as time progresses, the hive gets a better idea of what's going on in the world, and it uses that information to queue future tasks. The second key idea is the programming model.
This programming model is, at some level, inspired by the cluster computing models proposed these days. Like I said, we have an application, which is crop pollination, and the idea is to break the application down into simpler tasks called behaviors. In this case, like I mentioned initially, there's a search behavior and there's a pollinate behavior: I want to go search where the flowers are in bloom, and then I want to go pollinate in those areas. Breaking this down has two advantages. First, it breaks an application down into simpler things that the MAVs can actually execute. Second, it exposes the inherent parallelism that exists in the application: because I have a large swarm, I can actually execute many things in parallel.

You can think of a behavior as something that an MAV executes in a round trip. It does a simple action, say a simple random walk and a sensing or actuation task, and does this repeatedly. That's what a behavior is. The second characteristic is that behaviors produce information on execution. For example, if I'm searching for flowers, a drone reports there are flowers here and there are flowers here; whether it finds them or not, it comes back and tells the hive. And like I mentioned earlier, the hive uses this information to drive future deployments.

So let's look at how you can compose an application in this model. Again, here's our world, and here are a few flowers in bloom. The first task we have is a search behavior, and it emits information called flowers. Then you have a second behavior, which is pollinate, and it says: if you find flowers, do this. What this ends up being is an indirect way of wiring behaviors into a control flow, and this is sort of the key idea. With it, you can have arbitrarily complicated control-flow graphs that have these simple behaviors as components.
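One way to picture this indirect wiring is as a predicate over the information the hive has collected. The following is a minimal sketch with made-up names, not Karma's actual API: a behavior declares the key it produces, and an activation predicate over the hive's data store decides when a downstream behavior becomes active.

```python
class Behavior:
    """Sketch of a behavior wired to others through the data it emits."""
    def __init__(self, name, produces=None, activate=lambda store: True):
        self.name = name
        self.produces = produces        # key this behavior writes to the store
        self.activate = activate        # predicate over collected information

search = Behavior("search", produces="flowers")
pollinate = Behavior(
    "pollinate",
    activate=lambda store: len(store.get("flowers", [])) > 0,
)

store = {}                              # hive-side key-value store
print(pollinate.activate(store))        # False: nothing found yet
store["flowers"] = [(1, "region1")]     # a search sortie reports a sighting
print(pollinate.activate(store))        # True: pollinate becomes active
```

The control flow never names the search behavior directly; pollinate only watches for the "flowers" key, which is what lets arbitrary graphs be composed out of independent behaviors.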
And I'll show you other applications we've built besides pollination, but like I said, you can build arbitrarily complicated applications with this simple programming model.

The third key idea in our system is that, like I mentioned, we want these MAV swarms to work in large areas, so we need a way to reason about space. Again, we have our world, and the way we reason about space is basically by breaking the world down. Think of tiling: here I've broken the world down into hexagons, and there are seven regions. The key point about how these regions are defined is that you need an external localization mechanism that can identify where the MAVs are. You break the world into regions, and the only requirement is that you be able to tell an MAV that it is currently in a certain region. So you can break this room down into four pieces and call them regions 1, 2, 3, 4, and the only requirement from our system is that when an MAV queries, the localization service just needs to say: you are in region X. As long as you can provide such a coarse localization mechanism, you can reason about space in this way. What this does is the following: the problem of coordination now becomes, how many MAVs do I deploy in region X to execute behavior Y? Does that make sense? So the key result of tiling the world into these regions is that you turn the coordination problem into a scheduling problem at the hive.

Do you have one hive per tile, or one hive per...? No, you have one hive for the whole system. There's only one hive; I'm not considering multiple hives.

So there's no computer vision or recognition task? There isn't. Like I mentioned earlier, all I'm thinking about at this point is how I coordinate these things. Some of the perception that goes on is dumbed down.
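The coarse localization requirement can be as simple as a grid lookup. Here is a hypothetical sketch of the "which region am I in?" service; the world dimensions and the 3x3 grid are arbitrary choices for illustration.

```python
def region_of(x, y, world_w=30.0, world_h=30.0, cols=3, rows=3):
    """Map a position to a region id in a cols x rows tiling of the world."""
    col = min(int(x / (world_w / cols)), cols - 1)
    row = min(int(y / (world_h / rows)), rows - 1)
    return row * cols + col + 1        # regions numbered 1 .. cols*rows

print(region_of(5.0, 5.0))    # -> 1  (a corner tile)
print(region_of(25.0, 25.0))  # -> 9  (the opposite corner)
```

Note that the region ids are opaque labels: the system only ever answers "you are in region X", and no geometric relationship between regions is exposed to the MAVs.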
And I'll come back towards the end and tell you how we can incorporate some of these. At this point, I'm only worried about: I have a task, and how do I break it down to execute it in the world? The individual pieces, for example how I sense a flower or how I detect something, are things we are basically abstracting out.

So, like I mentioned, it has turned into a scheduling problem. Let's look at how scheduling works in Karma. The problem is basically: how do you allocate sorties, or these MAVs, to behavior-region pairs? I have these behaviors that I want to execute, and I have these regions where I want to execute them. How do I allocate the MAVs I have to the behavior-region pairs?

You were telling me that this programming model was... So it's inspired by those cluster computing models, but there are very stark differences. For example, what you operate on there is a single pile of data, and there's really no geographic correlation. So, like I mentioned earlier, the programming model at least is inspired from there.

And how do you adapt that? So, again, scheduling is a large area. Our policy is basically: minimize the total application time while being fair to all the active behaviors. If I have two active behaviors, I want to make equal progress on each behavior while minimizing the total application time. That's basically the scheduling policy; we won't go into the technical details.

Let me walk you through how an execution in Karma works. We have our program, which I mentioned initially: search, and if I find flowers, then pollinate. Here is our world. For this toy example, let's assume the hive has five MAVs. We tile the world into seven regions, and initially the hive has no knowledge of what's going on in the world.
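The fairness part of that policy, equal progress across active behaviors, could be sketched as a simple round-robin allocation of sorties. This is illustrative only, not the scheduler Karma actually uses; the pair names are made up.

```python
from collections import deque

def allocate(mavs_available, pairs):
    """pairs: list of (behavior, region). Returns {pair: MAV count}."""
    alloc = {p: 0 for p in pairs}
    queue = deque(pairs)
    while mavs_available > 0 and queue:
        p = queue.popleft()
        alloc[p] += 1                  # hand one sortie to this pair
        queue.append(p)                # round-robin keeps progress equal
        mavs_available -= 1
    return alloc

print(allocate(5, [("search", 6), ("search", 7),
                   ("pollinate", 1), ("pollinate", 2)]))
```

With five MAVs and four active pairs, every pair gets at least one sortie and no pair gets more than one extra, which is the "equal progress" property in miniature.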
Because the search behavior does not depend on any other information, you start out with the search behavior as the only active behavior, and it hasn't been executed anywhere yet. So there are seven search behaviors to be executed, one in each of the regions one through seven. Because I have five MAVs, I deploy them to the five regions I can. They search there, they come back, and they tell the hive that there are actually flowers in regions one and two. We still have two search behaviors remaining, in regions six and seven, but we also add a pollinate behavior in regions one and two because we found flowers, and those are added to the queue as well. Now we deploy two MAVs for the pollinate behavior, one each in regions one and two, and the remaining MAVs for the search behavior in regions six and seven. They come back. Let's assume, for discussion, that they completed a third of the pollinate task in regions one and two. So we deployed one MAV and it completed a third of the task. Karma now reasons that one MAV does a third of the task, so it needs to deploy two to complete the two thirds that remain, which is what it does. They come back and they complete the pollinate behavior. So hopefully you get a rough idea of how Karma reasons, and the only metric it uses is how much work is done by an MAV in a region per round trip.

How do you estimate that? We estimate it from the first round trip. We don't have any idea before that; you send them out, they come back, and that's the first round trip. No, I understand how you can extrapolate. I'm just asking how you do it in the first round. I mean, the MAV comes back and says I pollinated a flower, but it doesn't know if that's a third or a tenth or a hundredth of its task. Right. So that brings me to something I did not mention, which is that the application programmer needs to quantify the behavior, and I'll come to the specification.
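The proportional reasoning in this walkthrough, one sortie did a third of the task so two more should finish it, can be sketched as follows. The function name and the epsilon guard are illustrative choices, not Karma's code.

```python
import math

def mavs_for_remaining(progress_per_sortie, progress_so_far):
    """How many more sorties to finish, assuming each future sortie
    does about as much work as the first one did."""
    remaining = 1.0 - progress_so_far
    # epsilon guards against floating-point round-up (e.g. 2.0000000000000004)
    return math.ceil(remaining / progress_per_sortie - 1e-9)

# One MAV completed a third of the pollinate task, so two more
# sorties should cover the remaining two thirds.
print(mavs_for_remaining(1/3, 1/3))   # -> 2
```

This is also why the behaviors have to be quantified by the programmer: the division only makes sense once "a third of the task" is a measurable number.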
But basically, you need to tell us not just "pollinate this region" but a quantity. The behavior needs to be quantified for us to be able to reason about it. For example, if you want to search a given region to achieve coverage, you say: I want five hundred sensor readings. So I sampled five hundred times and I found n flowers, and that gives us...

So this is an output of the search behavior, the number of flowers that are available to pollinate? Yes. For the pollinate behavior, it's derived from how many flowers were found. That gives me a rough estimate of how much pollination has to be done.

Those scheduling problems are not unique to this application, and they're not even new. Those are traditional scheduling problems, right? Sure, I'm not claiming otherwise. But here the hive has high computing power, so we can do complicated things like that. When we started this project, we started with the assumption that these vehicles can hardly do anything, and we asked: how can we do anything with a swarm of these MAVs?

So I think that's an interesting assumption, right? I'm sure you guys are familiar with sort of the bio-inspired stuff. Part of the challenge with this particular micro-AV effort has always been power. Yeah, it really is. I mean, I showed you the project's budget: about 15 milliwatts is what is allocated to computing and sensing. Right. But I think that has driven a particular set of assumptions about how the system is built. So let's say instead of that, I have one of these actual insects that I've jammed electrodes into, and I can steer this thing around for days. Now it seems like that challenges this entire model, because the amount that insects do in the field is much, much bigger, right?
And so this model of aggregating information at the hive seems to start to break down, right? Have you guys done any analysis looking at the trade-off there as you start to increase the amount that the bee can do in a single sortie? So I don't have hard numbers, but we have spent some time thinking about it, and I'll come back to this. The system we built is centralized in this form, and how you do these things in a distributed fashion is where we are going now. I'll come back to some of the current work, and this goes back to a question that came up here, which is: what about the perception, what about the control? Those are questions I have skipped, but bringing them back into the system is part of the current and future work.

Why do you need to have a radio on any of these, then? We do not; at some level, we don't assume that they have radios at all. They just come back, and when they land at the hive is when they pass the information to the hive. But in your first slide, you had a budget for the radio? Yes, that's just for comparison. For these sorts of embedded systems, traditionally in sensor networks, there's this idea that computing takes little energy and communication is the bulk of the energy; this is just to provide a comparison. There are also other reasons why you might want a radio. For example, I said you need a service where the MAV queries which region it is in, and somebody tells it. There are some practical considerations there, but there is no inter-MAV communication going on; the radio is not being used for any coordination.

How long does it take to recharge? It takes a while. We've done some measurements, and it turns out that if the flight time is a certain amount, recharging takes about twice that. Again, there are a lot of variables.
It depends on the battery characteristics and so on, but if you want to be in the safe region, it takes about that long.

So in the scheduling, you could have a schedule where while one set is recharging, the other set goes out and does the work? Not necessarily. One of the caveats of this whole model is something I'll come to. But if you have a fixed amount of work, then it doesn't matter. Take the example of crop pollination: at a given time, there are 100 flowers that have to be pollinated, and that's a fixed amount of work, and there's a certain area which I have to search. For that, it does not matter: I can send them out in bulk, they come back, recharge in bulk, and I send them out again, because the amount of work I can do is bounded by the resources I have. But if you have something that's dynamic, for example if I'm tracking a mobile target, then things change, and you have to adapt to that.

So, I hope this walkthrough gave you an idea of how our system works. Here's a quick recap of the three key ideas. The first is the hive-drone architecture, where we move the coordination to the hive and keep the drones very simple. The second is the programming model, where we break the application down into behaviors; like I said, it has two advantages: you get pieces that can actually be executed on the MAVs, and it exposes the inherent parallelism that exists in the application. And finally, reasoning about space: we tile the world into regions, and this converts the problem of coordination into one of scheduling, which again makes it tractable.

Now, in our experience with such MAVs, it's really hard to deploy a hundred of them and actually test your algorithms well. So what we wanted was to simulate them, and for this, we built a simulator called Simbeeotic.
I want to recognize that the simulator was one of the primary outcomes of the project.

Yeah, just a quick question. Do you guys also consider 3D? Your tiling is pretty much just 2D, but if you think about, I don't know, a building or something, you might actually want to consider 3D as well. I haven't really talked about that. It seems like it might have some impact on your programming model too, depending on how you express the regions, right? Yeah, I'm not sure it changes all that much. I can still reason about behavior-region pairs. The fact that you're on the first floor or the second floor doesn't necessarily matter the way I'm doing it now, because I don't expose any geometric correlation. Where it would matter is if the MAVs actually knew that region 1 is next to region 2, so that after finishing in region 1 an MAV could go straight to region 2 and continue its work. See what I'm saying? The fact that these tiles exist in space and have some geometric correlation could be exploited in interesting ways, but at this point we are not doing any of that. If we did, then the third dimension would actually come into play. Otherwise, I can just number the regions in sequence and say: go do work in region 18, whether it's on the second floor or the third floor.

What about resource contention among the MAVs when they go to work together? Each flower is very small; even if there's a lot of work, if you have a million MAVs, they could contend for the same flower at the same time. Yeah, this is true, but the way we think of these problems is that the work to be done is much bigger than the number of MAVs. This is true for crop pollination, and it's also true if you're searching large areas or doing some sort of tracking. If you had contention, then again, I'm not sure these models work very well when the amount of work to be done is on the same order as the number of MAVs. But when there's only about one task per MAV, it's not clear.
What's the extent of the reasoning about the work? A task might take a long time, and the space constrains how many MAVs you can have. So this gets into the realm of doing very precise, probably longer-term tasks, which, given where we are in terms of the MAVs, is somewhat harder to do. This stems from a very specific assumption: the application is something along the lines of crop pollination, where if I do 95 to 98 percent of the task, it's okay; if I miss a few flowers, it's okay; if I do roughly the right thing, it's okay. Adapting things like precise tracking to that model is harder. And how do I get resilience in this world? I over-provision in terms of numbers. So it comes from a very specific set of assumptions.

Do you consider different types of bees? Some bees might have different capabilities. Right, so you can think of them as having different functionality: for example, some can do detection, some can do other things. We don't, but again, I'm not giving you all the details of the system we've built. You can do a variety of things in it, and this is something you could incorporate; I'm just not demonstrating that today. Any other questions?

All right. To study these sorts of MAV swarms, we wanted a simulator that gave us the following four properties. We obviously wanted scalability: we wanted to deploy many, many of these and be able to test how things work. Variable fidelity was, again, an interesting requirement. Like I mentioned, you can do research in many of these areas, and what we wanted from the simulator was this: suppose there's a control algorithm that you want to test. That person probably does not care about what sort of networking goes on; he just says, hey, give me some base level.
Whereas somebody who's testing networking needs a very specific networking stack and does not care about what control behavior goes on. So being able to test different things at different levels of fidelity is something that we wanted. And completeness, meaning I want to be able to test every aspect of an MAV swarm: simulate the particular sensor I have, simulate the particular network protocol I have. And I want to test these on actual MAVs, but directly implementing on MAVs might be hard, so I want to be able to do that in a staged fashion.

We tried a variety of simulators that are out there, both robotics simulators and network simulators, and we found them wanting: the robotics simulators mainly on scale. Things like Player/Stage are tools that I've used, but scaling them beyond a handful is very hard. And the network simulators don't give you the realism of a robotics simulator. So what we did was build Simbeeotic, a simulator for micro-aerial vehicle swarms. It's a custom simulator written in Java, built on top of JBullet, which is a six-degree-of-freedom physics engine, so all the physics is simulated in the physics engine. It's extremely modular in design, which is what gives you the variable fidelity: it has many levels of abstraction, and you can work at different levels depending on your need. And finally, it makes deployment easier; I'll come to this point.

Here is the 3D visualization: two videos of 50 MAVs doing a random walk, one in an open world, one in a maze. We have libraries to do a variety of things. There are a variety of sensors implemented: you can do optic flow, you can have laser rangefinders, you can have an omniscient location sensor like a GPS. We implemented a radio propagation model, so you can do a variety of networking experiments. And because it's built on top of a physics engine, you get real-world conditions such as gravity and collisions.
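A common way to exercise a simulator like this is to drive the same scenario under many random seeds and summarize the spread. The scenario function below is a hypothetical stand-in for a full simulator run; the nominal completion time and noise level are invented for illustration.

```python
import random
import statistics

def run_scenario(seed):
    """Stand-in for a full simulator run; returns a fake completion time."""
    rng = random.Random(seed)          # per-run RNG makes results reproducible
    return 100 + rng.gauss(0, 5)       # nominal 100 time units plus noise

# Same scenario, 100 different seeds -> a sense of the variance.
times = [run_scenario(seed) for seed in range(100)]
print(f"mean={statistics.mean(times):.1f}  stdev={statistics.stdev(times):.1f}")
```

Seeding a dedicated `Random` instance per run is what makes any individual run repeatable for debugging while still letting the batch explore different outcomes.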
We also built models for wind. And again, we have libraries to build worlds such as mazes, mines, buildings, and so on; you can generate these very quickly. There is the 3D visualization, which is what you're seeing, and there's provision for logging as you run these experiments. Finally, there's provision for repeated simulation: many of these algorithms you might want to test repeatedly, so I can run the same thing 100 times starting with a different random seed, to get some sense of statistical significance. You can do that easily. It's about 42,000 lines of Java code, and it's available open source; if you're interested, you're welcome to try it.

The final point I mentioned was ease of deployment. Along with Simbeeotic, we have an associated testbed. The testbed has these E-flite MCX micro-helicopters. They're about 15 to 20 centimeters in diameter, you can buy them off the shelf for about $100, and they are toy RC helicopters. The testbed is a 25-foot by 20-foot room equipped with Vicon motion capture cameras. What the Vicon motion capture cameras give us is very precise location at very high frequency. We put these retroreflective Vicon markers on the helicopters, and the cameras track them at high precision, millimeter-level, at hundreds of hertz. That's how we get location. Finally, this is integrated with Simbeeotic: Simbeeotic queries the Vicon system for the location and then injects that location into the simulation. Because we are simulating this in a physics engine, the location gets injected into the simulation, and the vehicle exists inside the physics engine. The control is then passed out to...
So what we did was take these RC controllers, detach the wireless component, and connect those to the computer. You can see it here: there are these wires hanging, and those are antennas for the individual helicopters. So the commands get sent out from the computer to the helicopters, and they get controlled in our testbed. This has advantages similar to any other hardware-in-the-loop system. The helicopters are flying in the real world, but in Simbeeotic, they're flying in the virtual world, so you can think of attaching virtual sensors to them and so on.

One common question we got was about the Vicon system, which is a fairly expensive piece of equipment: is our deployment tied to it? To answer this, we bought Microsoft Kinect sensors and used the Kinect instead of the Vicon system to control a helicopter, and we did a demonstration of this. So the setup does not depend on the Vicon system; you can use any other localization mechanism, as long as something provides location.

Would it be very hard to put a small processor board on them? The way I showed you now, there are no custom electronics, but I'll show you later that we have our own processor and so on; we build our own boards for this. The harder part is the perception: given the very stringent weight constraints, what perception can we put on? But what I'm showing you now is the stock version of the helicopter, controlled externally.

So, Karma is implemented on top of Simbeeotic. Like I described before, an application is a set of behaviors, but for each behavior we need to specify the following three things. First, you give us a binary, which is the actual behavior executed on the MAV.
We don't have strict guidelines, but basically the idea is that this binary does some simple movement, something like a random walk, plus a sensing or actuation task, and hopefully, applied repeatedly, this gives you the coverage you want in a given region. Along with this, you also provide two other things, and this comes back to Jeff's question. The first is an activation predicate, telling Karma when the behavior should be activated, based on the information collected from other behaviors. The second is a progress function: a function that Karma can use to calculate how much progress has been made in this behavior.

On the hive side, we've implemented a data store and a dispatcher. The data store collects all the information; it's a simple key-value store. The dispatcher matters because we had to implement several of them: in our experimental setup, we deploy some MAVs in simulation and some in the actual testbed, and those need different dispatchers. I'll come to another reason why we need a dispatcher. And there are two services provided to the drone. The first is the ability to store information: as it's searching the world, it says store x, store y, and so on. The second is location: at any given point it can query which region it is in, and our system answers, you are in region X.

So here's a video of this in action. There's a lot going on, so let me describe it. The central cube is the hive. You can think of the world as being divided into nine regions; it's a three-by-three tiling. And there's a flower patch in the top area. So initially, you go search the world, find the flower patch, and then go pollinate it. What you see here are the visualizations for these two behaviors; the first is the search behavior.
This shows how much progress is being made. Initially the tiles are white because no progress has been made, and as progress is made the tiles get darker. For the pollinate behavior, everything is initially dark because there is no work to be done yet; work appears as flowers are found. This shows the actual number of MAVs deployed; again these are shown both as tiles and as counts. In this run there are 45 simulated bees and five helicopters flying in the testbed. The testbed helicopters are shown as ovals and the simulated bees as circles. Initially there are only search tasks, like I mentioned, so MAVs get deployed to search. Here you see two perspectives of the visualization; you'll see the view of the testbed shortly. The MAVs go into the individual regions and fly circles, a simple behavior denoting that they are getting the coverage you want. That's the view from the testbed: the helicopters there correspond to the ovals, and here's the patch of flowers you can see. As the first bees come back to the hive and report that there is a patch of flowers here, it starts deploying MAVs to pollinate, denoted by blue circles. The yellow ones indicate that they are searching and the blue ones that they are executing the pollinate behavior. As you can see, it now finds that there is a region it has to pollinate, and, I'm not sure if you can notice, the coverage of that region improves as well. Somebody asked how deployment works and whether we do this in tiers: we just deploy all of them, then they come back, charge, and go out again. This goes on for a while; you'll have to take my word that it does the right thing. You can see this helicopter is now doing pollination, which it does by repeatedly landing on the flowers in that region. And as the work gets done, Karma deploys only the right number of MAVs for the task.
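That last point, deploying "only the right number of MAVs", amounts to a scheduling decision at the hive. Here is a small sketch of how a hive scheduler might size deployments from per-region estimates of remaining work and observed per-MAV work rate. The function name, the inputs, and the largest-remainder rounding are my assumptions, not Karma's actual implementation.

```python
def allocate(remaining_work, work_rate, n_mavs):
    """Split n_mavs across regions in proportion to the number of
    MAV-sorties each region still needs (remaining work / per-MAV rate)."""
    need = {r: remaining_work[r] / work_rate[r] for r in remaining_work}
    total = sum(need.values())
    if total == 0:
        return {r: 0 for r in need}
    raw = {r: n_mavs * need[r] / total for r in need}
    alloc = {r: int(raw[r]) for r in raw}
    # largest-remainder rounding so the allocations sum to exactly n_mavs
    leftovers = sorted(raw, key=lambda r: raw[r] - alloc[r], reverse=True)
    for r in leftovers[: n_mavs - sum(alloc.values())]:
        alloc[r] += 1
    return alloc

# Three equal regions; a headwind in region "C" cuts the per-MAV work rate,
# so the scheduler sends extra MAVs there to keep progress balanced.
alloc = allocate({"A": 10.0, "B": 10.0, "C": 10.0},
                 {"A": 1.0, "B": 1.0, "C": 0.68},   # less work per sortie in C
                 n_mavs=12)
assert sum(alloc.values()) == 12
assert alloc["C"] > alloc["B"]        # windy region gets more MAVs
```

The key idea is that rates are measured, not assumed: the hive only learns `work_rate` from what returning MAVs report, which is exactly the feedback loop behind the wind experiment described next.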
So we've run this on a variety of scenarios. This graph shows efficiency: on the x-axis is the swarm size, and on the y-axis is the speedup relative to the smallest swarm size. I'm also comparing with an ideal offline scheduler; it's ideal because it knows exactly how much work there is to be done. What Karma does, and this goes back to Jeff's question, is an initial deployment, and only when the MAVs come back do we learn how much work is being done per MAV, whereas the offline ideal scheduler knows exactly how much work will be done, which is why it always does better. But you see a near-linear speedup, which is what you want, and which shows that Karma scales well. We also demonstrated adaptivity in the lab; this is just one of the experiments. We simulate wind in a third of the world, and because the MAVs have to fight the wind, they do less work once they get to that region: the work per sortie drops by 32%. But because we reason about how much work is being done per MAV per region, Karma deploys more MAVs to that region to balance the amount of work being done. The scenario takes a little longer, but you make equal progress everywhere. We also introduced error in localization, sensing, and so on, and we show in the paper that it still does the right thing. We've implemented a variety of other applications as well: plume tracking, target tracking, and a crop monitoring application, and some of these are in the paper. Now, one of the caveats of having a centralized system like Karma is what we call information latency. If you think of the swarm as a whole, from the time an MAV senses something to the time the system can actually actuate or make a decision on that information, there's a gap.
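A toy calculation makes this gap concrete. Under the assumption that an MAV reports only when it lands back at the hive and then immediately relaunches, the average "age" of the freshest report from a region depends on how departures are spaced: launching everyone at once leaves long stale stretches, while spreading departures over the round-trip time keeps reports arriving continuously. This is my own illustration of the effect, not the system's actual measurement code.

```python
def mean_info_age(departure_times, round_trip, horizon):
    """Average, over times 0..horizon, of how stale the freshest report is.
    An MAV launched at t0 reports at t0 + round_trip, relaunches at once,
    and so reports again every round_trip thereafter."""
    report_times = []
    for t0 in departure_times:
        t = t0 + round_trip
        while t <= horizon:
            report_times.append(t)
            t += round_trip
    report_times.sort()
    total, last = 0.0, None
    for t in range(horizon + 1):
        while report_times and report_times[0] <= t:
            last = report_times.pop(0)
        total += (t - last) if last is not None else t
    return total / (horizon + 1)

RT = 10  # round trip plus recharge, in arbitrary time units
batch     = mean_info_age([0, 0, 0, 0, 0], RT, horizon=100)   # all at once
staggered = mean_info_age([0, 2, 4, 6, 8], RT, horizon=100)   # spread over RT
assert staggered < batch   # staggering keeps information fresher
```

With five MAVs, staggering reduces the mean age by roughly the ratio of swarm size, which is the intuition behind the dispatcher extension described next.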
The gap is that the system waits for the MAV to come back to the hive, and only then can it make the decision. That is what we call information latency, and it's a problem not for static applications like crop monitoring, but for applications like target tracking. If I have a mobile target I want to track, and the MAVs I deployed to search find the target in region 2, the hive then deploys more MAVs to track it, but the target will have moved in that time. This comes back to your question: what we did was implement another dispatcher. Instead of dispatching all the MAVs at the same time, we stagger them across the time taken for the round trip and the charging. If you assume that all of them have roughly the same lifetime, they then go out and come back in a staggered fashion, so the information I get from a given region is fresher than it would be if they were all deployed at once. That is what I'm showing here: the x-axis has different swarm sizes, and the y-axis, on a log scale, is the information latency, that is, how long ago we last got information from a region, averaged over regions. As you can see, with a continuous dispatcher deploying in this staggered fashion, you actually reduce the information latency. Of course you don't fully remove it; to fully remove the information latency you would need a distributed version of this system. A second extension we did: my colleague Spring Berman works on modeling very large swarms, on the order of 100,000 to millions, modeling them as particles, and she has a whole framework, called ARTRAD, that not only models them but does task allocation stochastically. Karma is a very deterministic way of doing task allocation: it says exactly how many MAVs to deploy in each region. So we wanted to compare that to the
stochastic framework she had. What I did was take her methodology and implement it as a scheduler in our framework, so you can execute the same application using different schedulers and compare them side by side, which is what I did here. The first thing we learned is that it's not quite an apples-to-apples comparison, because the requirements ARTRAD makes of its MAVs are very minimal, whereas in Karma we say "go execute behavior X in region Y", so the MAVs need some mechanism to navigate to that region; ARTRAD makes no such assumptions. But if you can satisfy those assumptions, Karma works about twice as efficiently, with the caveat that it is very sensitive to error: for the kinds of errors we simulated, Karma takes a 30-40% performance hit, whereas the stochastic approach takes only 5-6%. Before I run out of time: I said initially that research in this area is inherently interdisciplinary, so let me go back to the picture. I've spoken about the coordination aspect of building such swarms, but I've worked on many other aspects of these systems. Early in my graduate career, I worked on an embedded system that was a functional prototype of what soldiers carry into battle, called Land Warrior. Given a script, a sequence of tasks, we wanted to demonstrate power efficiency, and the particular piece of work we did was to expose the voltage and frequency scaling controls to the application and do application-specific voltage and frequency scaling, giving the same level of performance while gaining energy efficiency. A second thing I did as a graduate student: I spent part of my thesis studying static and mobile sensor networks together, and for this we built robots called Robomotes. These are complementary to the Motes from Berkeley, which were a popular platform for sensor network research. And so what we did
was make it so you could program the Robomote in TinyOS, just like a mote: in addition to the radio and the sensors, it can now actuate and move around using the same API. We also built a tabletop testbed to accompany it. I also spent part of my thesis working on wireless networking, particularly adapting it to mobility. The first two pieces of work were on power awareness. In the first, instead of doing shortest-path routing, we asked: can I do more power-efficient routing, with nodes being greedy and declining to forward when they are low on energy? The second piece extended that to maximizing network lifetime: not the lifetime of any one individual node, but maximizing the time until the first node fails. A third thing I spent a lot of my PhD on was these robot networks, and while experimenting with them, one thing I found was that, because these were mobile nodes, there would be a lot of route switching, since the routing protocol did not know it was actually running on a mobile node. I was using OLSR, optimized link state routing, and I came up with additional options to embed positional and directional cues, which produced more stable routes on OLSR. For perception: again I was working with robot networks where the robots just had radios, so we wanted to use the radio as a sensor. The work we did here used signal strength, plus the fact that these nodes were mobile, to estimate coarse bearing: if I have five neighbors, by just using mobility and measuring signal strength, can I estimate roughly which direction each neighbor is in? And I know you have a big effort on smartphones under the PhoneLab umbrella: in 2009 I co-taught a course with my advisor on Android phones, and three course projects went on to get published. The main ideas
here were to exploit the sensing on these phones. The first project, OCRdroid, brought optical character recognition onto phones: the pictures you take are not necessarily clean, so can we clean them up before sending them off to an optical character recognizer? The second was building a navigation system for something like a campus, and the innovation here was to use the phone as an interesting sensor: you could point at a building and a floor, press a button, and get information about that particular floor. The third was what we call 2.5D localization: inside a building, GPS does not work, so can you localize a person to the floor they are on? For this we fingerprinted a person walking up, walking down, and going up and down in the elevator, so we can detect those events and infer which floor they are on. On estimation and control: this work again came from the setting of using static and mobile sensor networks together. What I came up with here was a distributed control law to estimate a level set. By a level set I mean, for example, the isotherm where the temperature is 70 degrees. This distributed control law drives a mobile robot to that level set, and not only drives it there but tracks the level set, and it does this using a deployment of static sensors that the robot queries in its neighborhood. Secondly, I mentioned earlier that we used the radio as a sensor to estimate coarse bearing; I used that coarse bearing to improve connectivity in a robot network. And finally, coordination: I spent a lot of time talking about coordination in swarms, and I also worked on coordination in static sensor networks, but I won't spend too much time on that. This brings me to future work. Here are the kinds of things I'm interested in: building large-scale systems that combine computing, communication, sensing, and actuation, and
examples of these are sensor networks, micro-vehicle swarms of the kind I've been describing, and assistive medical devices. I've told you how we can do coordination; some current work I'm doing is on the other pieces, for example, what sort of sensing can go on micro-vehicles such as these. We're collaborating with a startup in DC called Centeye, which builds really low-power, lightweight vision sensors. These are vision chips that give you a 16-by-16-pixel image, which is not much, but what you can get from that is optic flow, and it turns out insects actually use optic flow for navigation. So, using these sorts of low-power vision sensors, plus inertial sensors like accelerometers and gyros, we want to build algorithms that can control an MAV. To the question that was asked earlier: we've instrumented the E-flite helicopters we have with custom electronics to control them, and we use these sensors in algorithms such as ego-motion estimation, for example, how far have I traveled, and feature detection. We are particularly tailoring them for indoor applications: can I detect hallways, can I detect intersections? The part I'm most excited about is taking this and, like Karma, building a distributed system out of it. There are many ideas you can take from both robotics and distributed systems, and this is the kind of thing I'm most excited by. In the last 20 years there has been a lot of work in robotics on SLAM, simultaneous localization and mapping, but a lot of it is tailored to fairly heavyweight sensors like laser range finders and fairly high-resolution cameras. The second thing is that these algorithms are fairly monolithic: they are typically meant to run on a single robot. I have particular ideas for breaking these up, coming up with distributed versions of
this, and that is the sort of thing I'm most excited by in the near future. All of this might sound like it's tailored to micro aerial vehicle swarms alone, but I can tell you how some of these ideas map to other things, like the assistive devices I'm interested in. I'm almost out of time, so let me end with acknowledgements. I'd like to thank my PhD advisor, Gaurav Sukhatme from USC, and the advisors I've had at Harvard; Rob Wood, who's the PI of the RoboBees project; Brian and Jason, who I've had the opportunity to work with, and Jason is now at Swarthmore; and the postdocs I've worked with: Spring Berman, who's now at Arizona State, and Richard, who I work with currently. I've also been part of some great organizations: the embedded systems laboratory where I did a large part of my PhD, the Center for Embedded Networked Sensing at UCLA, and SSR, the group I'm part of now. I'll leave the future work slide up, and I'm happy to take questions. Thank you. Question: When you need the continuous dispatcher, how do you specify that in the system? Answer: You can specify which dispatcher you're using when you configure the system. Question: Is that a pluggable dispatcher? Answer: Yes, these are very much pluggable: when I run an execution I can say, I want to use the continuous dispatcher because I'm sensitive to information latency. Question: I missed the first part, so I may be repeating a question, but in all of these practical applications you're talking about, perceptual uncertainty is rife, even for optic flow or detecting a flower. How does your framework handle uncertainty in the actual perception?
Answer: In the current version, it doesn't quite. For example, for detecting flowers, there are other pieces of the project that have addressed this: as part of the larger project there is a particular template-matching sensor to detect things like flowers. So some of these problems are handled elsewhere, but our system doesn't quite address the uncertainty itself; I'm not sure that quite answers the question. The way I see it, this is a system in development: we pick particular pieces, solve them, and then go back to integrating them. I'm not sure if that's a convincing answer, but that's the one I have. Any other questions? Question: So you have a hive that the swarm uses for message passing, just communicating information. But ants, for example, don't do that: they have a nest, but they also drop markers locally in the environment for communication, so they can immediately know exactly where to go. What was the reason you chose this model rather than a more distributed one?
Answer: The first thing is that building any extra infrastructure gets hard. We started with the assumption that these MAVs are very simplistic, so they could potentially have nothing on board, and asked what we can do under that assumption; we're gradually relaxing it. For things like embedding information in the environment, you need additional infrastructure, and it's not clear that's the best way. The second reason is that this is closely analogous to what bees do: bees don't actually communicate in the field; they come back to the hive, and the hive is the venue for communication. So this bears a fairly close resemblance to what bees do, which is where we started. But like you said, this area is ripe for a variety of such ideas, including embedding things in the environment. Question: I'll follow up with a related question; we've talked about this in the past. In the model of go out, do something, come back, there's no data exchange between insects during flight, no possibility for that. Have you quantified the loss of optimality? There's a simplicity that comes along with it, but at the same time it's difficult to convince someone that some of these things wouldn't help: if a bee could tell another bee something as they fly past each other, or if they could communicate some information, rather than having all the information flow through the hive, which is expensive. Under some reasonable assumptions, have you tried to quantify the loss, the price you're paying, in terms of latency or energy cost, for the simplicity of the programming model? Answer: The latency part is captured by information latency; that's a direct measure of how much I'm losing from the time information is collected to the time a decision is made on it, and that's something we've quantified across applications. The complexity question gets harder, because it's
very application-specific: it's not clear how much communication is actually required. There are other pieces of work before mine, for example work by Radhika's group, where they do go on to quantify this, but in those cases it's a communication question rather than a coordination question, which gets harder. It's hard to quantify unless you tie it down to a particular application. Question: Right, you would expect the solution to be somewhat application-specific, but could you take a specific application, do a local-communication solution for it, and see what the trade-offs are? Answer: I can give you a loose answer. If you classify applications as having a fixed amount of work in the world, the only place you gain is in the last round or so: if it takes, say, ten rounds, in the last round there is some gain, because I might finish the work in my region in half the time and, on my way back, do more work in another region; otherwise there's not much gain. If the task is more dynamic, that's where you gain more. I don't know the full answer, but that's a loose quantification of where the gains are. Question: Have you considered using a mobile hive? I have seen other people do this; with a mobile hive you can minimize the total task delay. Answer: There's quite a bit of work on data muling that has similar ideas, where something mobile collects the information from a sensor network. We haven't done any of that, and the reason is that it increases complexity: if the MAV has to come back and recharge, it now has to reason about where the hive is, and that potentially gets
more complicated: it complicates the navigation to and from the hive. You send an MAV out to a region, it does something there, and it keeps an estimate of how much energy it will take to get back to the hive so that it can return and recharge; a mobile hive complicates that, because you have to do more estimation. Question: But the computation is still done at the centralized hive. Since your coverage area is predetermined, the initial position of the hive may not be optimal, so as the task goes on, you may learn that there is a better position for the hive. Answer: Sure, there are potential gains; I certainly agree. We just didn't do this because I felt it would complicate things even more. Question: If you don't have continuous deployment, you can do everything in stages; you just make sure the MAVs come back to the hive's new position. Answer: That's a very good point. I can immediately see gains in a situation like that: I cover the regions that are nearby, then I move the hive, then I cover the regions nearby again. That's a good idea. Question: Maybe it's a naive question, but are you doing theoretical analysis of any of this? How do you reason about, for example, the scheduling? Answer: We have some analysis of how the scheduling works, but I'm not going to claim that covers what Jeff is asking; I suspect it's going to be harder to analyze unless we tie it down to a particular model.