Thank you all so much for coming. My name is Torie Bosch and I'm the editor of Future Tense. Future Tense is a partnership of Slate Magazine, the New America Foundation and Arizona State University, and our goal is to explore emerging technologies and their implications for policy and for society. And sports, for one month only. You know, when I was a kid growing up in the suburbs, I loathed soccer. I was terrible at it. I was so bad that my father offered to give me 10 cents for every time my foot came in contact with the ball. And it was a really good day if I left the game 50 cents richer. I still don't get soccer, but as with all things, there's one sure way to make me interested: just add robots. So, since 1997, roboticists have competed in the annual RoboCup, in which several different kinds of robots, small and large, compete in the world's favorite sport. In the 17 years since it started, the robots have progressed fairly dramatically. But it's not just about having a good time. Robot soccer serves to get students interested in robotics, and it also challenges roboticists to help improve robot vision, decision making, dexterity and more. Today, on the first day of the World Cup, we're going to look forward to 2050, which is when RoboCup hopes that its winners will be able to beat the human World Cup champions in soccer. We have with us Josh Levin, who is the executive editor at Slate and the co-host of Hang Up and Listen. And he'll be speaking with Dan Lee, who is director of the GRASP Lab at the University of Pennsylvania. GRASP stands for General Robotics, Automation, Sensing and Perception. Dan is also a professor in the School of Engineering and Applied Science at Penn. As director of the GRASP Laboratory and co-director of the Carnegie Mellon-Penn Transportation Center, his group focuses on understanding general computational principles in biological systems and on applying that knowledge to build autonomous systems. 
Today, we'll also see members of his robot soccer team, Team DARwIn, who were winners of the 2013 RoboCup. Dan's going to show off some robots, and then he's going to discuss the future of robotics and sports with Josh. And then afterwards, please stick around to watch the opening World Cup game. Welcome. Alright, can everyone hear me? Okay. Okay, I just wanted to start today off by introducing some of the technology that's going on behind the scenes here. And then we'll see a demonstration here with some of my students. We have Karen, Chris, Junda, and John Cho. And they're going to help me demonstrate some of the abilities of these robots. And the question is really, are we going to be good enough to really be able to play a nice game of soccer? That's the central question that we're looking at. And in particular, what's needed to build intelligence into these robots to play soccer? And I wanted to start off by looking at intelligence and machines in general. So how many people recognize this guy here? So Watson was a machine that could actually beat humans at Jeopardy. So the question is, why is it that we have machines that can beat us in chess, that can beat us in Jeopardy, but we can still kick their butts in a game of soccer? This is the central question I want to look at. And if you look at Watson, it basically had a nice, sophisticated search engine inside of it, but it cheated in several ways. It didn't have to actually read the questions off the screen in the Jeopardy match. It didn't have to physically have a hand hit a buzzer to answer the question. And it didn't actually listen to Alex Trebek when he was reading the questions. It was all done internally in terms of text messages, essentially, followed by an internal search. So the question is, what makes it so hard to embody intelligence in the physical world? This is the central question of robotics. 
How can we get the sensing and the actuation involved with the intelligence of these machines? And the prototype of this is in the movies. So if you remember Arnold Schwarzenegger playing the Terminator, this is our vision of evil machines being able to do all sorts of bad things in the world. And the question is, how close are we to being able to play games like this or to do some of this technology? And not only the Terminator — in terms of robotics research, when you talk to people on the street, our main competition is not actually other research groups. It's actually the impression that Hollywood gives that this stuff is so easy to do. You have all these robots: you have Data from Star Trek, WALL-E, HAL from 2001 — that should be a historical robot at this point, something that could actually read lips. You have a whole history of these robots over time. And the question is what's involved in getting these things to really perform some of the tasks that we envision them to do. So here we have myself and students from the GRASP Laboratory. Robotics is a very interdisciplinary effort. You need to have expertise in many different areas: the mechanics, the design, the electronics, the computation, thinking about the biological implications, the medical side of robotics. There are a lot of different aspects that you need to bring together to be successful in this type of endeavor. And in terms of intelligence, then, what I want to show you is the way we break down the problem in robotics: we take intelligence and break it down into three different areas. One area we think about in terms of perception — that is, how do we use our sensors, right? Our eyes, our nose, our ears, to take in information about the world around us. Then we have what we call planning and cognition. 
From this information that we take in from our sensors, we need to build up a representation of the world around us, right? A model of the world. And then we need to make decisions. That is, should I go over here or should I go over there? And then once you make that decision, you have to then plan out how you're going to do that, right? The trajectory you're going to take to get to that location. And then finally, you have to actually act on those decisions, right? How do I get my motors, my muscles, to actually affect my motion through the world to get me to the place that I want to go? And so, you know, these three areas — the perception, the planning, the actuation — these are all key. If you are missing any one of these, the whole system fails. And this is what makes it so difficult: all three of these parts have to work in synchrony and in perfection, right? So this is, as I say, the difference between nerds who like sports and jocks, right? The nerds might be able to do the perception part and the planning part, but maybe not the actuation and the motor control. Whereas if you really want to have everything — and this is what defines a really elite athlete in a sport — they can see the field, they know what to do, they can anticipate the motions, and then they actually have the motor abilities to do that faster than other athletes do, right? So this is really the key. And so I want to show you now some examples of that. When we started with these robots, we started in the lab with what are known as four-legged robots, using Sony AIBOs as the robots. So on the perception side, this is what we're talking about. Hey, over here. Hello, over here. Right? So this is actually perception using your ears. And this is the same thing you can do: if you close your eyes and I start talking and moving around the room, you can still point out where I'm talking from. 
And that's because your brain is taking in information coming in from both your left and right ears, integrating that, figuring out, in this case, the time difference of arrival of the signal between your left and right ears, then doing trigonometry in your brain to figure out what angle it's coming from, and then being able to focus in on where that sound is coming from. And this is known in the research community as the cocktail party problem, because when you go to a cocktail party, you're listening to a whole bunch of different conversations, and you're trying to focus in on what direction they're coming from. So that's the perception side. Then you have the motor control side, which is actually getting the robots to walk. So in this case, with four legs, we have something called a trot gait here, where the robot is moving opposite pairs of legs simultaneously. It can turn, it can sidestep, and then it can actually do, in this case, some more complicated behaviors. So this is going from a backwards flip to a crab walk upside down. These are some of the motions that the Penn students came up with, and they were also able to do a flip from this position, and then to roll over in this way, right? Yeah, doing a flop there, right — a fake penalty. So that's the idea now: you've got to combine this, the perception and the motion. And what we started with — this is now almost 11 years ago — we put together a team of these robot soccer-playing dogs, and this is what it looked like 11 years ago. So this is the championship match from this soccer competition. This is in Italy, so the play-by-play is in Italian. But you see here, the Penn team is on the right; we're actually playing the University of New South Wales from Australia. You see the kickoff. These are autonomous robots. They're taking in all their senses, trying to figure out what to do. 
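As an aside for the technically curious, the ear-based localization described a moment ago — comparing the time difference of arrival between two "ears" and doing the trigonometry — can be sketched roughly like this. Everything here is illustrative, not from the talk: the function name, the 20 cm microphone spacing, and the far-field (distant source) assumption.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, roughly, at room temperature

def direction_from_tdoa(delay_s, ear_spacing_m=0.2):
    """Estimate the bearing of a sound source from the time difference
    of arrival (TDOA) between two microphones ("ears").

    A positive delay means the sound reached one mic that much earlier.
    Returns the angle in degrees from straight ahead (0 = directly in front).
    """
    # The delay corresponds to a path-length difference between the ears.
    path_diff = SPEED_OF_SOUND * delay_s
    # Far-field approximation: sin(angle) = path_diff / spacing.
    # Clamp to [-1, 1] to guard against noisy measurements.
    ratio = max(-1.0, min(1.0, path_diff / ear_spacing_m))
    return math.degrees(math.asin(ratio))

# A sound arriving 0.2 ms earlier at one mic, mics 20 cm apart,
# works out to an angle of about 20 degrees off center:
angle = direction_from_tdoa(0.0002)
```

The same geometry underlies the cocktail-party trick: once you have a bearing per source, you can steer attention toward one of them.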
You see here, the Australian team — they actually don't play that much soccer down under, so their strategy is more like rugby. They're going to push the ball all the way down the field. They're pushing the ball, you see our defender comes to the ball, the Australian forward gets the ball past our defense, our defender gets spun around. The Australian forward gets the ball; he's now bringing the ball towards our goalie. Our goalie is a little too far back in the goal at this point, but he's going to come out. Our goalie makes the save and then he shoots it down the field. There's actually no offsides in this league. So now you see our attacker: he picks up the ball, he has a special kick — in this case a special sideways kick — and he gets enough English on the ball that it goes all the way into the net. So that's what a goal looks like. The interesting thing is, when we started with robot soccer, it looked more like five-year-old soccer. If you remember five-year-old soccer, a lot of times the ball would go in the corner and all the robots would scamper after it, except for a couple of robots — or, in five-year-old soccer, the kids picking daisies on the side of the field; sometimes the robots do the same thing. And then a robot gets the ball, they don't know which direction the goal is, so they just randomly kick it, it goes to the other corner, and everyone goes after the ball again, right? And this goes on ad nauseam. So that's what robot soccer looked like when we started with this 12, 15 years ago. And now you've seen the four-legged robot soccer-playing dogs, and now they've evolved to what we call humanoid robots. That is, now we have robots that instead of having four legs have two legs, just like humans do. Humans are one of the only species that can actually walk upright on two legs, and the reason is that it's very complicated to walk on two legs. When you're walking on two feet, you're basically falling over every second of your life. 
Your brain is continually monitoring information coming in from your muscles, from proprioception, from the bottom of your feet, and from your vestibular system to figure out how to keep your balance at all times. And now, when we do two-legged soccer, we have to figure out how to control the motors to walk and to kick just like humans do. So this is what we'll show you in a sec with some of the soccer stuff. These are the Naos — we have some Naos here. We also have these Darwins. I'll just show a sequence of what the soccer matches look like. This is actually the small Darwins against a larger Japanese team. This is the championship match from a few years ago. You see them bumping into each other, so balance is key here, right? How do you keep your balance? It's hard to do, right? And you see them still falling over. You see our robot making this kick. You see the Japanese robot make the diving save, right? And then the Japanese robot makes a save, but then he gets up and he can't see the ball anymore. It's right behind him — the Japanese goalie is lost — and that gives us an opportunity to just kick the ball into the net, right? So that's what it looks like with the two-legged robots, okay? And so you'll see that in a little bit in a demonstration of these robots on the stage. But this is not just fun and games, right? There's a lot of technology here that's relevant for our daily lives. In terms of understanding how we walk, there are a lot of medical implications, right? For understanding Parkinson's disease, those kinds of effects — how our sensing system is integrated with our brain, with our motor control centers, and how we walk, how we keep our balance. As I showed you, it's a very difficult problem. So we have some research in trying to have the robots learn how to balance. 
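The vestibular-style balance estimation described here — fusing a fast-but-drifting rate sensor with a noisy-but-stable gravity sensor — is often sketched as a complementary filter. This is a generic illustration, not the Penn team's actual controller; the function names, the blend factor, and the toy feedback gain are all assumptions.

```python
def complementary_filter(tilt_prev, gyro_rate, accel_tilt, dt, alpha=0.98):
    """One update step of a complementary filter for tilt estimation.

    gyro_rate:  angular velocity from a gyroscope (deg/s) -- smooth but drifts
    accel_tilt: tilt inferred from an accelerometer (deg) -- noisy but drift-free
    alpha blends the two: mostly trust the integrated gyro over short times,
    but let the accelerometer slowly pull the estimate back to true vertical.
    """
    return alpha * (tilt_prev + gyro_rate * dt) + (1.0 - alpha) * accel_tilt

def recovery_torque(tilt_deg, gain=-12.0):
    """Toy balance reaction: push back proportionally to the estimated lean."""
    return gain * tilt_deg

# One 10 ms step while rotating at 10 deg/s with the accelerometer reading level:
tilt = complementary_filter(0.0, 10.0, 0.0, 0.01)
```

A real biped controller is far more involved (whole-body dynamics, footstep adjustment), but this blend of gyroscope and accelerometer is the basic idea behind "keeping upright" with phone-grade sensors.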
So, you know, if they mess up, we hit them a little bit, and you see that these robots have to figure out how to recover from these external pushes, right? So this is a key thing in terms of how we use our vestibular systems. The robots have gyroscopes and accelerometers, just like your cell phones do, and they use that information to keep upright. And the same types of technology are then applied to a lot of different types of projects in the lab. So we have done a lot in terms of the DARPA Grand Challenges, if you know something about those. Some of the team members that did robot soccer helped us with the 2007 Urban Challenge, which was about trying to develop self-driving cars. At that time, a lot of our Penn RoboCup team members went on, and we put together a Prius with a bunch of sensors, just like these robots have sensors. The car had a bunch of the same types of vision sensors, as well as LiDAR sensors. Then you had the actuation, just like you had to do motor control: you had to control the steering wheel and the brake pedals. And then you would basically put together a system that could drive itself. So this is back, you know, seven years ago. This was, I think, one of the first self-driving Toyota Priuses at the time. Now Google has them, right, all over the place in California. But what you see here is during the Urban Challenge. You have the sensing system taking in information. It's driving. It's steering the steering wheel. It's hitting the gas pedal, the brake pedal. And then internally, just like our robots do, it has to keep track of a map of the surrounding environment. And so what these robots are doing is taking in all that information and making a map of where they are — this red triangle — what the asphalt looks like, what the grandstand here looks like, what the different barriers look like. 
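The map-building just described can be sketched as a simple occupancy grid: every obstacle detection votes for the grid cell it falls in, and heavily-voted cells get treated as barriers when planning. This is a toy illustration, not the Urban Challenge code; the function name, the half-meter cell size, and the sparse-dictionary representation are all assumptions.

```python
def update_grid(grid, robot_xy, hits, cell_size=0.5):
    """Accumulate LiDAR-style obstacle detections into a 2D occupancy grid.

    grid:     dict mapping (col, row) -> hit count (a sparse grid)
    robot_xy: robot position in meters
    hits:     detected obstacle offsets in meters, relative to the robot
    Each detection increments the count for the cell it falls in; a planner
    would then avoid cells whose counts exceed some threshold.
    """
    for dx, dy in hits:
        x = robot_xy[0] + dx
        y = robot_xy[1] + dy
        cell = (int(x // cell_size), int(y // cell_size))
        grid[cell] = grid.get(cell, 0) + 1
    return grid

# Two detections of the same barrier ahead, and one behind the robot:
grid = update_grid({}, (0.0, 0.0), [(1.0, 0.2), (1.1, 0.3), (-2.0, 0.0)])
```

Real systems also model free space and sensor noise (e.g. log-odds updates), but counting hits per cell is the core of turning raw range returns into "what the barriers look like."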
And these are the types of things that we have put together in terms of algorithms — the sensing, the planning, the control. It's actually the same architecture that we use on the robot soccer team being used in these kinds of self-driving cars. And just to show you how sophisticated that can get: just like in robot soccer, where you have to take in information and decide when to pass, when to play defense, when to play offense, where to go, you have the same type of decision-making when you're driving. In this case, we're getting to a four-way stop sign. This is our Urban Challenge car. It gets to this intersection. The human driver here has gone through the intersection. We come to the stop sign. There's actually another robot car here, which is stuck in the intersection. Now, according to the rules of a four-way intersection, the first car at the intersection has the right-of-way. That robot car has the right-of-way, so we're waiting for it to go through, but there's a problem with that guy's software. The rules then say that after 45 seconds, we can go out of turn. So we're going to make a right-hand turn: we start signaling and we make the turn. And then what we see is another robot car that tried to illegally pass the stuck robot — this is actually Cornell's robot here in front of us. You can see now all these human drivers are saying, these robot cars don't know where they're going; it's going crazy. The human driver backs up. Then our robot is able to perceive autonomously that there's enough space to pass around the stuck robot and then continue in the lane. So the same type of decision-making that you need to make in this kind of real-time scenario with the robots is now applicable for things like self-driving cars. And then we also have, as you saw, not just one robot but many robots — teams of robots. This is another research challenge. 
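The four-way-stop episode above — first come, first served, with a 45-second timeout for a stalled car — might be captured by logic roughly like this. It's a toy sketch: the class and method names are invented, and only the 45-second figure comes from the talk.

```python
class StopSignHandler:
    """Toy version of four-way-stop precedence: yield to whoever reached
    the intersection first, but if the car with the right-of-way sits
    motionless too long, treat it as stalled and proceed out of turn.
    """
    STALL_TIMEOUT = 45.0  # seconds, per the Urban Challenge rule described

    def __init__(self):
        self.wait_started = None  # when we first saw the other car not moving

    def decide(self, now, we_arrived_first, other_car_moving):
        if we_arrived_first:
            return "GO"
        if other_car_moving:
            self.wait_started = None  # they are taking their turn; keep waiting
            return "WAIT"
        # The other car has right-of-way but is not moving: start the clock.
        if self.wait_started is None:
            self.wait_started = now
        if now - self.wait_started >= self.STALL_TIMEOUT:
            return "GO"  # assume it is stalled; proceed out of turn
        return "WAIT"
```

The interesting part is that the rule is stateful: the car has to remember how long it has been deferring, which is exactly the kind of bookkeeping a purely reactive controller can't do.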
How do you get robots to collaborate with each other? How do you collaborate with humans? This is all part of the technology. We also have flying robots in the lab. This is another project that students at Penn have been working on. Here we have a quadrotor. It's flying, and it's actually mapping the world in three dimensions. It's able to plan its trajectories, go into different places, see the world in terms of three-dimensional objects, and it can map out a room in basically higher fidelity than you get from your blueprints. These are the types of things that you can do with robots on these kinds of simple platforms now. And then, as I said, robot teams. Professor Vijay Kumar at Penn has been working on collaborative decision-making — here you see a fleet of 20 of these quadrotors flying, going up and then executing trajectories, in this case making different formations in the lab, being able to fly in coordination. Just like when you play soccer, you need to think about formations. You have to think about how the robots here can decide for themselves which positions to go to, how to control that, how to map them, and how to deal with obstacles — in this case, a window that they have to fly through. So these are the types of decisions they have to make, being able to control and coordinate all these things together. And finally, more recently — there was the Urban Challenge, and now there's something from DARPA called the Robotics Challenge. This is a search and rescue mission. You've seen these small humanoids; we now have in our lab a large humanoid robot that can manipulate objects as well as walk around. 
And the idea here is: can you make a robot that could, say, survive an environment like Fukushima, and be able to manipulate valves, open doors, and go through those kinds of environments? Recently, what we demonstrated with this large-scale humanoid robot — a big version of the small one that you see here — is the robot sitting in one of these utility vehicles, and you see it driving. It's a regular vehicle, and the robot is using its sensors. A human operator is helping guide the robot where it should go, but then you see the robot turning the steering wheel and hitting the gas and brake pedals with its feet. Just as you would manipulate a ball in soccer, you see it manipulating the gas and brake pedals on the car. So this is all I wanted to say in terms of an overview. We see that with the hardware technology we have now, the robots have basically the same amount of computation that's in your cell phones, and we have these actuators, but the intelligence is still lacking. What we need to do is think about how we can integrate the perception, as I talked about, with the control and planning. I still think that we're far away from this 2050 goal of being able to play with a human team on a real soccer field, but what we'll do right now is demonstrate a little bit of what they can do currently with the limited technology that we have. Take it away. What we'll do is — first, let me pull this up. Which one do you want to start with? This one? We'll start with the Darwin. Are you guys ready? So yeah, these are autonomous robots. Again, as I said, their main sense is a camera — a vision camera. And what they're going to do now is track this ball, and then they're going to try to kick it. So here, you know, you can see them balancing and kicking and then being able to track the ball, getting to the ball. 
So it's able to shoot like this, and then it's just tracking the ball as it goes around. And then if you knock them over, they'll get back up — it's using its inertial sense to be able to do that. It's able to turn. And then kicking — in this case, it's just kicking the ball, so it doesn't know which direction. It's basically in a mode like five-year-old soccer: it's just going after the ball as fast as it can and chasing it. So that's what this guy looks like. You see the speed of this? Then we have the larger ones. So why don't you run the Nao now? Okay, so here. Yeah, go ahead. Let's run it. Okay, so this is another robot. This is actually a standard platform that was designed by a French company, Aldebaran. Let's go, let's go. So yeah, you see here, this is a bigger robot, so it's more mass. It has to figure out how to get to the ball again. What's wrong? So he's a little confused. Obviously this guy is a little... What's going on? He will also get up if he falls over. Oh, he should be getting up. He's having a little harder time doing this. You guys want to run the other guy? Come on. You know, bump into each other. Why don't we run two of them, then? Go ahead, run the other guy. Okay, I think his state machine is off. Okay, you guys — why don't you run all of them, then? So one of the things that can confuse robots is, again, the coordination, and right now we're not coordinating our robots. So one of the hard things to do — right, just like in five-year-old soccer, if they're not coordinated, and we don't have a network right now to do this, they need to send messages to each other, just like in the cloud, and they don't know who's playing offense and who's playing defense. The way you coordinate is to communicate with your team about who should go where. So you can see how they respond to that. And they push each other over. 
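The coordination Dan describes — robots sharing their positions so the team can agree on who plays offense and who plays defense — can be sketched as a simple closest-to-ball role assignment. This is a generic illustration, not Team DARwIn's actual protocol; all of the names and the role labels here are invented.

```python
def assign_roles(robots, ball):
    """Naive team coordination: each robot broadcasts its position, and
    the team agrees that whoever is closest to the ball attacks while the
    others fall back to support and defense. Without this shared message
    passing, every robot chases the ball -- "five-year-old soccer."

    robots: list of (name, (x, y)) tuples; ball: (x, y) position.
    Returns a dict mapping robot name -> role.
    """
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    ordered = sorted(robots, key=lambda r: dist(r[1], ball))
    roles = {}
    for i, (name, _pos) in enumerate(ordered):
        roles[name] = "attacker" if i == 0 else ("supporter" if i == 1 else "defender")
    return roles

# Three robots and a ball at (3, 0): B is closest, so B attacks.
roles = assign_roles([("A", (0, 0)), ("B", (2, 0)), ("C", (9, 9))], (3, 0))
```

In practice each robot runs this same deterministic rule on the broadcast positions, so the whole team reaches the same answer without a central referee.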
You can also have a demolition derby here with the robots. And so you see what happens when they fight for the ball — you know, we actually have penalties in our matches. So if a robot pushes over another robot too hard, too aggressively, it will actually go in the penalty box for a short period of time. And you see, balance is very hard with two legs, especially when people are pushing you at the same time. So I think this guy's dead. And the reason for the arms — whoops, this guy's head came off, so he can't see. And you also see the way we hold the arms in: there's a penalty, right, if you actually hit someone with your hands or you touch the ball with your hands, so they keep them in. That's the reason why they have such a funny stance. Where's this guy going? I think there are a lot of lights and camera flashes and things that confuse it. Let's clear this out. He's not doing anything. So you can see that they're not going to beat the Brazilian team anytime soon, as you see the technology. All right, that's enough bullying. All right. Okay, I think that's enough for the robots. Okay, here you go. Okay, watch out for the chairs. We need the chairs here. Well, first let me introduce myself. I'm Josh Levin from Slate. I host the sports podcast Hang Up and Listen. We generally talk about human athletes, but I'm very happy today to be talking with Dan about our greatest robot athletes. And the thing that really strikes you, listening to the audience, is that people really are rooting for these guys; they want them to succeed, and you feel this kind of connection to the robots. Do you think that it is important in pushing the research forward to have people feel this kind of connection to these robots? I think, as you saw, there's an anthropomorphic aspect to this — trying to understand a robot is a little bit like trying to understand yourself. Right? 
And I think there is a connection in that sense: you're trying to understand what the decision-making process behind these things is, or the control. And I do think that what we call human-robot interaction is going to be an important aspect, because hopefully these things will someday be in our homes and play a big role in our daily lives. So I think there is something to be said for trying to make the robots so that humans can understand them better. And I would imagine that there's a pro and a con to the fact that we are all familiar with humans playing soccer, and the robots clearly are not up to the human standard — because if you compare them to, say, the Brazilian national team, it's kind of an absurd display, and you don't really appreciate how amazing it is that the robots can do what they do. So do you think that there actually could be a negative if we focus too much on the goal of having them be like humans? I think the interesting thing — the question that drives my research — is that we think of machines as being better than us at a lot of things, like chess and Jeopardy. But here we see that humans are still much better than machines at something that we consider natural, right? Just running around on the field, following a ball and kicking a ball — that is still a very difficult task. I think it says something about the challenge of trying to build machines that emulate what we can do as humans: there are certain things that take you 20 years of school to learn that are easy for a machine to do, but things that a five-year-old can do that machines still can't. And I think there's a little bit of an irony there in terms of what machines are good at versus what humans are good at. And you alluded to this a little bit before — the huge progress that's been made from 2003 to today. 
Can you talk specifically about some of the progress that has been made? So definitely on the hardware side there's been a lot of progress. For instance, the processors that you enjoy in your cell phones and your laptops have gotten much faster. The same algorithms that took minutes to run 10, 15 years ago now take only seconds. So that's definitely one big area of progress: Moore's Law. Two, the ubiquity of sensors and actuators, as well as the computation, being so cheap, right? The price of these robots has come down dramatically. So the fact that we can now buy a lot of these very easily for our students to work with is a huge advantage. But the main thing still missing is a deep understanding of intelligence — how to put that into a machine. And in these competitions, one of the really neat things is that you're competing against teams from all over the world. In 2013, the final match was the US versus Iran, which is rare in sports. Can you talk about what some of the other countries have been doing and your encounters with other teams that have been working on this project? Yeah, it's very interesting, because, you know, just like the Olympics, something like this brings together students from around the world. You see all sorts of cultures, all kinds of approaches to making a competitive team. And then you have the chance to talk and to interact with these different teams. So, you know, being able to talk to the Iranians, talking to the Israelis, trying to see if they would actually talk to each other — things like that — talking to the Chinese versus talking to, say, some of the European or South American or Australian teams. I think that's very eye-opening for a lot of the students. 
And from the beginning, the RoboCup has been a proxy for research into all sorts of different matters. It's not really about soccer; it's just a way to push research forward, right? And so, when you've encountered other teams, other countries, what are some of the more interesting things that are happening around the world? So, yeah, as you say, it's not just the soccer domain. It's really about thinking through what artificial intelligence and robotics can do in a general context. And so in RoboCup, not only the soccer — teams have been working on search-and-rescue types of robots, and there's a competition called RoboCup@Home. This is the idea of having some of these robots try to do tasks that would be helpful in the house, right? Loading your dishwasher, finding your keys for you — things like that they have also focused on. So it's much broader than just the soccer. When people watch the competition for loading a dishwasher, do they get as invested in the robot doing well? Is it as funny if they fall over while loading the dishes? The problem is that the robots are still very slow at doing these tasks, so the tension that you see in a soccer match doesn't quite translate to loading the dishwasher, I think. So I think the sports aspect definitely makes this appealing for students and the younger folks who are getting involved. So, in the sports domain, one thing that's come up a lot recently in a bunch of different leagues is umpiring and instant replay systems, and that is one application of essentially robots where you don't have the actuation — you have the perception and maybe the cognition. Do you see that as something that's realistic, a robot umpire or referee, whether it's for soccer, for baseball, for tennis? Is that something that we could see maybe before 2050? 
So, I think you're starting to see some of that, right? There is some technology now — it's been controversial — for detecting goals. Right, in the World Cup this time around, yeah. They're going to use this technology now to detect the goal rather than having a human do it. But take calling offsides in a soccer match. That's a very complicated decision, right? Because you also have to know about intent and things like that. And that's something that I think the machines have a very difficult time with. Right. And so, in tennis, for example, when it's just ball and line, that seems like an easier challenge for a robot than, for example, football, where it might be ball, ground, hand, sideline. Or, like, unnecessary roughness, right? What is the intent of the defender, or of someone doing something? That's a very subjective decision that I think a machine would need a lot of understanding of the world to be able to make. And that's still lacking. So I think, as you say, some of the simple decisions — where you just have a ball and a line and you have a sensor like they use in tennis, with the camera going at a thousand frames per second — this is doable. But things where you have to understand what the team strategy is versus what the player's intent is — that is still something we don't understand how to do very well. And as far as the RoboCup — let's throw 2050 out the window for a second — what do you hope to see in, you know, 2015, 2020? If we come to see a demonstration, how will these robots have improved? So, I think one of the challenges is actually to go outside — for instance, having rough terrain. As you saw, walking around with two feet is much easier to do on flat, level ground. Once you go to something where you have a little bit of rocks, or uneven grass, that makes it a lot harder for these legged robots to move around. 
Also, as you saw, lighting is important. A lot of these robots right now rely on vision, and if the sun goes behind a cloud, that's a very difficult situation for a machine to deal with. So maybe by the time we get to 2020, we'll have some improvement in those abilities.

And we saw a bunch of stuff today that I would characterize as funny. People seem to get a kick out of it. What is the funniest thing that has happened in one of these competitions? Has there ever been a case where all the robots decided they didn't want to play soccer and just started to play a different game, and then started attacking people?

Yeah, there have been some matches, among the weaker teams, where all the robots have left the field, and so you just see the ball sitting there. I've seen a match where a robot, after making a kick, fell over and decapitated itself, and the head rolled across the field. That was a little bit spooky. So yeah, there have been many things.

Did the other robots start kicking the head?

It wasn't painted the right color, so it was okay.

With some of the goalkeeping from that Japanese robot, I think that goalkeeper could actually start for England this year. So I think we're making some good progress. Should we take some questions from the crowd now? All right, we've got a microphone there.

Hi, yeah, a great demonstration, thank you. I wonder if this makes us rethink what intelligence really means. These machines are having such a hard time with motor skills, whereas they can more easily learn things that require factual information, say, or that require understanding the rules of a game like chess. Does that make us reconsider how much intelligence motor skills really take? And maybe the animal kingdom should be more respected in that sense too, because animals are applying some tremendous skills when it comes to predators and everything else out there.
Does it make us reexamine the whole idea of intelligence?

Yeah, I think so. Some of the simple things we take for granted, not just in the motor control domain but in the perception domain: when we go outside and the sun comes out, we don't get blinded. We can still see balls and goals and lines, and this is still difficult for a robot to do. So I would say that in perception as well as motor control, we are not, as you say, as good as the animal kingdom. And there's a whole field in robotics called bio-inspiration, where we try to learn from animals and build that into the machines. But we still don't understand ourselves, and it shows that we don't understand ourselves and animals as well as we should.

And as you were saying, I think one way to look at it is that the robots have certain weaknesses, but you can also look at it as an appreciation for everything that people can do, things we don't think about, what a miracle it is that we can play sports at the level that we do.

Yeah, when you try to replicate what humans and animals do, you see very clearly the limitations of our technology.

A lot of what we saw today was robots trying to be as human-like as possible, walking on two legs and balancing when they kick the ball. But if your only goal was to build a team that was as good as possible at rolling a ball into a goal, would it look like people at all?

That's a good question. So there is a league in RoboCup which doesn't have the restrictions that you saw, where you don't have to use legs. And so they have machines on wheels, and for the kicking mechanism, one team a couple of years ago would basically grab the ball and use a pneumatically propelled kicker. So they basically had a cannon that shot the ball out at 50 miles per hour.
So the way it played soccer was basically that one of these machines would come grab the ball, and it was just like a tank. It would have a turret that turned toward the goal, line up a shot, and the ball would come shooting out. And so it did very well at soccer, but it's not as interesting in terms of the dynamics of the game, as you saw, when you have a simple machine with what we call a very engineered solution to the problem.

Yes, sir. Quick question. We saw the individual robots. When you're talking about a team, do you have ways that they communicate with each other? How do they do that, and can you talk about the challenges of doing that?

Sure. So the issue here is that we don't have centralized computation. This is something called distributed computation. Just like when they talk about cloud computing, where there's a whole bunch of agents and things distributed in the cloud, it's the same way with our robots. There's no one absolute king deciding everything for all the other robots. So what they have to do is essentially negotiate with each other. For instance, when two robots see the ball, they send messages to each other saying, I think I'm closer, let me go get it; no, you're farther away, let me go get it. And there's a protocol that they use to make that decision. This is complicated to do, and sometimes you see a problem where one says I'll get it, and the other guy says I'll get it, and the other guy says no, I'll get it, and you have this deadlock where they're fighting each other for the ball.
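[Editor's note: the claim-and-yield negotiation described here can be sketched in a few lines. This is a hypothetical illustration, not Team DARwIn's actual code; the function name and the tolerance value are invented for the example. One common way to break the deadlock Dan describes is a deterministic tie-break, such as lowest robot ID wins, so that every robot evaluating the same shared messages reaches the same answer.]

```python
# Minimal sketch of a distributed "who gets the ball" negotiation.
# Each robot broadcasts its estimated distance to the ball; every robot
# then runs this same rule on the shared claims, so they all agree on
# one chaser without any central "king" deciding for them.

def choose_chaser(claims, tolerance=0.1):
    """claims: dict mapping robot_id -> estimated distance to the ball (m).
    Returns the id of the robot that should chase the ball."""
    # The nominal winner is the closest robot (ties on distance broken by id).
    best_id = min(claims, key=lambda rid: (claims[rid], rid))
    best_dist = claims[best_id]
    # Distance estimates are noisy, so robots within `tolerance` of the best
    # are treated as tied; the lowest id among them wins deterministically.
    # This prevents the "I'll get it / no, I'll get it" deadlock.
    tied = [rid for rid, d in claims.items() if d - best_dist <= tolerance]
    return min(tied)

# Robot 1 and robot 2 both think they're closest; the tie-break settles it.
chaser = choose_chaser({1: 2.40, 2: 2.45, 3: 4.10})  # -> 1
```

Because the rule is a pure function of the broadcast claims, no back-and-forth arguing is needed: agreement only breaks down if messages are lost, which is exactly the wireless failure mode discussed next.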
So this is what happens with the communications. Right now we're using wireless communications, but in the past we've also had problems where the wireless environment, say the Wi-Fi, doesn't work very well, and then they completely can't talk to each other. So in the past we've actually used acoustic communications: with the Sony dogs, they would actually bark to each other, and that was a backup way of overcoming the wireless failure. They would signal to each other acoustically, just like human players do when they say, I'm getting the ball, you should go get the ball.

Sort of to combine the last two questions: it's interesting to think about what it might be like if the action and perception were both at human level. If we get to that point, and this is just a thought experiment, could robots teach us how to play soccer in a better, more strategic way than we play as humans? Playing off of that question, maybe they won't beat us with better force, or by being able to see better, but by actually being able to outthink us on the field.

So I think, yeah, the robots would definitely be better in terms of knowing exactly, say, where they are, and maybe they would be able to coordinate their plays a little bit better. But the creativity that people think is important in soccer, I think that's something where humans would still have an advantage. Because let's say we do design a robot team, and we have some number of set plays. By analyzing those, a human could actually figure them out and work out how to counteract that set of plays. And so now there's going to be what we call a game-theoretic formulation, one adversary versus another, and the humans still have the advantage in that setting. And then the humans would save the world by winning that game.
Well, that's in America. In Japanese culture, the robots save the world, right? So it's a big cultural difference.

I was going to ask about the size of the robots, because every time I see a pretty complex demonstration, it's on a very small scale. I don't know if that's because of the cost, or if it's more difficult to make them do complex tasks on a larger scale. With unlimited resources, could you make six-foot-tall robots doing the same thing?

So we are trying to do some larger-scale robots. As you saw, the one that was driving the vehicle walks, and it's about five feet tall. But if you think about it, it's harder to do, because if you make a robot twice as tall, it actually weighs eight times as much, right? So the weight goes up by a lot, and the torque, the strength of your actuators, doesn't go up proportionally. Making things bigger is actually much more difficult: your motors are not as strong relative to your weight, and the ability to balance yourself becomes much harder. That's actually why in the animal kingdom you don't see many things walking around that are bigger than an elephant, right? It's very hard to scale things very large.

Yeah, and in the animal kingdom, I think with two legs we're kind of the optimal size; maybe some smaller, I know monkeys can do it too.

This is basically a question of where the bottlenecks are. If you could increase processing power per unit by a factor of 10 at this juncture, would it make any difference?
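[Editor's note: Dan's square-cube argument, that doubling a robot's height multiplies its weight by eight while actuator strength lags behind, can be made concrete with a quick sketch. The exponents below are idealized geometric scaling, not measurements from any particular robot.]

```python
# Idealized square-cube scaling for a geometrically scaled-up robot.
# Mass grows with volume (scale**3), but the torque needed to hold a limb
# against gravity grows like weight times lever-arm length (scale**4),
# while a geometrically scaled actuator only gains roughly scale**3.
# The strength margin therefore shrinks like 1/scale as the robot grows.

def scale_robot(scale):
    mass = scale ** 3                 # weight ~ volume
    required_torque = scale ** 4      # weight (L^3) times lever arm (L)
    available_torque = scale ** 3     # motor torque ~ motor volume (idealized)
    margin = available_torque / required_torque
    return mass, margin

mass, margin = scale_robot(2)  # twice as tall
# mass == 8: eight times as heavy, as Dan says; margin == 0.5: the motors
# are now only half as strong relative to what balancing demands.
```

The same arithmetic explains the elephant remark: at large enough scale the margin drops below one and geometrically similar limbs can no longer support the body, which is why very large animals (and robots) need disproportionately thick limbs or a different design entirely.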
That's a good question. So there are some challenges where you try to do some of these tasks but slow them down, so you can maybe send all your sensor information off to a supercomputer in the cloud somewhere and try to process it there. But I still think we don't understand the fundamental ways of doing perception. We don't have computers that can recognize objects in the world as well as humans do, even though there's been progress on this. We don't have computers that can do acoustic recognition as well as we do in terms of hearing; again, we've made more progress on speech, but the noise and uncertainty in the environment are still a big challenge. Even if you gave it a lot more processing time, a lot more computers, I still think we don't fundamentally know how to deal with that uncertainty in the environment.

And branching off from that: what if you got rid of the opponent, and the goal was just to do, for example, a drill where the robots pass the ball back and forth to each other? Would that be easier to accomplish? Is the fact that there is an opponent one of the major challenges?

Yeah, so definitely having an opponent makes it more difficult, because you have to anticipate what the opponent does. But even the basic question, as you say, of just doing a skills challenge is still very difficult. One of the tests I always ask my students is, how well do you do against no opponent, right? And sometimes, if they're not doing very well, even with an empty goal they still can't score. So the basic skills are still challenging, but once you add the opponent, it gets even more daunting.

All right, thank you so much, Dan. That was great.

Thank you.