So I'm going to turn this around. You look at this debate about risks versus opportunities, and it's like a game of Rock 'Em Sock 'Em Robots. Why is it all risk or all opportunity? I want to flip it on its head.

When I looked at this, I was really struck by the fever pitch of the panic around AI and robotics, especially in the last nine months or a year. One of these articles says: "I as a reporter have covered the aberrant, autonomous, amoral aberrations that are these machines and have not yet managed to convince the public that they eat our babies for fuel." Now, when I read something like that, I have to think that it's not really about the technology.

So I started looking into the history of technology. It turns out that moral panics, panics about a technology that seem larger than they could possibly be, reflect panics about who we are and what our goals are. At every stage of the Industrial Revolution, there has been a fear about what technology was going to bring. The alphabet was going to wipe our memories clean. The telegraph was going to prevent us from learning how to write in full sentences. And the fear is greatest when those technologies resemble us. These are wonderful automata that were created in the 18th century, and this is a short story written about them in 1817 that demonstrates the fear that we would no longer be able to tell whether our loved ones were human or machine.

So perhaps what we're really scared of is who we have become, what our goals are, and whether we are going to develop machines like our better selves or our worst selves. And here we have a choice.

To talk about this further, I'm going to rely on perhaps the most important theory in cognitive science: the theory of distributed cognition. It says that thinking does not inhabit just the mind of one person; it exists across a group, whether a group of people or of people and machines. And it's not just thinking.
Imagine that robots can be interdependent with people for physical strength. Imagine that rather than just prizing autonomy, we prize a robot ecosystem that is interdependent with the people it exists with. Now let's take that a step further: imagine that this interdependence also extends to the social.

This young girl was just diagnosed with high-functioning autism. She does not have the ability to form the social relationships that allow her to learn in a classroom. This robotic playmate has taught her those skills, and our research has shown that they carry over to when she's playing with other children.

To do that, we need to study child-child interaction. Here are two children working on a science task in a typical third-grade ecology class. Now, they're doing really well at the social side, but their science is a little weak. If we build those social abilities into a robot, and also build in the ability to scaffold science, we've been able to show that children will reason better about science if they can form a social link with the robot.

Of course, technologies like these require a different kind of AI. They require the ability to engage in social reasoning and social intention recognition, something AI really hasn't touched very much. This is a technology where these girls are able to interact with robotic characters that know algebra. Interestingly, a robot like this one has the ability to automatically classify the level of rapport between the child and the robot. We've been able to show that, first of all, it engages in behaviors you might not expect, like teasing and insulting, but also that it engages children in better learning.

I've been building robots like these for the last two decades, and in each case I begin with a year of ethnography around the people and the task I'm hoping to build robots for.
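As an aside for readers wondering what "automatically classifying rapport" might look like in practice: the talk doesn't describe the actual system, but a minimal sketch could score a window of interaction on a few observable social signals and bin the result. Everything here, the feature names, the weights, and the thresholds, is hypothetical and chosen only for illustration.

```python
# Hypothetical sketch of rapport classification, NOT the system from the talk.
# Features, weights, and thresholds are invented for illustration only.
from dataclasses import dataclass


@dataclass
class InteractionWindow:
    smile_rate: float        # smiles per minute (hypothetical feature)
    mutual_gaze: float       # fraction of the window spent in mutual gaze, 0..1
    verbal_alignment: float  # lexical overlap with the partner's speech, 0..1


def rapport_score(w: InteractionWindow) -> float:
    """Weighted sum of normalized features; weights are made up."""
    smile = min(w.smile_rate / 5.0, 1.0)  # cap at 5 smiles/min
    return 0.3 * smile + 0.4 * w.mutual_gaze + 0.3 * w.verbal_alignment


def classify_rapport(w: InteractionWindow) -> str:
    """Bin the score into low / medium / high rapport."""
    s = rapport_score(w)
    if s < 0.33:
        return "low"
    if s < 0.66:
        return "medium"
    return "high"
```

A real system would learn such a mapping from annotated child-robot interaction data rather than hand-set weights, but the structure, observable signals in, a rapport level out that the robot can act on (for example, by teasing only at high rapport), is the core idea.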
It's understanding people, their goals, their desires, and their best sides that's going to allow us to make the right choices and take the right risks. The choice is, after all, ours. If we choose to understand who we wish to be and build robots in that image, then it is the right kind of risk, and the future will be bright. Thank you.