Welcome, everyone, to what I think is going to be one of the more insightful, informative, and I think inspirational panels of the entire forum, and that is because today we're going to talk about not just the future of human-robot interaction, but some amazing things and really inspiring research that's happening in the present. That's because I am seated next to one of the world's foremost experts and researchers on human-robot interaction, Professor Henny Admoni, who is at Carnegie Mellon University and is doing some absolutely mind-blowing fundamental research. So before we hear from her, I'd actually like to ask her boss to come on up. My boss's boss. Your boss's boss's boss. He is a computer scientist and the president of one of the world's foremost institutions, Carnegie Mellon University, Farnam Jahanian. If you would please come up. Thank you very much. Thank you. Well, thank you very much, Amy, for that introduction. Actually, to be honest, I work for Henny. She doesn't work for me. That's the way it works. Good afternoon, everyone, and welcome. I am delighted to have the opportunity to introduce my colleague, CMU faculty member Henny Admoni, to you this afternoon. As everyone knows all too well, we are at the center of a societal and economic transformation that's catalyzed by automation, artificial intelligence, and unprecedented access to data. These advances are transforming every sector of our economy, from healthcare to finance, to transportation, to mobility, to manufacturing, and yes, even education. In fact, as the lines between the virtual and the physical world are becoming more blurry, and in some cases are disappearing entirely, digital devices, whether sensors, computational devices, communication devices, or control mechanisms, are being deeply integrated into environments all around us. 
Over the past several years, I know many of you have seen this, autonomous technologies have become more and more sophisticated and are now being used in self-driving cars, in robotic surgery, by search and rescue robots, and the list goes on and on. The rise in human-robot interaction is precisely why the work of Professor Henny Admoni is so critical and so relevant today. Professor Admoni is a distinguished member of the faculty in the School of Computer Science at Carnegie Mellon University and is a rising star within, I told you I was going to embarrass you, and is a rising star within our school's Robotics Institute. She also leads the Human And Robot Partners Lab, which stands for HARP. Her research focuses on developing nuanced human-robot interactions by teaching robots to detect, interpret, and respond to human behavior. She leverages interdisciplinary knowledge that she has brought together from computer science, from robotics, and from cognitive psychology to build mathematically rigorous algorithms that are grounded in human behavior research. I should add, as all of you will see in a moment, that Henny's work aims to integrate technology into our lives in ways that enhance our humanity. Once again, I want to thank you all for being here. Please join me in extending a warm welcome to Henny Admoni. Thank you, Farnam, for that lovely introduction. Thank you all for coming today to talk about robotics and human-robot interaction when you could be listening to Angela Merkel talk about the European Union. So today I'd like to talk to you about robots and the role that robots play in our society. And I want to start with a story. So in October of last year, a robotics company rolled out a new product in Pittsburgh. This was a little robot, looks kind of like a cooler on wheels, that's designed to deliver food from a restaurant in town to college students on campus. 
And this robot, the interesting thing about it is that it was designed to autonomously navigate the world. So on its own, it would drive to the restaurant to pick up the food, and then roll down the sidewalk to a college dormitory or an apartment and deliver that food to a student. Now this is really interesting technology, but within a few days of the deployment, a tweet appeared, as they do. This tweet was from a doctoral student at the University of Pittsburgh named Emily Ackerman. And Emily uses a powered wheelchair to get around. And as she explained, she encountered one of these robots as she was crossing one of our busy streets in Pittsburgh. Now the robot, by design, waits to cross the street. It doesn't cross the street when there are people coming at it. And it waits in kind of the center of that depression that slopes from the sidewalk down to the road, called the curb cut. You might be familiar with the curb cut if you've ever rolled a heavy suitcase or pushed a stroller or a pram. And Emily uses the curb cut to move her powered wheelchair off of the road and onto the sidewalk. Now, the day that Emily encountered the robot, it was waiting, as it was programmed to do, in the curb cut. And that meant that she didn't have enough space to drive her wheelchair onto the sidewalk without having to bump up over the curb, which can be quite painful for someone who uses a powered wheelchair. So Emily posted this account of interacting with this autonomous robot. And the fallout was very fast. Within hours, the University of Pittsburgh had frozen the beta test for this robot, and the company had pulled the robot and started to reassess how it was programmed and designed to do the street-crossing behavior. They've since reformulated some of the algorithms and maps in the area, and they've redeployed the robots into the space. 
But there are still questions about how these robots are gonna interact with people, and in particular the accessibility issues that having a robot on the sidewalk presents. So this kind of robot represents a turning point in the field of robotics. Until now, our most successful commercial robots looked much more like this. So these are robot arms that are in a Tesla factory, and they're designed to very, very quickly and efficiently and safely weld together vehicles. These robots work 24 hours a day. They do the task much more safely than a human welder would, which is great. But they're actually specifically designed not to work around humans. So I don't know, I want you all to look, who can see the human in this photo? Raise your hand when you finally spot the human in the photo. So there are four hands up, Amy's seen it, all right, a few more hands. So if you don't see it, I'll reveal it. You got it, you got it just before, yeah, it takes a while. Well, and it's because the human is the small part, and you can actually see they're behind the glass. They're being protected from the robots. The robots are really dangerous; they move really fast. They're not sensing their environment, looking for people. That's what we've been doing. And there's been a ton of success there; automation has really accelerated the pace of manufacturing. But I said we're at a turning point. The new kind of robotics, the new exciting research in this area, focuses on robots that do work with humans, that are in our worlds. And we're already starting to see some of these products and devices out in our spaces. So in Pittsburgh, we have autonomous cars driving around our streets. You can see they have the lidar spinning around on top. There are robots that do deliveries in hospitals, bringing medicine or linens to nurses' stations. There are robots that clean our houses. One of the most successful home-based commercial robots is the Roomba. 
And these robots need to deal with the complexity of our world. They're not in a factory. They don't have a very organized, structured, well-lit space. They need to deal with the fact that we can sometimes move furniture, or that there are people around, or there's clutter. But most importantly, they need to deal with the fact that there are humans in their environment. And humans present a really important challenge, both from a safety perspective, but also from efficiency and effectiveness. These new products are designed to work with and around people. And so in order to do their job, they have to understand the needs that people have. I'm in robotics because I am fascinated by people. And that might seem strange at first, until you realize that robots that are going to do a good job also need to understand people. So I'm fascinated by the way that people can collaborate and form teams and work together using sophisticated communication, both verbal and non-verbal. So people, when they're interacting with each other, can coordinate together. And we can use language to coordinate, for sure. But a lot of what we do involves our eye gaze, where we're looking, our gestures, our body language, our facial expression. There's a lot of non-verbal communication that happens when people are working together. So think about the last time you cooked with someone else in the kitchen, and how you coordinated not running into each other and dividing tasks and sharing tools, often without having to use language. My goal is to make robots capable of this kind of complex, non-verbal communication, so that when they work together with people, they can understand what people are trying to achieve and actually help people at the right time. I work in a lot of different contexts within robotics, but the area that gets me most excited is this field we call assistive robotics. And these are robots that help people who have some kind of impairment live more independent, higher-quality lives. 
So these assistive robots could be physically assistive robots that physically hand objects to people, that help people eat food or pick up a glass of water and take a drink. There are also socially assistive robots. These are robot tutors, robot coaches, robot therapy assistants. I'm not gonna talk about them as much today, but they're also an area of robotics that people in human-robot interaction are focusing on. I wanna say here that it's really important to understand that not every accessibility problem needs to be solved by a robot. There are many low-tech accessibility tools, like the white cane that someone who's blind or low-vision uses, that work really well. But as we start to build robotic technology, we start to find that we can have robots that help people in new ways and really increase people's capacity using these kinds of robotic technologies. And that's what I look at. I really am fascinated by how we can make this kind of robot that you see here intelligent, so that it can assist people by providing them what they want at the time that they want it. And one of the ways that we do that is by looking at how people's nonverbal behavior, and in particular how their eye gaze, indicates what task they're trying to achieve. If the robot knows what task you're trying to achieve, then it can take assistive action to help you. To do that, we bring people into the lab and we examine how they react, what their nonverbal behaviors are, when they're doing assistive tasks with a robot. So we brought people into a lab and asked them to spear pieces of food with a robot arm. And as you can see in the video that's playing, the person who's operating the robot using a joystick is giving us a lot of nonverbal behavior, right? There's a lot of indication of when she's concentrating, when she's pleased with the outcome, what her target is on the plate. And we can take advantage of this to have the robot respond. 
We can take advantage of this to have the robot respond at the right time. I'm gonna show you a video now of what it looks like from the first-person perspective of someone's eye gaze in this task. So here's the camera that the person is wearing, looking out. And this person's wearing an eye tracker, and the pink dot represents where the user is looking as they do this food-spearing task that you just saw. So in this case, you can see that mostly the eye gaze goes between the hand of the robot, which is what the person is controlling with their joystick, and the food that's on the plate, the target of their manipulation. By using this kind of information, this planning glance down to the target of the spearing action, we can actually have the robot predict where it needs to go and start moving in that direction, or at least start turning the fork in the right direction, to help the teleoperator, the user who's controlling the robot with a remote control, do the task more quickly. The interesting thing about eye gaze is that it doesn't just tell us what people are trying to do, like which piece of food the user wants. It also tells us when someone starts to have trouble with the task. So here, in this video, I'll keep playing it. You can see that at the beginning of the task, most of the eye gaze is between the robot hand and the food. But in a minute, this operator is gonna make a mistake. They're gonna drop the robot elbow below the level of the hand. And that makes it very hard to spear the food. And they know that this has happened. So right here, they know it. And their eye gaze patterns switch. They stop looking at the plate of food for the most part and start looking at that problematic elbow. Here's another view, different time; it didn't take much for people to have trouble operating this robot. And you can see that their eye gaze really focuses on parts of the robot that are problematic. 
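The prediction described here, watching where the gaze lands and committing to a target once the evidence is strong enough, can be sketched in a few lines of code. This is a minimal illustrative sketch, not the lab's actual system; the Gaussian weighting of fixations, the `sigma` spatial tolerance, and the confidence threshold are all assumptions made for the example.

```python
# Hypothetical sketch: inferring which food item a teleoperator intends
# to spear, from eye-tracker fixations near candidate targets.
import math

def gaze_target_probabilities(fixations, targets, sigma=30.0):
    """Score each candidate target by how much gaze evidence falls near it.

    fixations: list of (x, y) gaze points from the eye tracker
    targets:   dict name -> (x, y) pixel position of each food item
    sigma:     spatial tolerance in pixels (assumed value)
    """
    scores = {name: 0.0 for name in targets}
    for gx, gy in fixations:
        for name, (tx, ty) in targets.items():
            d2 = (gx - tx) ** 2 + (gy - ty) ** 2
            scores[name] += math.exp(-d2 / (2 * sigma ** 2))  # Gaussian weight
    total = sum(scores.values()) or 1.0
    return {name: s / total for name, s in scores.items()}

def predict_target(fixations, targets, threshold=0.6):
    """Return the predicted target once its probability clears a threshold,
    so the robot can start pre-orienting the fork; otherwise None."""
    probs = gaze_target_probabilities(fixations, targets)
    best = max(probs, key=probs.get)
    return best if probs[best] >= threshold else None
```

The threshold is what keeps the robot from acting on a single stray glance: it only starts turning the fork once fixations have clustered on one item.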
And this is amazing, because the human has not told us that there is a problem. They haven't timed out on the task. The robot is still in a valid position. But our system can now know that someone is struggling and needs it to kick in with a little bit more help. So my goal is to then have the robot actually do that assistance, reposition its elbow, and then give control back to the user. And that's what we're working on right now. Now, it's not just eye gaze that's informative in these kinds of assistive robotics tasks. We look at a whole host of nonverbal behaviors, including body posture and facial expression, using facial action units and key points on the face. We can monitor someone's pupil size and tell when they're under increased cognitive load in certain conditions. And because we're academics, we release all of this data to the public for free. So after this talk, look up my lab's website. You can find all of the data that you're seeing here. It's five hours of video. It's a two-terabyte download, so don't do it on slow internet; do it on a faster connection. But it's data that we're hoping people will be able to use to find new connections between how humans behave in a task and what the robot should actually do. My long-term goal is to make assistive robots that people can use every day, to do everyday tasks like roaming through the world or eating a social meal. And in order to do that, these robots that work very, very closely with people are going to have to understand what someone's trying to achieve, what kind of help they need, and when they actually need that help. And so my vision for robotics and human-robot interaction is that as we build these autonomous technologies, as we see more robots coming out into our world, we take the human into the equation early, and we build systems that understand human needs and human preferences. Thank you. Okay. That's a job well done. So, you are a professor at CMU. 
I'm a professor at NYU. You're a roboticist. I am a quantitative futurist. So a lot of what I do is think about long-term risk and opportunity, and try to model all of that out using data. And a lot of what I'm hearing about all these robots is that pretty soon they're going to be anthropomorphized and come and murder us in our sleep. After they've taken all the jobs. What you are telling me seems to be a little different. So my question to you, to start off with, is: how did this conversation get so off the rails? Because what you've shown me is a very much human-centered approach. And somehow that is not what we talk about. Any thoughts? Yeah, so many thoughts. Where things went wrong? All right, how many people have seen Star Trek in the audience? Yeah, so there's a lot of hands being raised, for those of you who can't see. How many of you have seen Big Hero 6? Okay, fewer hands, probably people with kids. There's a tremendous amount of media around robots. There have been robots in movies since 1927, since Metropolis. And as a society, as a global society, we have this vision of robots as anthropomorphic beings, where anthropomorphic means human-like. So they have two legs and two arms and a head, and they walk upright. And they are incredibly capable, and they understand our language, and they understand our social norms, and they live in our environments. Sometimes they don't follow our social norms. I think media helps us dream these big dreams, but the reality is really far from that. How so? Because I'll tell you, in the mid-1950s, Elektro the Moto-Man from Westinghouse, they had a walking, talking robot that also smoked cigarettes. People were very excited about this. Classy, yeah. So I guess, what do you mean? Like the fine motor articulation isn't there yet? Or tell me how the expectation is so off from reality. We are really good at doing a lot of things, right? We're great at walking. 
We're generally, we're great at picking up small objects, even if we're not sure how much they weigh. And we totally take those skills for granted. But when you think about a baby, a baby takes years to develop the motor coordination to tie their shoelaces. They take years to develop language skills. We're kind of expecting a lot out of robots. And the reality is, all of those things that we take for granted are very, very challenging problems. And when we're doing it with digital systems, with computers, we often didn't even know where to start. So from the 1960s, there's this kind of joke. It's a story that really drives home how idealistic I think the community was, right? So in 1966 at MIT, some researchers founded the MIT Summer Vision Project. And the goal of that was to take three undergraduate summer interns, and they were just going to solve computer vision. They were going to write a program to recognize objects in the world. And then, At a time when computers were like half the size of this stage. Exactly. And then they were going to go, you know, and then they were going to go solve the hard problems. Sure, of course. And if you know anything about computer science, you know that computer vision is one of our biggest fields still. We're still solving object recognition. In the last five or 10 years, we've seen this tremendous boom in object recognition capability. But this is still an open problem. What we take for granted, the fact, Why is it an open problem? Is it that the corpora aren't there yet? Is it that there's too much variability? Well, yeah. It's an open problem because doing object recognition from pixels, which is what a computer gets, is very different than the way we do object recognition. And we hadn't developed the algorithms that could map a pixel to a meaningful representation in the world. And why is that important as it relates to robotics? 
Right, so a robot, if it's going to hand you a glass of water, needs to know where that glass of water is in 3D space if it's going to reach out for it. And that's another challenge, right? So it's not just enough to see the object and say, this is a glass of water. It also needs to know the pose. It needs to know the position. It needs to be able to close the kinematic loop. So when I lift this glass of water, if it's heavier than I thought it was, I compensate with my muscles. If it's lighter than I thought it was, I might do that little lift. But I very quickly recover. And so building systems that are mechanically capable of very quickly recovering is also a challenge, and something that other people are working on in robotics. Especially in environments where there are other variables. Because I think I've had at least one cocktail made by a robotic bartender. Novel at first, absolutely terrifying midway through the process. Stuff flings around. And I think, again, that kind of works well in a conference setting where you've got a fixed set of variables. But in the real world, I mean, the example that you gave, what was the name of that bot, the delivery robot? I don't remember the company. But there's a circumstance where you can't sort of game out in advance every single scenario, which brings me to my next question. Can we get techie? Is that OK? I mean, yeah. With permission? All right. So let's talk about the fact that humans are capricious and unpredictable. Yeah. OK. So if it's the case that we're capricious and unpredictable, how do you develop, what's the secret sauce? The secret sauce. How are you developing assistive robots knowing that we are so unpredictable? What do you do? We are unpredictable. But we are sometimes regular. There are sometimes things that we do that are consistent. 
So, for example, psychologists have known for years that before we reach for an object with our hands, our eyes go to that object. About 500 milliseconds before I reach for this glass of water, if I were not talking about it and thinking about it, my eyes would go to that glass of water and then I would start moving my hand. And that kind of regularity, that nugget of information from cognitive science, we can apply in robotics to predict: OK, so someone is about to reach for that object; I shouldn't drive my robot arm right through that workspace at that moment. So we are capricious, but at the same time, a lot of our physical systems follow patterns. And what I do is try to take advantage of those patterns. The other thing we do is we personalize. So we use machine learning to collect data as we're interacting with people and modify the algorithms for that particular person. And this is especially important in socially assistive robotics, because the way we're capricious is often in our personalities and our emotions, rather than our mechanics and our physical interactions. So you're in a world-famous lab. You can't possibly process millions of people. So how do you get enough data, especially since in the United States our data laws are pretty lax? It's a little harder outside of the US, less hard in some places around the world. So how are you going to get all the data that you need to build this model? That's a fantastic question. And I'm actually delighted that you asked it. So the reason I made a point of talking about our open data is that we need people to share data on robotics and human-robot interaction. It's so hard to get quality data on human-robot interaction right now, because it's so hard to develop a robot that works with humans in a consistent way. What I showed you was in the lab. In the lab is a great first step. 
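That 500-millisecond regularity can be used directly as a simple safety heuristic. The sketch below is a hypothetical illustration of the idea, assuming the robot receives a stream of (timestamp, object) gaze fixations from an eye tracker; the event format, the zone mapping, and the window constant are invented for the example.

```python
# Hypothetical sketch of the "eyes lead the hand" regularity: gaze reaches
# an object roughly 500 ms before the hand does, so a recent fixation on an
# object means the robot arm should yield the space around it.

LEAD_TIME_S = 0.5  # eyes precede the reach by about 500 ms (from the talk)

def reach_imminent(gaze_events, obj, now, lead=LEAD_TIME_S):
    """gaze_events: list of (timestamp, object_name) fixations.
    True if the user fixated `obj` within the last `lead` seconds,
    i.e. a reach toward it is likely starting."""
    return any(name == obj and 0.0 <= now - t <= lead
               for t, name in gaze_events)

def safe_waypoints(waypoints, gaze_events, now, object_zones):
    """Filter a robot arm's planned waypoints, dropping any that pass
    through the zone of an object the user is about to reach for.
    object_zones: dict object_name -> set of waypoint labels near it."""
    blocked = set()
    for obj, zone in object_zones.items():
        if reach_imminent(gaze_events, obj, now):
            blocked |= zone
    return [w for w in waypoints if w not in blocked]
```

In a real system the planner would re-route around the blocked region rather than just dropping waypoints, but the core idea is the same: a cheap, well-established human regularity becomes a predictive signal for the robot.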
What we want to do eventually is work with our user population that actually has a motor impairment, work with people out in the world, not in a lab setting, maybe in a restaurant setting. But to do that, I think our efforts as academics get amplified when we share data. And I think that's actually a really important component of being an academic, that that data is open and available for anyone to use. So who are you sharing with now? Are you working with other academics? Yeah. Yeah. So we posted this data. We have a paper under review describing it. I know there are some academics at Stanford that have downloaded it and sent us interesting questions about it. But right now it's kind of slow. So we're trying to get the word out and have more people use it. I've got one more question, and then I'll open it up. Actually, I've got two more questions, and then I'll open it up. One is a follow-up; the other one is important, so I'll ask it. So here's the follow-up. You're doing fundamental research. You're doing incredibly important research. Is Amazon going to hamper this research? Inasmuch as our big tech companies are collecting data, they're mining, refining, productizing, optimizing our data, and some people are suddenly not feeling so great about that. Rumblings about new policy and regulations seem to be bubbling up here. Could something that is happening adjacent to you impact your ability to develop all of this research and work? That's a great question. I think, first of all, I'll actually make a plug for Amazon. So I did my postdoc with a professor who is now leading an Amazon robotics lab. Scout? Is that the Scout project up in wherever that is? No, this is a different one. Yeah, so he's based out of Seattle. But I think that there's the potential for companies and universities to work very closely together. Diplomatically said. I'm talking about real people. 
I mean, the future of robotics really, at some point, does hinge on people's willingness to be observed. Yeah, oh, yeah. And as we start to see these systems in our world, I mean, we're talking about robots that are in our physical space. There was a huge blow-up a few years ago when iRobot introduced a camera on their Roomba. And they talked about recording maps of the home. And someone slipped up and said something that they didn't mean about uploading those maps to the cloud. And there was a huge scare around privacy, because you're talking about people's homes, which are private. And the idea that that data could be shared and used by a company and sold to third parties is really scary. So I think as we develop these robot systems, we need to really build strong policy around where the data gets used and how the data gets used. We do need data for robotics. We need data to train our systems. We need data to record the effects of robots. People always surprise us when they sit down in front of a robot. And if we weren't able to put people in front of a robot, we wouldn't be able to develop the robots that we do. But it's really important to respect the data. But you do need to think this through in advance, versus trying to regulate after the fact, right? 100%. And to involve technologists in the conversation, because they understand how we collect the data, what it looks like, and how it can be used. All right. Last question for me has to do with Roomba, and robots in general. So assistive robots, human-centered robots, there have been commercial products, right? There's Jibo and several others. And for the most part, they've not been commercial successes. So does that have any bearing on the kind of work that you're doing? At some point, do we need to welcome the robots into our homes in order for you to? Yeah. No, actually, it's a really great time for social robotics. 
What we've seen, so there have been a number of robotics companies that have proposed robots for the home, like Jibo, that have not delivered in a commercial way. And the way I see them, these are the Palm Pilots of robotics, of home robots, right? Palm Pilots worked great. Let's not denigrate the Palm Pilot. Yeah. But who has a Palm Pilot today? Well? Nobody. Nobody raised their hand. But who has an iPhone today? You know, or an Android phone, right? So we are seeing the real beginnings of really viable commercial products. There are definitely going to be missteps and false starts along the way. I think as we understand humans better and human-robot interaction better, we'll be able to build better algorithms. Our hardware will also become cheaper. And our robots will be able to do more. And I'm really hopeful that we'll see a lot more robotics embedded in our society in helpful ways. Right. All right, we have time for one question. So we have, right there. Thanks. So if you could tell everybody who you are. I'm Erik Brynjolfsson. I'm a professor at MIT. And so one place we're seeing more and more robots is in distribution centers. If you go to an Amazon distribution center, you see these amazing Kiva robots that bring the shelves to the people. But the humans are doing the picking and packing. They're reaching in, grabbing your USB cord and putting it in the box, because robots aren't dexterous enough. They're trying desperately to make machines that are as dexterous as humans. What's your sense of where we are on that? Rodney Brooks thinks we're very far from that. Other people, Vicarious, think it's imminent. What's your take? Yeah, that's a great question. I come from two academic lineages. In one of them, Rodney Brooks is my PhD advisor's PhD advisor. And the other is my postdoc advisor, Srinivasa, who's working on dexterous robotics. So I'm actually at the confluence of those two fields. 
My sense is we should let people do what they're good at, and we should let robots do what they're good at. So people are very good at dexterous manipulation. There are many researchers working on dexterous manipulation right now. It's super important. But robots aren't quite as good as we are. People are also very good at empathy and social interaction and understanding. And robots are not. And I don't think robots should be. I think we should be developing robots that are amplifying human effort, that are taking over things that humans might not have to do, and allowing people to be more human. So that's kind of my take. So don't move yet. We are officially at time. However, this is an incredibly lively conversation. You have an incredible opportunity to talk to one of the world's foremost researchers in this area. And we've achieved special clearance to keep going. So if you would like to leave, please stand up and gather your things. I'm not gonna hold you hostage. If you would like to stay, we're gonna keep the conversation going until we officially get kicked out. Sound good? So please wait for the microphone, because I think the live stream is continuing to roll. And as soon as it gets to you. There we go. Thank you. My name is Dr. Sanjeev Kanoria. I'm chairman of Advinia HealthCare. And I've been involved in a very interesting robotics project for the last five years. But my first question is, how many of you have heard about Modern Times? How many of you have seen Modern Times? How many of these robots have I seen? No, how many of you have seen the movie Modern Times? By Charlie Chaplin. That was the first movie where they showed assistive robots. So there's a robot actually feeding Charlie Chaplin, and it goes crazy and starts to put the food in every other orifice. No. It's quite funny. But it did smoke. Sorry? But it did smoke cigarettes. So anyway, coming to the point. 
So we have been involved, and I wanted to get your comment on this: whether you've heard of the CARESSES project? Say it again. The CARESSES project? I haven't, so tell me what it is. So the CARESSES project is a European Union-funded project. We got about a 2.5 million euro grant from the European Union, and it spanned four countries: the University of Bedfordshire, our company was an industry partner, the University of Genoa, the University of Nagoya, and SoftBank Robotics. And we have created actually the first artificial intelligence software of this kind, which is about to be commercialized, and we have done commercial trials in our dementia care centers, because we own about 3,000 beds in the United Kingdom. And it's amazing if you see how these robots are interacting with humans through artificial intelligence. So you can put it into a dog robot, and the dog can start to have a conversation with a dementia patient in an open-ended manner. To the extent that if a dementia patient wants to change, if it understands that the person needs privacy, it will just turn its head away and say, okay, you change. I would suggest that you look into that project, because it's quite advanced and it might give you some leads. So that, going back to, thank you so much for that. So here's a good question for you. Yeah. You had mentioned whether or not robots should have empathy. Yeah. What do you think, especially in cases where we're dealing with people who have dementia or other cognitive impairments, about introducing a robot into that mix? What are your thoughts on that? Great, yeah. And I'm glad you brought up socially assistive robotics, because I didn't get a chance to touch on it in my talk, but it's another area of human-robot interaction that's incredibly important. One of the big areas within that that people have looked at is robots for care for older adults, and in particular robots for people who have dementia. 
People have looked at a variety of different kinds of robots, often in pet form, often animal-like. There's the harp seal robot, Paro. There are robot dogs, and more abstract robots that interact with people, and in many cases they seem to be able to soothe someone who has dementia. So it's interesting, right? It's kind of like having a pet or cuddling a stuffed animal, but it's animate. It's better than a stuffed animal in that it can coo when you pet it and it can talk to you, but it's better than a pet in that you don't have to clean up after it. I think those are incredibly interesting robots. I also think there are a lot of ethical questions around using robots with patients who might not understand that this isn't a real social interaction. I don't think that's a reason not to develop the technology; the technology is incredibly important. It's a reason to have the conversation now. A reason to have the conversation rather than regulate after the fact? Exactly, to understand where we fall as a community. I actually had a conversation on Tuesday with Jodi Halpern, who's a bioethicist from Berkeley, around this particular topic, robots for elder care. She called me a tech pragmatist, which I'm really flattered by. Basically, I think tech should be solving human needs, and we should have just enough tech to do that, but tech is not always the end-all solution.

All right, now it's time for one last question and then one last comment. I think I saw the hand in the back first. But I'll stay after.

Hi, my name is Adrian. I used to work for SoftBank Robotics, and I now work for ABB Robotics, so very different: I went from social robots to industrial robots. I'd love to keep the conversation going on the tech and whether it's ready or not, but just for the sake of moving the conversation along, I'd love to hear your thoughts on acceptance of robots.
From what I've seen, we definitely don't want industrial robots in our homes; we're not really ready for that. I used to work on humanoid robots, and there are a lot of ethical and design questions around wanting to have those around us too. So I'm just curious to hear your thoughts on acceptance of robots in everyday environments.

Yeah. When you tell people you work in robotics, they ask about Terminator, and they ask when the robots are going to take our jobs. I think we have this vision of robots as scary and mostly humanoid: when you think of a robot, you mostly think of a human-like robot. But the reality is far from that. Robots take all sorts of different shapes, and when you tell people what a robot could do for them, they get pretty excited about it. A robot could help your elderly parent age at home longer, or could help someone with a severe upper motor impairment eat food independently so they don't have to be fed by a caregiver; now that caregiver can also have the meal, and they can have a social interaction. So I think there's actually tremendous capacity for good, but there's a marketing issue in some ways, because we have these expectations of robots. As we see more robots out in the world, I think that hopefully will change.

I get the last question because I'm the moderator, and this is it. Now that everybody's encouraged and excited to learn more, where do they go to do that? What is this amazing website that you told us about?

So cmu.edu will get you to the university website. The Robotics Institute at Carnegie Mellon, if you count all of the people in it, students, staff, and faculty, has a thousand people, so there's a lot of robotics happening. My personal lab website is a subdomain of that: harp.ri.cmu.edu will get you to my lab website, where you can see the specific human-robot interaction work that we're doing.
And the videos? People can see the videos that you showed them? Yes, you can see the videos and download the data. Okay. President of Carnegie Mellon University, Farnam Jahanian, I'd like to thank you so much again for your opening remarks. Henny, I think you've changed everybody's hearts and minds on the future of robotics; I hope so, and I know you've changed mine. And I'd like to thank you all for being such a wonderful, attentive audience with great questions. Thank you.