Hi, I'm Ellen Lupton. I'm senior curator of contemporary design here at Cooper Hewitt, Smithsonian Design Museum. And I'm so thrilled to welcome you to this awesomely nerdy program. This is going to be, like, better than any TED talk ever. These people are so wildly smart and cool and creative. So very exciting. I curated the exhibition Beautiful Users and the Process Lab on the first floor here at Cooper Hewitt, which are two projects devoted to the design process and how designers prototype and test and imagine and invent, and specifically how the user is central to what designers do and defines us as a field. And that user has changed over the last 50 years from a kind of ideal type, a norm, Henry Dreyfuss's Joe and Josephine, to something else: to an active collaborator in the design process, and to a diverse person with different abilities. And we're really celebrating that tonight with this very special and unusual program on robots, with a special emphasis on robots as emotional partners in our lives, robots as creatures and beings whom we can associate with and have emotional reactions to. Robots can fight our wars. They can clean our houses. They can take care of children. And all that might be shocking and appalling. And we'll talk about some of the controversies and the things it stirs up in people when you talk about robots being emotional. So I think you'll really like that.

So what we're going to do tonight is we have three of these amazing speakers. I'm going to introduce them. They will each come up and talk for about 15 minutes about their work and their research. And then we'll have some time for questions. We will close the event at 8 o'clock sharp. It's like a TV show; it will not run on and on. So I will try to keep it on schedule, so that you can get back out into the three-degree weather which awaits you.

So Carla Diana will be our first speaker. She has a BA in mechanical engineering from my alma mater, Cooper Union, where I actually met her when she was an undergrad. She went on to earn an MFA in design from Cranbrook. And today she is a designer, author, and educator whose projects include domestic robots, wearable devices, and interactive toys. She teaches in the University of Pennsylvania's Integrated Product Design program. And she maintains strategic alliances with many other schools that are very lucky to have her, including Georgia Institute of Technology's Socially Intelligent Machines Lab, go check that out, and the Visible Futures Lab here in New York City at SVA. And she has a long alliance with Smart Design; a project she did with them, the Neato vacuum cleaner, is on view upstairs in Beautiful Users, and she'll talk about it a little bit tonight.

Ayanna Howard is associate director of research for the Georgia Tech Institute for Robotics and Intelligent Machines. And she is the Motorola Foundation Professor in the School of Electrical and Computer Engineering, also at Georgia Tech. Pretty good. Her area of specialty is clinical robotics. Pretty interesting: really dealing with users and how users interface and interact with robots. She holds a PhD in electrical engineering from the University of Southern California. Her research focuses on humanized intelligence, which is the process of embedding autonomous systems with human cognitive ability. In 2013, she founded Zyrobotics, which creates educational technology products. She's worked for NASA, so she is a rocket scientist.
Matthew S. Johannes holds a PhD in mechanical engineering and materials science. His research areas include prosthetics, brain-computer interfaces, and robotic systems. Dr. Johannes is a project manager at the Johns Hopkins University Applied Physics Laboratory, where he is managing the Modular Prosthetic Limb project, which is on view upstairs in the exhibition. He's working on a project for the Navy that concerns unmanned ground vehicles, which is basically like the Google car with guns, right? And he is project manager of something else called Squad X at DARPA. And for all of us who live in the Baltimore-Washington region, we know that the military is really big down there. Anyway, he's the principal investigator on many projects at Hopkins. And he is co-founder of Harmony Robotics, LLC, a startup seeking to bring assistive robotics into the homes of the disabled and elderly near you. So this is really exciting. I can't wait to hear what everybody is going to say. And I'm just going to pass it on to Carla.

Thanks very much, Ellen, and thank you all. It's really an honor to be here. I am so excited. I'm going to be talking about robotics in our everyday lives from a product designer's point of view. And as Ellen mentioned, just a really quick bit about my background: my undergraduate degree is in mechanical engineering. And I did have the great fortune of taking history of design with Ellen. I didn't actually know that industrial design existed, and then I was pining away after it when I found out that you could really study the more human and emotional side of making products. So after several years working as an engineer, I went back to school to a place called Cranbrook Academy of Art, where the Eameses and the Saarinens and many people who are very inspirational in the design world were teaching and learning. And it really allowed me to balance my very technical background with a really creative one. And since that time, I've really focused my career around the design of physical objects that have some kind of electronic or digital component to them. I really love how things come to life when they can be programmed and they can have dynamic interfaces. And I spent several years as a product designer at Smart Design, which is very well known for the OXO peeler, which is also in the Beautiful Users show. But there I worked on everything from cameras to medical equipment, sports devices, and car interiors. And that's really been my fascination. But a couple of years ago, I started doing a little bit more of my own independent work, because I became really fascinated with the future. And sometimes client work can be, of course, wonderful, but very rooted in today. So I maintain my alliance with Smart Design and a number of other entities, but I've really been focusing on technology and how it's going to impact the future of design. And in that effort, I still do consult, but I also do a bit of writing. These are a couple of pieces from the last couple of years. And I'm also teaching a course that I created called Smart Objects at the University of Pennsylvania. And my most recent project is also about a robot, but a fictional robot, and his name is Leo the Maker Prince. This is a children's book about the future of 3D printing. So I've been weaving the future into all of my work, but in a real way, to show how we are really going to be affected.
So as a little bit of background, one of the ways that I really stay in touch with what's coming up in the future is to maintain alliances with places like Georgia Tech. So as a visiting faculty member there, I had the honor of meeting Dr. Andrea Thomaz, who was looking for a designer to work on a robot project that was called Simon. And it was to be an upper-torso humanoid robot in the Socially Intelligent Machines Lab. What that means essentially is that she was putting together a team to create a robot that would be set up for the purpose of studying how we might interact with machines in a really natural, human way. So how might we avoid having to learn a mouse and a keyboard and commands, and simply hand the robot an object, talk to the robot, gesture, let the robot know you don't understand, and all those kinds of things.

Dr. Thomaz had a background from being on some teams at the MIT Media Lab. She had studied with Cynthia Breazeal, who is really a pioneer in the world of social robotics, and had created this project Kismet, which was one of the first robots built to study how we might train a machine to respond to us in a really human way. Could it actually gesture back at us? Could it wink at us? Could it nod at us? Could it let us know it doesn't understand? The project following Kismet was Leonardo. And for Leonardo, the team at MIT, which Dr. Thomaz was part of and Cynthia Breazeal led, built on Kismet and all the work that followed it, and they worked with Stan Winston Studio, the Hollywood studio, to create Leonardo. Now, I've seen Leonardo at the MIT Museum, and he's pretty fascinating. Everything about him seems quite real. If you even look at his fingers, he has kind of little folds, and you just want to touch them. But what happened, from the point of view of studying how people would interact with a robot, is that this robot actually had children screaming from the room, because it was so real, so much like a creature.

So when we embarked on the Simon project, we started with some of those learnings. And we set out to set expectations so that Simon would really be somewhere in between being an appliance and being a robot that you talk to that has a face. So it was a pretty interesting challenge. And this was our starting point. This was the bare bones of what we were going to have with Simon. We knew that we were going to have cameras in its eyes and microphone ears and the ability to move in many ways, including nodding the head, blinking the eyes, lifting the arms, bending the torso. And so as a designer, I began with a really exhaustive search into robot aesthetics, working with Dr. Thomaz and really plumbing her knowledge and her expertise to say, you know, should it be more like this, or more like this? I always say that designers are a little bit like optometrists: we keep refining the vision until it gets clearer and clearer. So these were some categories that I had put together for my research around what the aesthetics of the robot should be. There were these mecha robots. There's this whole area of human clone robots. There are these soft-skin robots. I've got a beefcake category. But the one we really settled on was this friendly doll, which was really this balance of an appliance, you know, like a washer, dryer, dishwasher, and a machine that you talk to. So I began by doing lots and lots of sketches and thinking about: should it have a helmet? Should it have ears? Should it have hair? What should the proportions be?
And there were many, many other sketches, which eventually led to renderings and 3D prints. And the final robot looks something like this. Originally, we had a vision that it would also have eyebrows and a mouth, which is why you see these magnets; we had a unique system with channels behind the face. But this is Simon, at least the head shell; we have some body shells that we're continually trying to add to the robot. And this is a really wonderful portrait that Daniel Boris took for the New York Times. It's my favorite pose.

So these are some of Simon's expressions. When he's interacting with people, he can grasp objects, and he holds them up to his eyes. I'm going to show you a little video, and you'll see a bit of this interaction. In this video, Simon is being trained. Simon can listen, and he can actually parse the words of sentences and listen for a color. And in this exercise, he's listening for a color, and he'll remember the color. So I'll just play this video. There are subtitles because, unfortunately, the audio was recorded at a conference. So there's someone speaking to him, and she says, Simon, take this. And Simon says, sure. And he has pads in his hands, so he can hold the object. And here he is, holding it up to his cameras, which are the eyes. And he's already been trained to understand red. So when she says, where does this go, he says it goes in the red bin. And he can actually look over and find the red bin and properly place it in there. Now in this next one, he's not seen green before; the robot was reset. So he does this gesture: this is, I don't know, green. So then Simon can be trained. She says, this goes in the green bin. And Simon says, OK. And now he'll put together the word green and the color he has seen with the camera eyes, and he'll know green from now on. So this is how Simon learns. So he'll properly place it in the green bin.

So following this project, I've continued to work with the Socially Intelligent Machines Lab. The most recent project that we did was Simon's cousin, which was an interesting design challenge, because I was tasked to figure out what a familial resemblance means for a robot. The name was crowdsourced: her name is Curi. She was actually just recently on the cover of Popular Science in November. I was concerned about the mouth and eyebrows, so this robot actually has a little bit of a mouth, but just a subtle one, so as not to affect the expressions, and round ears as opposed to capsule-shaped ears. But for me, the real vision in this is that as a product designer, I see us bringing these magical moments into our everyday objects. It's not so much that I see eyeballs and ears and this whole really elaborate machine, which is remarkable but not necessarily applicable in all situations; I see it in these subtle moments. So for example, when we were designing Simon, we knew that we wanted ears, because that was part of Dr. Thomaz's research: ears can be very expressive, and we know that from animals. So I said, well, could we put a light in there? And she said, yeah, let's talk with our third partner, Jonathan Holmes, who was an engineering partner. I don't know what we'll do with it yet, but it'll be great to have another element of expression.
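To make that learn-and-sort exchange from the video concrete, here is a minimal toy sketch of the interaction pattern in Python. The average-color feature, the distance threshold, and the function names are illustrative assumptions, not the lab's actual software.

```python
# Toy sketch of Simon's learn-and-sort demo (illustrative only):
# ground a spoken color word in the color the cameras see, then
# recall the nearest taught color when asked to sort.
import math

known_colors = {}  # color word -> (r, g, b) seen when it was taught

def teach(word, rgb):
    """'This goes in the green bin' while the robot views the object."""
    known_colors[word] = rgb

def classify(rgb, max_distance=100):
    """Return the nearest taught color word, or None (the 'I don't
    know' shrug) if nothing has been taught or nothing is close."""
    if not known_colors:
        return None
    word, ref = min(known_colors.items(),
                    key=lambda kv: math.dist(kv[1], rgb))
    return word if math.dist(ref, rgb) <= max_distance else None

print(classify((45, 190, 70)))   # -> None: the robot shrugs at green
teach("green", (40, 200, 60))    # the teacher labels it
print(classify((45, 190, 70)))   # -> 'green': sorted correctly now
```

The point of the sketch is the order of events: perception first, a shrug when the label is unknown, and a one-shot association once the teacher supplies the word.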
And what happened the first time that I actually met Simon? Now, I knew Simon intimately, inside and out really. I knew every part of him, and I knew exactly what to expect. But this really magical emotion took over the first time I interacted with him and handed him an object, and I could see that when he sees the object, the PhD students had programmed him so that his ears would turn the color of the object. So this moment happened. And I just felt like, wow, that's how I would love my objects to interact with me. They should just know what I'm handing them and that I want it sorted by color. And it was almost like I could see what Simon was thinking, and Simon knew what I was thinking. It was like, wow, this amazing moment. So I started thinking hard about how, as a product designer, I really need to bring some of those moments, in subtle ways, into the objects that I'm designing.

So I wrote a piece about it for the New York Times. It was on the cover of the Sunday Review in January of 2013. And as part of the article, I was invited to do a slideshow video. And since I work in the future, sometimes, I have to admit, I don't really know the future, but I have some guesses. So I like to take a little playful approach. You'll see the slideshow is a little bit tongue-in-cheek and definitely playful. But I'll show you a piece of it.

I'm Carla, and this is a day in the near future. Another Monday morning, and my lamp has just gone from dim to bright. I try to turn over, but it rotates. I stumble into the bathroom. I brush one side for a while; the toothbrush vibrates, so I know it's time to switch to the other side. Now I'm really awake. The bathroom mirror says, nice job on your weight, but your heart rate is a little higher than yesterday. As I head to work, my bike confirms my appointment. On our way to Ted's office? Yes. When it's time to turn, my handlebar vibrates, and my jacket lights up to signal a left turn. At the meeting, I draw some sketches with my memory pen. When we're done, it emails my notes to my colleagues. Back at home, the door recognizes me. The lights turn on, the stereo starts playing, and my 3D printer whistles.

So those are our talking, walking objects; you can see the video. But one of the most remarkable things to me was how people reacted to the article and the video. There were 83 comments. And I had never done an op-ed before, never gotten comments. And I started reading them, and I started getting really concerned. One of them said: the day that Simon the vacuum starts tugging at my heartstrings is the day I get back into therapy. What a pointless world we humans loll about in, as in Wall-E, except our parasites will remind us to exercise and eat less of that gourmet food prepared by our countertop sous chef. This one goes on and on. But personally, I think having a robot as a friend is an extremely disturbing notion. Empathy and compassion are now going out of fashion; they will be directed at inanimate objects. And he goes on and on. And there were more and more and more of these. People got increasingly angry. This is my favorite. This is a fellow, Larry Eisenberg, who actually always writes a little poem when he writes his comments: A robot your pet and friend? Have humans come to a dead end? Empathy, compassion are now out of fashion, a sad and ominous trend. So I started thinking, oh, I'm going to write back. I went to a coffee shop and started writing a whole response to this. But then, actually, I noticed that the dialogue kind of started settling itself, and a lot of people started writing back and saying, hey, you know what, I think this is very valuable.
And there's a lot of really great thought being put into these products. And I started seeing comments like this one from Joy Jay in Atlanta. The part that I'll highlight is that she says: as a member of the subset of our population we have come to call disabled, I'm excited about each and every development; they may seem superficial as they're being developed, but people with a variety of physical and mental challenges will come to rely on them for an improved quality of life in the future. And then there started being more of a dialogue in this very positive tone. This is another fellow who says: well said, Joy; as a physician who cared for disabled patients, I would really support this. And then there was also: I came here to say, laundry robots, that's all, bleep blorp. So I kind of liked that.

So I continued to think: okay, really, this is my mission, robotics in our everyday lives. So how can we bring these subtle moments in? And there are three main things that I like to focus on, light, sound, and motion, as ways for objects to communicate with us with human ease. For example, a Mac PowerBook, in some of the older models, actually pulses at 12 pulses per minute, which is the rate of human breathing. Apple actually has this patented. I mean, they know what they're doing, and they know that we respond to this. Jawbone has a menu that speaks to you. I have mine speak to me in Italian, actually. And it will say: your battery is low, you're not connected, the Bluetooth is connected, et cetera. And this is a Polycom; at Smart Design we had this as our video conferencing system. And it does this subtle thing where, when it's not in conference, it'll actually turn its head and kind of hide away. So I love that. And I decided I would bring that into anything I was designing, and they would always be living objects: expressive, listening, connected, context-aware, collaborative, charismatic.

So when I joined Smart Design, my first project was a robot. And I was really excited about it, and I was thinking about all these things I had learned from Simon, and had also learned from studying; there were a number of studies that had been done, actually at Georgia Tech, about how people accept the flaws of a robotic vacuum cleaner more readily than other machines' flaws, and there were lots of real subtleties in how people were feeling connected and naming their Roombas. So I brought this to the CEO of the startup, which was called Neato Robotics, and said: we have to give this robot a personality. So we started thinking about lots of different ways, like, what would a robot vacuum cleaner's personality be? Essentially we modeled it after Gromit: he's got to be kind of quiet, really clever, and not make you feel stupid. And we set out to design it; I had a great team that I worked with. And I knew it was important to think about context. These are some of our sketches laying out where it would be, mapping out a room, thinking about the fact that it would be at a distance, so it would have to have an interface that you could see from far away. It would have to be able to speak to you, maybe from another room, to let you know that something was wrong. We thought about these key moments of interaction.
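Those key moments are, at bottom, a mapping from robot states to nonverbal cues. Here is a minimal sketch of that idea, including a breathing-rate pulse like the PowerBook light mentioned above; the state names, sound files, and light behaviors are made up for illustration, not Neato's actual firmware.

```python
# Toy sketch: map each robot state to a sound cue and a light
# behavior (illustrative names, not any shipping product's code).
import math

CUES = {
    "waking":   {"sound": "chirp_up.wav",   "light": "fade_in"},
    "cleaning": {"sound": "hum_start.wav",  "light": "steady"},
    "done":     {"sound": "ta_da.wav",      "light": "pulse"},
    "trapped":  {"sound": "uh_oh.wav",      "light": "blink"},
    "full":     {"sound": "bin_full.wav",   "light": "blink"},
    "sleeping": {"sound": "chirp_down.wav", "light": "breathe"},
}

def breathing_brightness(t, rate_per_min=12.0):
    """Brightness in 0..1 pulsing at 12 cycles per minute, the human
    breathing rate cited above for Apple's sleep light."""
    phase = 2 * math.pi * rate_per_min * t / 60.0
    return 0.5 * (1.0 + math.sin(phase))

def enter_state(state):
    cue = CUES[state]
    print(f"play {cue['sound']}, set light to {cue['light']}")

enter_state("done")                         # play ta_da.wav, pulse
print(round(breathing_brightness(2.5), 2))  # 0.5, mid-breath
```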
And I really worked closely with the robotics engineers at Neato to ask questions like: would it be able to know the difference between a piece of furniture and a person? And the answer was: well, yeah, actually, the way it maps out the room, it could figure out the difference. And so I thought, that's great; then it could say hi if it sees a person in its way and then go about its way. So we really built the interaction around all these sorts of subtle moments, and we tried to create this same human ease. We did a lot of studies of the moments where facial expressions were something we would have wanted to incorporate in the robot. And I worked with a composer named Skooby Laposky, and we translated these moments into sounds: non-verbal, but still human, expressive, understandable. So I'll play a few of these for you. That's when it lets you know, I'm off to work, I'm cleaning. And that lets you know cleaning's done. That's the hello. Oh, hello. That's when it's trapped. That's full. This is when it wakes up in the morning, or when you turn it on. And this is when it goes to sleep. So the sounds are part of it. The first version has a light that shines through, and you can see it here in this prototype. And this is a little prototype; I really like to build things physically to get a sense of how they're going to feel. You don't know until you can actually feel it in your hands. And we were going to have soap bubbles; we didn't do that. So this is just a part of the logic that was built. And then this is the vacuum that's on the market right now. The interface is smaller, but it's still in the works, in the vision, to have that large bright light that you'll see from across the room. So with that, thank you.

So I'm going to talk to you from the perspective of a robotics engineer, and how design fits into that directly. My focus is going to be on individuals with disabilities. And primarily, why this is an interesting target demographic is that, guaranteed, everyone, if you live long enough, will have a disability. And why is that? The way you define a disability is a change in your normative function of life. So if you were a tennis player at the age of 18 and you still want to play at the age of 60 but don't play so well, and you can have an exoskeleton that can help with your tennis swing, that's actually an enhancement of your quality of life. And so one of the things is that by designing robotic technology that understands that as a concept, it means you design technology for everyone. So I'm going to talk specifically about these types of robotics and the way that they work and the way that they interact.

When we think about healthcare robotics, these robotics for everybody in everyday life, we can think of it as a range of environments. We would like these robots to function in our homes, but also in the school setting, as well as in hospitals, in any of these domains. And so if you think about design, you have to think about the environment you want these robots to actually go into. The other thing is the population. You have the typical user, you have those who have disabilities, you have the elderly, you have young children. All of these target demographics are potential users. A robot has to be smart enough to adapt to whoever it's interacting with, whether it's the parent or the child in the same home. And so all of these concepts have to be included when you're thinking about design. I'm going to talk a little bit in this talk about the role of assistive therapy in design and robotics. So what is therapy? Therapy is primarily a repetition of doing some type of activity.
And so people go through physical therapy; those who are born with a disability, a child, might go through occupational therapy over and over again. It might be that you have a sports injury: you play baseball and you get hurt, and you go through therapy, or you had some injury, a car accident. So therapy is not just for those who have disabilities; therapy is for anyone who needs some aspect of repetition, repetition, repetition. So robotics is good for that. If you think about what a robot is good for, it's things that are repetitious: you do it over and over and over and over again. So this is a nice fit for robotics. The problem is that you have the human on the other end. So how do you combine repetition, which a robot is very good at, with a human, where they're dynamic and they're different, and you have children and adults and elderly, and you have a buff player or you have a very weak child? All of this demographic, we have to think about. And one of the things is that there is a need for this, a growing need, especially since we're all here in the US, where there's this issue with healthcare. We want to stay in our homes at the age of 80. We want to be able to stay in that home and not go into a nursing facility. So a robotic assistant, a robotic socially interactive assistant, can really help with that.

So what are the challenges then? As a robotics engineer, I can design a robot that's very functional and does exactly what it should do. It can vacuum the floor. It can pick up things from the floor. It can fold the laundry. But now you have a human in the loop. And when you have a human, if you think about therapy, there are different target demographics. There's the patient: the child, someone who's elderly, someone who's recovering. And you have the clinician who's actually giving the direction, or the surgeon or the doctor. These are two different types of people, and yet you need the robot to be able to work with both. And so when we think about design and we think about robotics, we bring in the user from the very beginning. We bring in the patients from the very beginning. What kinds of uses do you see? When would you use this robot to actually do a therapy? Would you be scared of it, as a child? Would you run away screaming? Or would you see it as a friend and say, okay, I'll do therapy with my robot playmate? As a clinician, do you trust this robot? Do you trust that I can tell the robot, this is what I need you to do with my patient, and trust that the robot will do exactly what you said and then report back and not lie? This whole aspect of robots that are deceptive and lying is a whole area of research. Because if a child asks, does it hurt, what is the robot supposed to say? If it says yes, is the child going to do the exercise? But saying no is a form of lying. Or the clinician asks, did the child do the exercise? Yes? Is that a lie or not? And so even designing things like emotions and deception and lying is part of the design. And of course, you have to make sure the robot is engaging at all times.

So what I do in my lab is we focus on children who have disabilities and go through therapy. If you think about children, most of us were one at one point, so we kind of understand what it is to be a child.
Imagine now you have a child with a disability, either acquired, say a teenager who was playing sports and unfortunately got hit too hard and has a traumatic brain injury, or a child who was born with a disability because they were a preemie. And this is a target demographic that, unfortunately, is growing, because our technology and our medical teams are so great: we now have a lot of children coming up and growing who have a disability. And so I'm going to focus on children, one, because they're unique. One is, they don't follow directions. Children don't follow directions at all. They try sometimes; sometimes it just doesn't make sense to them. If it doesn't make logical sense, why should I do it? Children are also unique because, I say, if it works with a child, it will work with an adult, guaranteed. And so by focusing on children as a target demographic, it means that when you design the robotic technology so that it works, it will work for pretty much almost anyone. So children are a really good challenge in the robotics world.

And so the objective with children: if you think about a child, one, it has to be interactive, it has to be friendly. I mean, these are children; they're used to playing, they're used to understanding, and you have these vibrant toys. So your robot has to be engaging and friendly and interactive. It has to evoke a certain response from the child. But that's the child's point of view. From the clinical point of view, the robot has to be very strict, has to be able to take directions and carry them out, and the clinical person or the doctor or the surgeon has to know that the information the robot is giving is true. So you have to have this very strict protocol, and be compliant, and understand HIPAA and things like that. And then it also has to address these issues of, it's a child, after all.

And so what we design, how we design our robots, is we design robot playmates. All kids love to play. It's their nature; it's what they do. In fact, that's why there's a whole toy industry making toys for kids. You're born playing, because it leads to curiosity and expansion. And so we design robots that play in order to do therapy. And so what we have here is a Pleo. One of the things I like to do is go out and buy things and hack them: a lot of money goes into designing these, so why not take something that's already there and hack it for our function? So this is our dinosaur. And what we did is we actually hacked a Wii nunchuck to control it. And the whole goal was this: one thing about therapy is repetition, repetition, repetition. And one of the aspects is, if I have to use my arm over and over again, think about when you have a cast: you're not going to use the arm with the cast, you're going to use the one that works. So for a child with a disability, you're going to use the side that feels most comfortable. So how do you get a child to do something that hurts? Well, why don't you give them some incentive? So what we did is we basically said: you can't play with the robot unless you use the side that you have to exercise with. And these are all the things that you can do: you can have it make noises, you can have it make sounds. And so we would have dance-offs with our kids. Basically, okay, whoever gets the best dance-off; and that means you have to move your hand, you have to move your wrist.
And maybe it does hurt a little bit, but look at what you can do with this. And so you keep going and keep going. And whatever the ability of that child, they're doing it. They're making the robot dance. They're making the robot make these sounds, by hacking a nunchuck to a dinosaur robot. So that's this kind of interactive play. And at the end of the day, you're like, yeah, but you're playing with robots. It's like, yeah, but they're also doing therapy. So that's really combining play, which is the natural child inclination, with this whole aspect of designing a robot that's very functional.

And so one of the things we had to do, and I love doing research in this domain, is ask: what is play? What is the definition of play? How do you design play in the robotic sense? When a child is playing, what are they really doing? They're exploring their world. They're learning. The neurons are firing. They're looking at what's going on. They're looking at cause and effect. So how can we take that natural curiosity and have the robot evoke that same response? So a lot of what we do at the very beginning is understanding what play is. It means that we watch a lot of YouTube videos of parents uploading videos of their kids playing, to understand: okay, what are they doing? How are they interacting? How does one child interact with another? How do you do rules? If I give a child a set of blocks and I say, play with them, do you know that 80% of the children will do the exact same thing? They stack. We don't tell them this. They stack. If we give them a bunch of different objects and we put out a big bin and we say, play, you can do anything you want, what do they do? 80% start putting things in. 20% start throwing, 20% will hit, but 80% of the kids do exactly the same thing. So understanding what play is is really about observing your participants, having kids come in and just saying, here it is, what do you want to do? And that information is free. You can take that information, and then the robot can say: oh, I understand what play is. I observe, I model what the child is doing, and then I interrupt: I'm going to butt in and play with this child, just as they're used to. So now you're designing the robot to function as if it were natural, a natural inclination. The robot stacks blocks; oh, I understand that, because the child is stacking blocks. And so this kind of interaction is so important. A crude sketch of that observe-model-interrupt loop follows below.

And so, some of our play projects. I'm going to show you a few of the videos; this is the cool part, and I'll explain some of it. So this is Darwin. And the one I'm showing here is designed to encourage gait movements; I'm going to mute this a little bit. A lot of the kids that we have, because they have disabilities, have issues where they're trying to get their gait to be more periodic. Basically, they need to practice walking, practice walking. And the best way to practice walking is: you walk. I mean, that's the way you practice walking. So how do you get a child to practice walking? You have to engage them.
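Here is a crude Python sketch of that observe-model-interrupt pattern, with made-up action labels; it is an illustration of the idea, not the lab's code.

```python
# Observe a child's play actions, model the dominant pattern, then
# join in by doing the same thing (illustrative labels only).
from collections import Counter

def observe(actions):
    """actions: a stream of labels such as 'stack', 'bin', 'throw'."""
    return Counter(actions)

def join_in(model):
    """Interrupt by adopting the child's most frequent play action."""
    action, _count = model.most_common(1)[0]
    return action

observed = ["stack", "stack", "throw", "stack", "bin", "stack"]
print(join_in(observe(observed)))   # -> 'stack': the robot stacks too
```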
One of the things we saw, because we're bringing these children in at the very beginning, is that if we make the robot just a little bit stupid, you can actually evoke behaviors from the child that are amazing, ones that empower them. As an example: I have a robot, and we're just kicking the ball back and forth. The child's not walking; the child's using their dominant side, kicking the ball back, the robot kicks it back, they kick it back. So we're involved, we're excited, we're playing this back-and-forth game. And guess what? My robot makes a mistake and kicks the ball off to the side. Silly robot! I'm going to go get it for you. So the child will walk and kick the ball back, and then we're engaged back and forth, back and forth, back and forth. And then, oops, silly robot, I'll get it for you. And so what we do is we take that, again, involving them in the interaction, where it's engaging them, so they're having fun, and then you change the script just a little bit. And what happens is, it's not even therapy anymore; it's part of the interaction. It's just the way the game goes. And what we found with our studies is we can create a therapy session and they will keep doing it and keep doing it. It's like play, even though at the end of the day, if you look at how much they're walking, they're walking, they're actually moving around.

The other thing we have is mimicking. So this is a Simon Says, with the Georgia Tech logo, of course. Our kids love it; our parents hate it. So this robot does a mimicking game. And what is mimicking? Mimicking is basically: look at me, look at what I can do. You know, can you do this? I can do it better than you. And this one is upper arms. So the robot, Manoy, Georgia Tech's Manoy, just moves his hands up and down: follow me, mimic me, do some things. Now, it's not very capable in terms of joints; it basically can move its arm up, and that's about it. So how do you evoke a mimicking game with a robot that has very limited capabilities? One way is you use aspects of voice. You use things like: can you do this? Can you do it a little bit higher? I can't do it that high. You're doing really well, maybe go a little higher than me. So what happens is, I take a function where the robot actually can't do the motion, but I can use voice, I can use intonations, to encourage the child. And why the parents hate it is because it can get kind of annoying. If you have this robot that's saying, look at me, come on, and we use a robotic voice, you'll hear it in a little bit, after a while you're like, do we have to do another trial? Can you just take it back with you, please?

And then the last one is, we've been exploring the use of music and sound, and how sound evokes emotions. And what we've found is that most children learn the same kinds of songs over and over. You know, Row, Row, Row Your Boat: everyone pretty much knows that. If you pick out a child five years old, they kind of know the same songs. Even if they went to different schools, private or public, they kind of all know the same songs. And so we use that whole aspect of songs and familiarity to evoke that.
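The "silly robot" trick described a moment ago, a deliberate mistake every few turns to draw the child into moving, is simple enough to sketch; the turn structure, action names, and cadence here are hypothetical.

```python
# Toy sketch of engagement through deliberate imperfection: the
# robot mostly plays well, but errs on a schedule so the child has
# to get up, walk, and help (made-up names and numbers).
def robot_turn(turn_number, mistake_every=4):
    if turn_number % mistake_every == 0:
        return "kick_wide"   # oops: the child walks to fetch the ball
    return "kick_back"       # the normal rally keeps the game fun

for turn in range(1, 9):
    print(turn, robot_turn(turn))
# Turns 4 and 8 go wide; the walking hidden in play is the therapy.
```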
So our other project is, we use video games as well. This is Darwin, and what Darwin has learned how to do is play a game of Angry Birds; here's Darwin practicing. So why video games? Well, unfortunately, we are in a world where video games and tablets are the norm; in fact, there was a study that said that by the age of two, seventy-some percent of kids are users of smartphones or tablets. Okay, so this is the world we live in. Let's take advantage of it. And so what we do is we have the robot learning. And again, it does a little bit of, oh, I don't know how to do this, to teach and encourage the child to do Angry Birds. So everyone knows: that's it, I understand. No voice, and you knew exactly: ooh, that means he must not have done so well. So a lot of what we do mimics how people do things: we learn, we observe. And what we found, again in this whole aspect of engaging a child where they are, is that if the robot learns, but doesn't learn quite well, and every so often makes mistakes, the child will actually continue playing and teaching this robot over and over again. And so basically, if you think about Angry Birds, it's touch, swipe, touch, swipe, swipe, touch, swipe. I play Angry Birds, but I've got about five minutes and I'm done. What we've seen is that when you have that robot there, and they're touching and they're swiping, and Darwin does it and then doesn't do it, the child is always interacting: touch, swipe, touch, swipe. And so we can have this whole engagement where the child and the robot interact together continuously.

So, I'm almost done, I think. The other thing is that we love robots, and we'd love them in every home, but our robots don't go in every home. And we have these video games. Not all kids can use them: if they have fine motor limitations, we have some devices that allow them to interact, but for some of them, this swiping is difficult. So how do you have Darwin interacting with a child who has some movement, but maybe not that fine motor control? So we designed something called Darwin Super Pop. Basically, we again have a video game, but we take away the need for fine motor skills: we use the Kinect, which takes any gross motor movement, and that gets translated into playing a video game, and we have Darwin encouraging, just like before. And so these were just some of the results. In our trials we've had two kids, and we already know the issues, because again, we bring in kids from the very beginning. But the whole aspect is that Darwin, we found out, can be really irritating if you're not doing too well. So what does this mean? If you are not able, or you are tired, or you are just like, I'm bored with this game, and Darwin is there saying, come on, come on, come on, come on, after a while you get a little frustrated. So that has to be fixed, because we want this to be engaging. Not every child is engaged by robots. Oh my gosh, but not every child is engaged by robots. And so understanding that concept and designing around it is important.

And so lastly, one of the things we do is, not only do we engage these kids on the therapy level, but we also engage them on the education level. We design robots and we design interfaces to teach kids, children with disabilities, how to program robots. We're turning them into computer scientists, because one of the things that we found is that children with disabilities have a unique way of problem solving. And it's because, throughout their life, they've had to live in a world that isn't really made for them, and they've had to figure out, okay, how do I get up the stairs of a place that's not accessible? And so by giving them the skills, basically programming their robots to do things, we've actually had some amazing results.
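To give a flavor of what child-level robot programming can look like, here is a tiny sketch of a grid robot driven by a list of commands; the command set is invented for illustration, not Zyrobotics' actual interface.

```python
# Toy sketch: a child composes simple commands and the robot
# executes them on a grid, e.g. to work through a maze.
def run_program(commands, start=(0, 0), heading=(0, 1)):
    """Execute 'forward', 'left', 'right' for a grid robot."""
    (x, y), (dx, dy) = start, heading
    for cmd in commands:
        if cmd == "forward":
            x, y = x + dx, y + dy
        elif cmd == "left":
            dx, dy = -dy, dx     # rotate heading 90 degrees left
        elif cmd == "right":
            dx, dy = dy, -dx     # rotate heading 90 degrees right
    return (x, y)

# One child's maze solution: two steps north, turn right, one east.
print(run_program(["forward", "forward", "right", "forward"]))  # (1, 2)
```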
We give them the same challenges: design this, solve a maze, program your robot to solve a maze, program your robot to play kick the can, program your robot to draw. And some of the results are amazing, in terms of them designing solutions that we wouldn't even think about, just because of their perspective. And so, really, looking at how you take this demographic of children with disabilities and turn them into designers of their own solutions, I think it's a wonderful idea. So, thanks to my collaborators. I work with a bunch of clinicians; that's another story, a totally different language. I'm an engineer, classically trained. And again, bringing in participants from the very beginning also means that, as a robotics engineer, you have to learn how to communicate in a different language. And they understand now; my clinicians can actually talk robotics, which is amazing to hear. And I'm done, thank you.

So, some great talks. I've got a couple of tough acts to follow; I'll try my best. I'm going to talk along the same lines, about how robotic technology can change lives, but I'm going to specifically focus on robotics integrated with the human itself, from a prosthetics standpoint. How do we think about robotic systems that can actually be integrated with the human body, actually be worn by people and used as a functional part of their being? Really quick: I'm part of the Johns Hopkins Applied Physics Laboratory. The Applied Physics Laboratory is a university-affiliated research center, and we're a trusted agent of the government. So the government can come to us and bring these very hard technological problems, and we bring them an unbiased solution. We're a non-profit, and we're affiliated with the Johns Hopkins team. We do everything from spacecraft; we have a spacecraft around Mercury right now, and we actually have some instruments on the New Horizons spacecraft, which is taking pictures of Pluto, so you might have heard of us in the news recently.

So to give you a little bit of context, let's start by looking at the origins of prosthetics. Here's a very old artifact, dated to about 950 to 710 BC: a prosthetic toe. This was worn by an Egyptian woman, and it was discovered in Egypt near Luxor, and presumably it was purely for cosmetic reasons; but it gives you a sense of the age of some of these devices. Here's a replica model of what's known as the Capua leg, which really brought to life the initial concept of an artificial leg: how could a prosthetic device be used to regain mobility after lower limb loss? This was discovered in Italy. Here's an interesting early prosthetic from the 1500s, and it almost takes the appearance of an extension of a suit of armor. In fact, the user of this device was a knight himself; he had lost his arm, and what he wanted was to be able to hold his lance or hold his sword, or even maneuver his horse around again. He wanted to maintain that functionality, so he had a device designed that could be part of his equipment, so to speak, from a daily living standpoint, as well as be functional. Some later ones: from the late 1800s, here on the right, sorry, on the left from your perspective, is one that was designed for a piano player. It's a fixed position of the hand, but it takes an almost piano-centric shape. It allowed a piano player to interact with the keys effectively, even with upper limb loss.
And then, also from the late 1800s, there's this metallic one, a Victorian-era replica, where you really see the almost skeleton-like nature of an underlying prosthetic device; we presume that there might have been some sort of cosmetic covering over that, or maybe not, we're not really sure. In 1912, we really start to see the onset of functional devices. One of the most prevalent is a patent by D.W. Dorrance, which is a split-hook design. And interestingly enough, the split-hook design is still used a lot today, because it's very simple and it actually gets the job done. It's very sturdy, it's very durable, and it's basically a no-nonsense means to an end for prosthetic users. So when you think about it, 100-year-old technology is still being used today; it's quite remarkable. About midway through the century, you start seeing more robotic devices. This is when actual articulated devices started to come into play. Here we have a couple of articulated hands that had myoelectric inputs. The one on the right, specifically, was the first one that was fully integrated, such that a user could wear it around and functionally use the device.

Taking a spin now away from limb amputations and focusing on other types of prosthetic devices: here, in 1958, we have the first artificial pacemaker. So think about how prosthetics can actually be integrated inside the human body, and not just be external devices that we interact with. 1961 brought the cochlear implant; now, this picture isn't from the '60s, but it's representative of the device in question. And what this touches on is how these devices can actually be integrated with the human brain. A cochlear implant measures sound in the environment and transduces that sound into electrical stimuli; those signals get transmitted to the nerve bundles in the inner ear, which then make their way to the auditory cortex of the brain. So you can basically allow somebody who's deaf to hear again with a cochlear implant. In 1982, we see the Utah arm, which is representative of the first full-limb powered prosthetic device. It has a fully articulating shoulder and elbow, and a hand that can open and close as well as rotate, through myoelectric input. Myoelectric inputs are measuring surface voltages on the skin, based on the underlying muscular movements. In 1992 comes along the microelectrode array, designed by Richard Normann at the University of Utah and some of his collaborators. What this device allows people to do is measure the actual neurons of the brain firing. The device in question is about three millimeters on a side, and it's really good at measuring brain function: you can implant it in the human brain and measure activity at the neuronal level. And then in 2005, going back to lower limb amputations, functionality was dramatically improved for lower limb amputees with the advent of powered knees, where people can actually do powered walking and have powered gait mobility again, without having to use fixed lower-limb prosthetic devices.
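As an aside on how those myoelectric inputs drive a device: classic two-site myoelectric control reduces to rectifying and smoothing the voltage from two muscle sites and comparing against thresholds. This is a generic textbook sketch with invented numbers, not any particular product's controller.

```python
# Toy sketch of two-site myoelectric control: the envelope of the
# flexor and extensor surface-EMG signals opens or closes the hand.
def envelope(samples):
    """Rectified mean over a short window of surface-EMG samples."""
    return sum(abs(s) for s in samples) / len(samples)

def myo_command(flexor_window, extensor_window, threshold=0.3):
    flex = envelope(flexor_window)
    ext = envelope(extensor_window)
    if flex > threshold and flex > ext:
        return "close_hand"
    if ext > threshold and ext > flex:
        return "open_hand"
    return "hold"                 # neither site is above threshold

print(myo_command([0.5, -0.6, 0.4], [0.05, -0.02, 0.04]))  # close_hand
```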
So that kind of brings us into the work that my colleagues and I are involved in, called Revolutionizing Prosthetics. This is a Defense Advanced Research Projects Agency, DARPA, funded program. And what they were really interested in is restoring functionality for wounded warriors. The wars overseas were bringing back a lot of warriors who had upper limb amputations and lower limb amputations, primarily because body armor technology is so good these days that improvised explosive device blasts end up causing a lot of upper limb and lower limb loss. And so the Defense Department was like: hey, we've got great soldiers who are willing to sacrifice their lives for our country, and they come back with limb loss. Let's restore some of that functionality, and kickstart prosthetic technology to the next level. And so that was the genesis of the Revolutionizing Prosthetics program.

So in 2005, DARPA set up two teams. One of the teams, DEKA, the creator of the LUKE arm, had the charge of creating a device that could make it to the market as quickly as possible: how can we create an advanced prosthetic device that can get to market as quickly as possible through an FDA approval cycle? The LUKE arm is pictured there, and it's controlled through basically an external interface: it's got foot controllers, inertial measurement sensors on the foot, and the user jogs their feet up and down and back and forth, and they can move the prosthetic in a variety of fashions. At APL, we were charged with pushing the state of technology: could we make the most advanced prosthetic imaginable, right? And not only create that device, but control it through neural inputs. How do we envision a prosthetic system that can be controlled by basically thought alone? So that was our charge, and we embarked on that in 2005.

Now, the key to this whole thing, when you think about neural prosthetics, is the human brain. So we were thinking to ourselves: okay, we need to find the best possible subject available. And so clearly we looked to Homer Simpson. No, I know, I kid. When we think about what we're after, when we look at an actual X-ray of somebody's head, we're talking about implanting these microelectrode arrays in their motor cortex, which is a small strip of matter on both sides; your right hemisphere controls your left side, your left hemisphere controls your right. And these implantable arrays are really good at listening to the brain signals going on, right? Remember, they're really small, but all they need to do is listen to a small subset of the neurons. And we know where to locate them because, before we implant, we actually use imaging technologies, functional MRI, where we have the users observe or imagine upper limb motions, and areas of their brain light up with a lot of activity. We can place these devices in those locations because we know that there's a correlation with upper limb function at that point.

So what we get out of these devices is a whole symphony of electrical activity. And the raw recording, if you listen to it, just sounds like noise, kind of like when you turn on your stereo or your TV and there's no signal: fuzzy noise. But what we can do is train neural decoding algorithms to listen to that noise and actually pull out correlations in the underlying data. So we can look at what they call the preferred directions of neurons, where, when a user observes the limb moving, there's a certain subset of those neurons that are very active. And so you can develop classifiers so that, in real time, as they see those signals, they know: okay, move the arm left and right, up and down, and so forth. That's the decode side.
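The classic formalization of that preferred-direction idea is the population vector: weight each neuron's preferred direction by how far its firing rate sits above baseline, and the weighted sum points where the user intends to move. The numbers below are invented for illustration; APL's actual decoders are more sophisticated.

```python
# Toy population-vector decoder (illustrative numbers only).
import numpy as np

# Each row: one neuron's preferred 2-D movement direction.
preferred = np.array([[1, 0], [0, 1], [-1, 0], [0, -1],
                      [0.7, 0.7], [-0.7, 0.7]])

def decode(firing_rates, baseline=10.0):
    """Sum preferred directions weighted by rate above baseline."""
    weights = np.asarray(firing_rates, dtype=float) - baseline
    v = weights @ preferred
    return v / (np.linalg.norm(v) + 1e-9)

# Neurons tuned to +x fire hard, others sit near baseline, so the
# decoded intent points rightward.
print(decode([30, 11, 2, 9, 22, 4]))   # roughly [0.99, 0.15]
```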
But we're also focusing on the sensory encoding side, where, as the limb interacts with objects, it's outfitted with a ton of sensors over the hand and fingertips. So how do we take that information, encode it in some manner, and actually directly stimulate the brain back, right? So that when the user moves the limb in space, they actually get a true sensation of touch, and they get a true physical embodiment of that limb system through the sensory perception being fed back to them.

So here's a video of the Modular Prosthetic Limb. In this case, Matt Perry is wearing a motion capture system on his arm, and this is just highlighting some of the motions this arm can go through, and how naturalistic and human-like it is. There are 17 motors in this system that control 26 articulating joints, which happens to be about two shy of what the human arm and hand can actually do. And you'll watch as he manipulates this football here: there's a four-motor thumb, so his thumb is controlling that thumb almost in a one-to-one fashion. You can look at the dexterity of the thumb; a key enabler of human-like hand dexterity is found largely in the thumb itself. Here's some simple tool manipulation, which is kind of ironic, taking a high-dexterity robotic system and distilling it down to a wrench, right? But that just shows some of the dexterity involved in tool manipulation, closed manipulation, and things like that.

Some of the key design challenges associated with the limb were packaging, right? How do we cram all that functionality into a space that has to be the size of the human arm? Because that was the charge: it has to weigh, and it has to look, and it has to behave like the human arm, right? So one of the key design decisions was to cram all the actuation into the hand, right? And the reason that was done was because a large percentage of upper limb amputations are at the transradial level, the forearm level, and more distal, right? A lot of current high-dexterity robotic systems are based on tendons and motors that are more proximal, really a lot like our human anatomy: we have a lot of musculature in the forearm, and we have tendons that actuate the hand. But the problem with that approach, from this standpoint, is that we would immediately eliminate a very large percentage of our user population by doing that. So that forced us to cram all the actuation into the hand itself, so that we could accommodate those more distal amputation levels. So in the hand itself, we have the central processor, what we call the limb controller for the MPL, as well as 10 motors that control 19 articulating joints. It's an extremely dense and highly functional device. And obviously, another key reason why we made it modular was to accommodate various levels of upper limb loss. There are integrated battery packs for higher-level amputees, and the battery pack can either be worn on a belt pack or wrapped around what's called the socket, which is the interface between the limb and the patient; a conformal battery can be wrapped around the socket.

So one of the applications that we look at is, again, this direct neural control paradigm. This is a video of Jan Scheuermann; she's up at the University of Pittsburgh.
She was recently explanted, about three or four months ago, but this video was shot about half a year ago. She has two of those microelectrode arrays implanted in her motor cortex, and she's a complete quadriplegic. She had a neurodegenerative disorder that basically left her disabled from the neck down; she is paralyzed. About 20 years ago she had the onset of this disorder. And what she does is she observes the limb going through some general motions, and she trains up that neural decoder. And after that, she's given free-form ability to control the limb just by thinking about it. Think about that for a second. All she really has to do at this point, once she trains up that neural decoder, is think about moving the system. So here she is feeding herself for the first time in 20 years. And that's really all she wanted to do: she wanted to be enabled. Going back to what was discussed earlier about disability, she wanted to be re-enabled to feed herself. You or I might take that for granted day to day, but for her, who is wheelchair-bound and has no mobility, this was a huge step. And so she spent her time really helping us understand the brain-machine interface: how these devices interact with the human body, how we can extract signals from them and control some of these prosthetic devices. I'm going to skip this part of the video because I want to talk about what's next. So she was really enabling us to understand, but at the same time we were helping her, right? It was really a symbiotic relationship between the design team creating this device and the application, but also understanding her needs and helping her achieve things that she wanted to achieve.

So in addition to these advanced cortical implants, we integrate the MPL with patients who use more conventional technologies, thinking more about surface electromyography. To refresh your mind: surface electromyography is, again, when we move our muscles, you can stick electrodes on the surface of the skin and measure this activity, right? And you can actually build a pretty good picture of the underlying musculature from that. So, Master Sergeant Joseph DeLaurier was wounded in Afghanistan; he's a triple amputee. He's a Silver Star recipient, and he has actually testified in front of Congress as to how this technology is impacting his life for the better, in a positive way. We've worked with him on a number of occasions; he's a transradial amputee on the left side. We're going to come back to him in a second; he's in the next video. On the right here is Les Baugh. Les is actually a bilateral full-shoulder amputee. The thing about Les, and his patient classification, is that they actually stand to gain the most from these devices, when you think about the amputee population, if this technology over the next few years proves to be as effective and easy to integrate and usable as we hope it can be, right? They stand to gain the most because of their condition and the debilitation associated with it. And Les has become a bit of a YouTube sensation, so you can check him out on YouTube; I think he's got over two million hits or something, and it blew up over the span of about a week. So check him out.
And then there's Johnny Matthew, who's one of our longest-duration patients, a transradial amputee. Both Les and Johnny have actually undergone quite a remarkable surgical procedure called targeted muscle reinnervation. When you think about controlling a prosthesis with surface EMG signals at the transradial level, it's a much easier problem, because the muscles that used to control the hand are still there, right? So if you wanna control a high-dexterity hand and you have those muscles, we can do a fairly straightforward correlation to motion. But as you lose the arm more proximally, you lose that musculature. Then you start thinking: how do I use the biceps and the triceps to move a hand? It doesn't really make sense for a user, right? So at the Rehabilitation Institute of Chicago, Dr. Kuiken invented a procedure where you take those residual nerves and reinnervate them into other musculature of the body, like the chest and the back. And so when the patient thinks about moving their hand again, those muscles fire, as if their hand were moving. And you can stick electrodes in those locations and get a much more naturalistic control paradigm for these systems. Les and Johnny both underwent that procedure, and it's tremendously helping their ability to control these devices.

So let's go back to Joe again. This is Joe down at Walter Reed National Military Medical Center. He's wearing a transradial socket with what we call dome electrodes on the inside, measuring surface electromyographic signals. And what we've done is train up some classifiers to look at those signals. If you ask Joe what he's thinking about when he's using this device, he'll tell you it feels very natural to him, because the way the system is designed, it's trained on his own naturalistic movements, right? And he does a number of functional assessments, not too dissimilar from Jan's, to, one, quantify how well he performs with a device like this, and two, figure out how we can improve the design of the system, the human interface, and the characteristics of the system in question, to make it better and more usable for the patient population.

And so what we have going on at Walter Reed is a 12-patient study right now. The MPL, I didn't mention this, is still a very prototypical device; it's not cleared for any sort of medical use by the FDA. That's probably further down the road. But we're doing a limited case study at Walter Reed where we're examining 12 patients, most of them wounded warriors, and measuring their proficiency and effectiveness in controlling this device.
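To make the classifier training he mentions concrete, here is a minimal sketch of surface-EMG pattern recognition: a labeled window of electrode data goes in, a discrete motion class comes out. The mean-absolute-value feature and the nearest-centroid classifier are stand-in assumptions for this sketch, not the pipeline actually used at Walter Reed.

```python
import numpy as np

# Sketch of surface-EMG pattern recognition: each training trial is a window
# of EMG from electrodes in the socket, labeled with the motion the patient
# was asked to imagine ("open", "close", ...). Feature extraction and the
# classifier choice here are illustrative assumptions.

def features(window):
    """Mean absolute value per channel; window shape: (samples, channels)."""
    return np.mean(np.abs(window), axis=0)

def train(windows, labels):
    """Nearest-centroid classifier: average feature vector per motion class."""
    feats = np.array([features(w) for w in windows])
    classes = sorted(set(labels))
    return {c: feats[[l == c for l in labels]].mean(axis=0) for c in classes}

def classify(centroids, window):
    f = features(window)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Example: 8 electrode channels, 200-sample windows, two imagined motions.
rng = np.random.default_rng(1)
open_trials = [rng.normal(0, 1.0, (200, 8)) for _ in range(20)]
close_trials = [rng.normal(0, 2.0, (200, 8)) for _ in range(20)]
model = train(open_trials + close_trials, ["open"] * 20 + ["close"] * 20)
print(classify(model, rng.normal(0, 2.0, (200, 8))))  # likely "close"
```

The same structure applies after targeted muscle reinnervation; the electrodes simply sit over the reinnervated chest or back muscles instead of the forearm.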
So then, as an alternative to direct prosthetic applications, let's think a little about alternative technological applications of the MPL itself. This is RoboSally; you can find RoboSally on YouTube as well. We use RoboSally to work on what we call human capabilities projection, or robotic telepresence, basically: how do we enable a robot to act as a human surrogate in environments that are dull, dirty, or dangerous? Maybe there's a scenario where you don't want to send a person in. You can send the robot in, and if you can control that robot effectively, it can be as capable as the human, right? And especially if we leverage the dexterity in these hands effectively, that really is an enabling technology for doing this. So we do a lot of independent research projects where we focus on how the limb technology can be transitioned to other use cases, specifically robotic telepresence, but other areas too. How do we take a step further back and remove user burden, such that you could control this robotic system through suggested courses of action? Maybe you can say, hey, go in that room and grab me the red object, right? Instead of having to directly control every motion of this system, like we're doing here, kind of like a puppet, how do you give that robot more autonomous functionality?

And so I should wrap up, because we're actually running out of time. In closing, I just wanted to thank everybody here for having all of us speak. I wanted to thank all my colleagues at the Applied Physics Laboratory and the other partners we work with at Pittsburgh, HGT, and Caltech for being part of this amazing team helping to bring robotic prosthetics into reality in the 21st century. Thanks a lot.

You can have that one. Okay. You want me here? I can't take that one. You want me down there? Yeah, I want to sit here. Oh, okay.

So that was so cool, and we have time for some questions. And again, we'll close out at eight, but that gives us a nice time for conversation. I just wanted to start with one that any of you could answer, which is: what is a robot, anyway? I was trying to understand it myself recently. Is something as simple as a Coke machine a robot, because I give it input and it responds to me? Or does a robot have to have a heart and courage and all that stuff? What is the minimal definition of a robot? Maybe one of you could take a shot at that.

I'll take a shot. It's actually a very hard question to answer. Use the microphone, please. You'll get a different answer from everyone. Is this a robot? We recently had this conversation, and it really boiled down to this: when you think about a robot, a lot of people associate it with, well, it's gotta be humanoid, it's gotta look like a human to be a robot. Whereas a lot of people think it doesn't necessarily have to look like a human; it has to have some sort of underlying functionality that makes it a robot, right? Then we started thinking about autonomy and automation. So there are kind of two camps that split the line. I'm gonna skirt giving an actual answer and just explain the conundrum, because it really is a philosophical argument.

Since you make these very human-looking robots, do you think a Coke machine is a robot?

I think a Coke machine could be a robot. I mean, essentially, an object that can take inputs, can understand something about the environment around it, process those and make decisions about them, and then react to those decisions through some of the things I talked about: sound, light, and motion.

So that's definitely cognitive. And obviously your work is about cognitive robots.

In the robotics world, we have this paradigm called Sense, Think, and Act, and a robot has to have all three. So take a Coke machine that can actually sense. When we say sensing, it's not just programmed, okay, if you put the coin in, I sense the coin, but really sensing the environment, the changes, the dynamics, and thinking about it, so every instance is different. If I put in a fake quarter, it actually processes it, takes an image, and says, no, this is a fake quarter. And then act, which is, you know... Give you a can of Coke. Effect the environment in some way; grab something or spit something out.
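In software terms, that Sense-Think-Act definition is literally how simple robot control loops are written. Here's a minimal sketch built on the panel's vending-machine example; the sensor, the quarter-weight check, and the dispenser are all hypothetical stand-ins invented for illustration.

```python
import random

# Sense-Think-Act as a control loop, using the panel's vending-machine
# example. Every class here is a hypothetical stand-in for real hardware.

class CoinSensor:
    def sense(self):
        # A real robot perceives the environment (e.g., weighs or images
        # the coin), rather than trusting a hard-coded "coin inserted" flag.
        return {"weight_g": random.gauss(5.67, 0.5)}  # US quarter ~5.67 g

class Dispenser:
    def act(self, decision):
        print("dispense can" if decision == "accept" else "return coin")

def think(observation):
    # Decide per instance: a fake quarter is caught, not assumed away.
    return "accept" if abs(observation["weight_g"] - 5.67) < 0.2 else "reject"

sensor, actuator = CoinSensor(), Dispenser()
for _ in range(3):            # the loop is what makes it robot-like:
    obs = sensor.sense()      # sense the environment,
    decision = think(obs)     # think about what it means,
    actuator.act(decision)    # act to affect the environment.
```

The loop captures the distinction she draws: sensing each instance fresh, deciding per instance, then acting back on the environment.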
And I think the semantics come in because before we had functional robots, we had fictional robots. So many of us are kids of the 70s, and we were really affected by Star Wars, and by Metropolis and things before that. And that defined robots in our minds. We get this picture stuck there; we want things to talk to us and hand us things.

How about a question from the audience? What's something you want to know about robots? Yeah, back here. [Audience question inaudible.]

Yep, right. That's actually a very great distinction. You know, the most effective prosthetic device you could imagine would be like an Inspector Gadget, a toolkit where you can lock in a variety of different end effectors, as we like to call them, and have a specific function for each one. The challenge from DARPA, though, was that years of evolution have crafted our only tool into something that looks like this. So our bodies must have gotten something right. If we can actually control a device that can move like our hand effectively, the thought is that that's the optimal device to interact with the environment, at least as a starting point. Now, there's nothing to say that you couldn't chuck off this hand and put on something that climbs rock better, right? So it comes back to thinking about human evolution and what our hands evolved for.

Yeah, but I'm thinking of Carla inventing a robot whose ears change color to show what it's thinking. That blew me away. That's being able to think beyond what the human body can do. Like, I wish my ears could do that. Maybe you can't see them.

Certainly, and along the lines of where the technology could go: with the modularity, the wrist interface is like a quick disconnect. So you could imagine a scenario where you chuck in superhuman-strength arms, or something like a rotary drill, for instance. Our wrists can't spin continuously, so we're sitting here turning a screwdriver; let me just chuck in my screwdriver and do the work that way.

I like that. And there's a question back here.

Thank you. Win Burleson, NYU X Labs. We are innovating in health and technology and prosthetics, and we're looking at bringing in broad communities of individuals to help understand what their needs are and how they can be participants in the advancement of their own solutions. That was mentioned earlier in your talks, and I wanted to go through each of you, if you could talk about the lessons you've learned in that process, what the opportunities for X Labs might be in this respect, and how best to embark on this agenda.

So, the whole aspect of participatory design, which wasn't a new term, though I thought I had invented it until I actually saw it on Google. Really, one of the things is that a lot of times, because what the user needs doesn't exist yet, asking open-ended questions doesn't give you the answers that you need. Ask, "What would you really like in the world?" and you get, "World peace."
So what really works is showing something and then asking: is this functional? No? Well, why not? Because it doesn't let me play the drums with my prosthetic. Oh, so how would you typically do that? It's really this whole process where you draw them in but keep the form open, and we find really good results that way.

Yeah, I mean, as a designer, I'd say participatory design is just very core to anything we do. And especially when you're talking about a dynamic system that is going to move or light up and change, the lab part of it is so valuable, because you can't show somebody a picture and have them understand all the things it will do.

Coming from my perspective, it's interesting, because I'm involved with users of all different flavors. On one project at APL, I might be involved with a military user, a soldier who is interacting with a robot through a game console. On the RP program, it's a user who's actually either wearing the limb or sitting next to it and trying to do something. Clearly, getting user feedback is important. One of the challenges is helping users understand, and being a filter yourself for, the difference between what the user wants or needs and what is actually possible to implement. That's always an active challenge, because if you sit a bunch of users down who want to interact with something, they'll give you 10 things that they want, three of which are probably technologically impossible, right? So you have to be gentle in the way you let them down when something isn't possible, but you also don't want to constrain yourself by assuming it's not possible, because there are a lot of things that are possible that might not seem so on the surface.

There's another question over here. Do you have a question? Yes, go ahead.

I hear you talking about the strength, but I noticed that in one of those videos, the person was trying to reach something, almost like working through a mirror. So how do you guide the hand closer to the object a little better? And then, how do the senses work? We can discern heat and cold; how important is that in terms of continued use?

It's extremely important, actually; without sensory feedback, these devices are a fraction of what they potentially could be, right? The next patient we're having come online in a number of months is actually going to have the sensory side as well; the patient you saw only had the motor decode side. So she was physically watching the limb system move and having to close the loop visually, right? She did not have any sensory feedback, but vision was good enough to allow her to do a lot of tasking. It could be better, though. And we're really just scratching the surface, from a neural implant standpoint, in understanding how those sensations actually feel to a human being. There has yet to be a human study that takes one of those small microelectrode arrays and stimulates sensory cortex; that's only been done in animal models. We're not exactly certain what it feels like. We know that responses can be elicited, but we don't know what that sensation is, what that afferent feels like to the person. And so it's gonna be a real eye-opener when we find out whether it actually feels like a texture, or just feels like a tickle in my ankle, right? Because if you get the mapping slightly wrong, the sensory map will be way off. So we're really just scratching the surface of how these technologies can be integrated. But to answer your question, it's extremely important, and it's sort of the deal breaker for how these devices are accepted by users. People talk about user acceptance; I feel that providing that sensory pathway back is going to help bolster user acceptance of these devices.
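For a feel of what that sensory encoding involves, here is a minimal sketch of one fingertip channel: a pressure reading is mapped to stimulation pulse parameters. The linear pressure-to-pulse-rate mapping, the parameter ranges, and the electrode naming are all invented for illustration; as he says, how stimulation should map onto felt sensation is still an open research question.

```python
# Sketch of sensory encoding: fingertip pressure -> stimulation parameters.
# The mapping below (pressure linearly scales pulse rate within fixed
# bounds) is a simplifying assumption, not a validated encoding model.

MAX_PRESSURE_KPA = 300.0                  # assumed fingertip sensor range
MIN_RATE_HZ, MAX_RATE_HZ = 20.0, 300.0    # assumed safe pulse-rate window

def encode_touch(pressure_kpa: float) -> dict:
    """Map one fingertip pressure reading to a stimulation command."""
    level = max(0.0, min(1.0, pressure_kpa / MAX_PRESSURE_KPA))
    if level == 0.0:
        return {"stimulate": False}
    return {
        "stimulate": True,
        "pulse_rate_hz": MIN_RATE_HZ + level * (MAX_RATE_HZ - MIN_RATE_HZ),
        "electrode": "index_fingertip",  # assumes a per-digit electrode map
    }

for p in (0.0, 50.0, 250.0):  # no contact -> light graze -> firm grasp
    print(p, encode_touch(p))
```

The hard part he describes is exactly what this sketch papers over: whether a given pulse rate on a given electrode feels like texture, pressure, or something else entirely.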
So my question is kind of about bridging your different topics. For instance, Carla, you spoke about eyebrows, and then they weren't on there, and the creepiness of the original robot. So it seems like what we have now are robots that we've designed to look like robots, based on what you were saying, kids of the 70s, et cetera. And on the other extreme, we're making prosthetics with 28 out of 30 possible sensors in the hand. So my question, which is to everyone, is really: where do you see us going in the future? Because there are two parallel paths, one making robots more human-like, and the other imposing a human vision of what a robot is and then having it interact with humans.

Can I just say something? I feel like what Matthew's talking about is the robot as an extension of us, and what you two are talking about is the robot as the other, right? As someone that will... As a sentient being in and of itself. Yeah, and those seem like two very different things. To what extent do we become the machine, through the pacemaker and the cochlear implant? That's a great question, right? And to what extent is it about the social life of machines that come to care for us and that we care for?

Well, when I begin a design project, I actually like to think of it as a continuum. Sometimes I've drawn this out as a chart where I have "friend" on one side and "prosthetic" on the other, and most products fall somewhere in the middle. Think of something like the Flip camera, which is designed to very easily sit at the end of your hand and take video really quickly with just your thumb. It's kind of a prosthetic, in a way: it needs to be an extension of our arm that we can pull out of our pocket really quickly and have do what we want it to do, in a really abstract way. Whereas there are other things, like an oven that warns you when you get too close because it's gonna be hot and burn you; that would be a friend, you know? And that's the way I think about the kinds of projects I work on.

I had a question for you, which was: I was really impressed by the idea of deliberately making the robot stupid to engage the child, so the child wants to make it smarter. That was so counterintuitive and so fascinating as a model of interaction, of human interaction. Do you do that with adults too? Because adults have such contempt for stupidity, but children, they wanna care for it.

They wanna care for it. But I think it goes with the creepiness feeling. For example, if you had a robot that looked too human, you wouldn't really wanna interact with it. I think the same if you had a robot that was too intelligent.

Jude Law. In that old movie.

They call it the uncanny valley, right?
Where you look at it and you know it's fake, but it's kind of creepy because it actually does look realistic. I can draw a parallel: we have a cosmetic cover device, and we had one that was designed to mimic as closely as possible what the human hand looks like, but it still couldn't match it, right? Because from day to day, the color of your skin changes. And so, we could have, but we decided not to make a chameleon cosmesis, right? So a lot of times we either forgo it or just make it a clear color, because then it's clearly not a human hand, and that removes the ambiguity.

Okay, any more questions? Yeah, back here.

Thank you for your speech. I feel so connected, and very nervous about that, because I'm an occupational therapist in Taiwan, and I just recently came to spend Chinese New Year with my sister, and she registered me for this talk. I am so touched. Our hospital has just introduced three kinds of robots: one for the lower extremity, one for the arm, and one for the hand. The one for the hand especially uses the sEMG signal for training, because we really believe in the motor relearning of the brain. And I really looked into them. We have seen much improvement in patients because of the repetitive training. However, we don't have the facility to look into the patient's EEG signal to see whether it is improving their brain or not. So I'd like to ask about that.

Great comments. One of the things we like to think about is taking a whole superset of signal possibilities. You can think of surface electromyography, as she was mentioning. You can also think about EEG, electroencephalography, which is exterior to the brain, whole-brain activity, sort of listening from the outside. And then you can think of these implantable devices that actually go into the brain and really listen to it. Maybe the best solution is a superset of all those technologies fused together, because certain technologies might be better, or more plastic, or more adaptable than others. And it's really on a patient-by-patient basis. Certain patients, like quadriplegics, can't really move their residual musculature, so surface electromyography is difficult. Whereas an amputee with a long residual limb might benefit from EEG plus surface electromyography, and, if proven out effectively, a wireless implantable device, if the regulatory approvals are granted for that device, which is under design right now. So a fusion of those input modalities could be the best solution.
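His point about fusing modalities on a patient-by-patient basis could look something like this sketch: each signal source a given patient can provide yields a probability distribution over motion classes, and the sources are combined with reliability weights. The weights, class set, and probability interface are all invented for illustration.

```python
import numpy as np

# Sketch of per-patient signal fusion: each available modality produces a
# probability vector over motion classes, and we combine whichever ones
# this patient has. Weights and classes are illustrative assumptions.

CLASSES = ["rest", "hand_open", "hand_close"]

RELIABILITY = {"semg": 0.7, "eeg": 0.4, "implant": 0.9}  # assumed weights

def fuse(predictions: dict) -> str:
    """predictions: modality name -> probability vector over CLASSES.
    Weighted log-linear fusion of the modalities that are present."""
    score = np.zeros(len(CLASSES))
    for modality, probs in predictions.items():
        score += RELIABILITY[modality] * np.log(np.asarray(probs) + 1e-9)
    return CLASSES[int(np.argmax(score))]

# A quadriplegic patient might contribute only EEG plus implant signals:
print(fuse({
    "eeg":     [0.4, 0.4, 0.2],
    "implant": [0.1, 0.8, 0.1],
}))  # -> "hand_open"
```

The design choice matches what he describes: the pipeline doesn't require every modality, it just weights and combines whatever a particular patient can give it.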
So I sense from the audience that basically we have a lot of robot lovers here, very sympathetic to robots. Is there anybody who's really freaked out, who says, let's stop this robot uprising before it happens? Any other points of view, skeptics, robot skeptics who would like to voice their fears for the future of humanity? Yes, thank you.

So what's interesting is that because robots are social, what happens, and we actually see this in our kids, is that you start translating that social interaction to others around you. We've actually found this with our children with autism: they'll interact socially with the robot and start to transfer that interaction to those around them. They learn to be social from the machine.

I'm an elderly woman whose mother died of Alzheimer's, which I expect to get myself pretty soon, but I live alone, and I am addicted to Facebook. I mean, it's really a serious problem. And it has stopped me from making myself go to the 92nd Street Y to be with other elderly people and such. So I love technology. I gotta tell you, I'm learning to code, because, as you are doing things for autistic children, I would like to find a way to do things of that sort for the elderly with Alzheimer's. It's a very time-consuming job to be the caregiver of someone with Alzheimer's, because you have to give them attention almost 100% of the time. And it would be good if you could give them an iPad, or some inexpensive version.

So you think a robot would be a good caretaker, and perhaps patient and amusing? Not to be left alone with the person, but yeah.

Well, there is actually some work on that. There is a socially interactive robot, in fact it's FDA-certified now, shaped like a seal, being used with patients with dementia, and they're actually showing increased engagement; they're starting to be a little more cognitively aware of their surroundings. So yes. Now, these are typically in nursing homes, so there is care around the patients, but they're also seeing that the residents are starting to socialize more among themselves, just because this companion is there.

And emoticons on Facebook don't do that. But they're good; we like them.

So we really have time for one more question, and then we're gonna say good night. Who would like to ask that? Yes, thank you.

So this is a question about building emotional connections with robots, specifically with products. I think it's interesting, with the vacuum cleaner example, the idea of creating a more emotive vacuum that you might build a bond with. And just thinking about the outcome of that: what is the responsibility of the designer, and where's the limit for emotional connection? What happens when your vacuum dies? What's the effect that that will have, having built this emotional connection?

Sorry, I just wanna add a quick corollary, to further ground the question, or make it a little tougher for us. So Jan in particular, the woman you saw controlling the limb through the neural implant, named her robot Hector, right? That shows some sort of emotional connection to the device. And she actually wrote us a very heartfelt email when the study was over, talking about how she couldn't imagine her life without having done this study; to have this opportunity, and to interact with Hector on a daily basis, was so great for her. She phrased it in those terms. And she talked about it as one of those things where, if I didn't have it, my life would be worse off. So it's like any relationship: you expect that at some point, maybe, it breaks down, but that doesn't preclude you or prevent you from forming those bonds.

Do you want to have the last word on that?

Yeah, I mean, I think there are a lot of moral issues to think about with the emotional connection. Particularly as product designers, we are often dependent on large manufacturers whose motivations may not necessarily be human good; they may be more related to dollar signs than to things that are about human good.
And so I think it's still a designer's responsibility to really think about that and, you know, hopefully act in the service of good. But I think we do have this power to create an emotional connection. There have been a lot of studies showing that, with products, people don't want to give up the one they have. Whether it's a vacuum cleaner, or, I have some friends who've been working on a weight-loss coach, and it would be an identical machine, but people want the one they had, because they had built a connection with it. And I think it comes from this combination of the physical and the computational. There's this great film, Robot and Frank, and in that film there's this really difficult moment of having to turn off the robot after the robot has become the friend of the main character, and has remembered things they've done together, and has learned from him, and he has... I won't spoil it for you.

I don't know if I would ever feel that way about a vacuum cleaner, like, no matter what. What if your vacuum cleaner talked to you? Well, he does.

Okay, well, that has been really great. You guys are amazing. And you can come back and see it on YouTube; we will be the next YouTube sensation. Thank you all for coming to Cooper Hewitt.