Welcome to ASU KED Talks, the podcast. I'm your host, Diane Boudreau, and I'm here today with Nancy Cook, an ASU professor of human systems engineering. Nancy studies teamwork among humans, robots, and artificial intelligence. Welcome, Nancy.

Hi. Thanks for having me.

So first of all, what is human systems engineering?

Wow. That's a good question. It's the bringing together of psychology and engineering. I say it's the marriage of psychology and engineering, but it's designing systems, devices, machines, robots, anything that a human might come in contact with, to be more compatible with human capabilities and limitations.

Okay. And it used to be called human factors. Is that correct?

Human factors is another word for it. We named our program human systems engineering because we really take a systems approach to our work. We like the word systems, and we're in an engineering college, so it was appropriate. But it's really a synonym.

And how did this field come into being?

In World War I, there were a lot of accidents with fighter planes. A lot of them happened because there were controls that looked the same, like levers that looked the same, but one of them let down the flaps and the other did something else. Pilots would mistakenly pull the wrong lever, and it soon became clear that we needed to do some work in this area. So human factors really was born in aviation, in wartime.

Okay. And recently you gave a talk about how human factors can change the world in terms of social issues. Can you give us an example of that?

Yeah. I think human factors is much bigger than when it was born back in the time of aviation, when we were dealing with planes, and then it soon became associated with knobs and dials. Nowadays people even think that what we do is design office chairs. And it's much more than that. It's much bigger than that.
There are human factors people working in medicine and in transportation and in computing and user experience. We're all over the place. But I think it's even bigger than that, and that's what my talk was about: it's about social systems. They're also systems that can be impacted by human factors, because they are systems. We sometimes call them socio-technical systems, systems of people and technology, lots of it working together.

The example I used in my talk of this kind of social system transformation, one that could be partly impacted by human factors, is the city of Medellin, Colombia. I went there for a conference one year and was taken on a tour of the city and told the story of this amazing transformation. The city was, well, it's kind of the opposite of Beverly Hills. The poorest people live on the top of the mountain, and the kids build houses on top of their parents' houses, so they're kind of stacked barrio houses on the mountain. They were divided from the richer people at the bottom, in the valley, geographically, culturally, and by education.

And it was not only that. It was the murder capital of the world at one time, due to Pablo Escobar. So not only did you have this geographical divide, but you had rampant murder, drug cartels, guerrilla warfare and all of that, and a lot of gang warfare as well. The people on the top of the hill were not happy.

So the World Bank and others came together to address this divide, and they decided to do a few things. They made a really rich transportation system: cable cars to connect the top of the mountain to the bottom, and even escalators running up and down the mountain, which, interestingly, I saw dogs and cats using. So now, instead of a two-and-a-half-hour walk down the mountain to where your job is or where the grocery store is, it takes people five or ten minutes.
There's also a light rail system. And they started putting cultural centers up on top of the mountain. They built what they called library parks. It looks kind of like a school, but it's a community center where people of all ages can come together. They can have events there. They can get on the internet. They can learn how to do resumes. So it's bringing education and culture to the people who are on the top of the mountain.

And this changed things dramatically. The murder rate went way down. People on the top of the mountain started feeling good about themselves. They started painting beautiful graffiti murals. It's really just transformed the whole city of Medellin.

Now, I tried to find out whether human factors was involved. When you think about the transportation system, that's a system of technology that unites people, so they should have been involved. And one group that was involved was the gang members themselves. They participated in this transformation. That tells me that they were taking the user into account, which is what we like to do in human factors.

And your current research focuses on teamwork between robots and humans and artificial intelligence. What would these kinds of teams be used for?

Well, in all walks of life: from Amazon distribution centers, where you have people working alongside robots that are picking up boxes, to transportation systems, autonomous vehicles on the road, autonomous trucks carrying people or goods, to medicine and robotic surgery, and then the military. In all walks of life, you see humans and robots starting to come together, humans and AI agents starting to come together to do work.

Okay. So what are some of the biggest challenges in getting those teams to work effectively?

Yeah. So just putting them in the room together is not enough. You need to really think about what the appropriate roles of each of the agents are.
Who does what best? We don't want to replicate ourselves in robots, because we do some things very well, but there are some things we can't do, and that's what the robot should be doing. We also need to figure out the best way for robots, humans, and AI agents to communicate, what would work best. Natural language may be overkill for some of what we need to do, so maybe we need something more like signaling, like in human-dog teaming.

So tell us a little bit about the research that you're doing right now on teamwork between humans and robots and artificial intelligence.

Okay. In one of my labs, we have a setup where we typically have three people controlling a single simulated uninhabited aerial vehicle, an autonomous vehicle. So they're controlling automation, basically. We've collected data in this lab on all-human teams for many, many years, probably nearly 20. In recent years, we've put an intelligent AI agent, developed by the Air Force Research Lab, into the pilot's seat of that simulator. So now the pilot is a synthetic agent working with two humans. One of them is a sensor operator, or photographer, and the other is basically a navigator. The task requires them to interact, to communicate, to coordinate, to take pictures of ground targets. That's their job as a team, and they get scored on how well they do that job as a team.

It's interesting to then compare the teams of all humans to teams of two humans with this one agent. We wanted to treat this research kind of like a Turing test, to find out what the essential aspects of teamwork are that people may be privy to and the agent isn't. And indeed, the agent, in its first rendition, turned out not to be a very good team player.

Why is that?

The agent did not anticipate the information needs of others. When people come onto a team, even into our lab, they know certain things about being on a team.
They know there are probably going to be other people there, that they probably are going to need something from those people, and that, in turn, they're going to have to give something back to those people. The synthetic agent acted like it was the only agent in the room. It constantly asked for the information that it needed, and in order to get the information that the human teammates needed, they had to directly ask for it. So we say that the agent pulled more information than it pushed. It didn't anticipate the needs of others. Eventually, when you get to be good at this task, you know that somebody's going to need a piece of information at a particular time, and you give it in advance so they can have it when they need it. The agent did not do this.

So how did the people respond to that?

The people, interestingly, started becoming more like the agent. They stopped sharing information too. They stopped pushing information. And soon enough, everybody was pulling information. As a result, the human-agent team wasn't very good, especially when it came to difficult situations or novel events, what we call perturbations. They weren't as coordinated or flexible or adaptive as the all-human teams were.

When you think about communication with AI, too, I keep thinking about Siri or Alexa. I know some people will say, oh, Siri hates me. And there's this tendency, I think, to personify some of our technology. When you see people working with artificial intelligence, do you see them projecting human qualities onto it? And is that problematic, or is that good?

Yeah. So that's interesting. We're looking at that in one of our studies. It's called anthropomorphism: projecting qualities of humans onto these robots, these artificial entities. And we see people doing it. In fact, I hear that soldiers sometimes don't want to send their robots into harm's way because they get kind of attached to them. So people do get attached.
And I think that's a little bit problematic in the case of humanoid robots, robots that have human characteristics physically. They're kind of creepy, because they're not completely human; you can't fool someone into thinking it's not a robot. But that might be problematic because, although people may like it, it suggests that the robot has qualities or capabilities that it may not have, like the ability to really sense your feelings or to understand emotion. And they're working on that too, AI that understands emotion. But I question why we need to be best friends with our robots.

At one point in our discussion, I think you mentioned that sometimes the humans would start treating the AI badly. They would start just sort of barking commands at it and things like that. And I'm a little bit curious. Is that necessarily a bad thing if you know that you're working with an AI? Because you know that it's not going to get its feelings hurt. Should humans have to modulate their behavior and treat AI more like people? Or do you think it's okay that we treat different agents differently?

Yeah. So with the anthropomorphizing thing, you are treating it more like people. And in some cases, they don't. In one experiment, where I had three people come into the lab in that same simulator for unmanned aerial vehicles, I told two of the people that they were interacting with an agent as a pilot, when it was really just an unsuspecting participant. And they treated that participant differently. They barked more orders at it. They kept it out of the loop more. They didn't really treat it as a teammate. What that tells me is that people aren't ready to treat agents like that as teammates. They still want to do what we call supervisory control. They want to be in charge and tell them what to do. We see that a lot across our different experiments: people want to tell autonomy what to do.
And that will stand in the way of teaming. So that's another issue that we have to deal with, and a lot of it is people's attitudes and trust in the autonomy.

Another condition that we ran in that same experiment, we call the experimenter condition. Instead of putting the synthetic agent in the pilot seat, we put a very skilled experimenter from my lab in that seat who knew how to do the pilot job. The humans in the experiment were told that they were working with a synthetic agent. But this experimenter, posing as a synthetic agent, would push and pull information and kind of model how to do that. So when information wasn't coming, the experimenter would ask for it, modeling how to do this coordination, pushing and pulling in a timely manner. And those teams, in contrast to the agent teams, were very, very good. They were better than all-human teams. They were better at coordinating, more adaptive, just from this subtle pushing-and-pulling coaching that the experimenter was doing. So in that case, the two humans became more like the experimenter and trained in that direction, showing that if you put one really good agent on a team, you can also raise the team up.

So if we had autonomous agents that were really, really good, people could learn from them potentially.

Yes.

Do you think that the reverse could happen, that the AI could learn from people if we can develop it well enough?

I think, yeah, well, that's the idea with machine learning. The AI could learn, just like a person learns, how to do a particular part of the team task.

Okay, changing course a little bit here. When we met previously, you mentioned having a pretty unique hobby of hot air ballooning. Can you tell us how you got into that?

Through my current husband, actually. He was at a hot air balloon rally with one of my girlfriends, and she was going to stop by my house, at the time in New Mexico. It was on a weekend.
We were all going to do something together, and so I met him through her and started hot air ballooning with him. I had my first hot air balloon ride over White Sands National Monument, and that was pretty incredible. I've been hooked ever since. I'm not a pilot, but I'm crew.

And do you own your own balloon? I mean, how does that work?

In fact, it's my husband's balloon. He actually made the balloon. He has a friend in Georgia named Tarp Head who is a manufacturer of balloons, and he went to visit him years ago, wove the basket, and made the balloon.

Wow. He's a good seamstress.

Wow. That's amazing. What struck me as interesting about it was, when you were talking about it, you said that it is also a team activity. I never thought about it in that light, but of course, with something that huge, you're going to need a team. Can you tell us a little bit about how that works?

Yeah. So you can't do ballooning without a chase crew, because you need somebody to be there when you land. You need help assembling the balloon and taking it down, at least with these larger balloons. There are single-person balloons with just a little chair, but that's not what I'm talking about. You need a team of people to assemble the balloon. And it's kind of an interesting team task, because there's a lot of noise. There's a burner and a fan at one end of the balloon. The basket is laid down when you're trying to inflate it. The pilot is in the overturned basket, and you have a crown line at the other end, at the top of the balloon, trying to hold it steady while it's inflating. So there has to be some coordination between the person at the top of the balloon and the pilot. They have to understand how things are supposed to work and how fast this is supposed to go up, because a lot of mistakes could be made. And the rest of the crew has to know their jobs, know what to do. It's a pretty interesting team task, actually.
Do you have any great stories, or interesting stories, of times maybe the teamwork didn't work out so well?

Yes, so this is kind of an interesting cultural story. We were ballooning in France, which is a fantastic thing to do. I've done it a couple of times now, over the wine regions of France. You go ballooning and land in farmers' yards and have great after-ballooning parties with some of their homemade bread and cheese and wine. It's fabulous. But this one time, we had an all-French, all-male crew, and I had my six-month-old daughter with me in the chase vehicle, and she was colicky. She was crying all the time. I don't know French that well, so I had a French dictionary and I was trying to translate. My husband's in the balloon with a radio, talking to me. I'm translating back to the crew. I have a map that I'm looking at, talking to my husband about the map, trying to keep the colicky daughter settled down. The French guys were yelling at me that we had to take the baby to a hospital, and I'm like, no, no. But then when I said we needed to go, they said, no, we're going to have a picnic now. I said, but you know, my husband's telling us to go, and they didn't care. They would also go right when I said we needed to go left. I thought maybe my French was just that bad, but it turned out that they really didn't like taking orders from a woman.

Oh my gosh.

So my husband kind of read them the riot act, and then things got better, and the baby didn't go to a hospital, and we all got along. They were very nice. Cultural differences in teamwork are kind of interesting.

So coming back to teamwork with robots and AI, do you think that incorporating some of these artificial agents into teams will help with some of those biases and cultural differences? Or do you think that they might get incorporated into the AI as well?

Yeah, that's interesting too.
So you could consider the AI or the robots themselves to have their own culture, right? They may have their own kinds of characteristics, which are certainly not always like humans'. You can also see the AI learning from the human teammates, so it might pick up on some of the characteristics of those humans. In our lab we see what we call entrainment, where we have an AI agent that's not much of a team player, really kind of selfish, that doesn't share information as much as it should, and the opposite happens: the humans start becoming like the synthetic agent. So they definitely can have an influence on a group, and it's kind of scary to think about why the humans would model themselves after a synthetic agent.

That is interesting, and I'm glad that you're thinking about it. Well, I appreciate you joining us today. Thank you very much for being here.

Thanks for having me. I've enjoyed it.

If you're interested in more from Nancy Cook, watch the ASU KED Talks video at research.asu.edu slash KED Talks. Subscribe to our podcast through your favorite podcast directory, and find us on Facebook and Twitter at ASU Research.