From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. Hello, and welcome to the CUBE studios here in Palo Alto, California. We have a special Around the CUBE segment, Unpacking AI. This is a Get Smart series. We have three great guests: Rajen Sheth, VP of AI and Product Management at Google, who leads all the AI development for Google Cloud; Dr. Kate Darling, research specialist at MIT Media Lab; and Professor Barry O'Sullivan, Director of the SFI Centre for Research Training in AI, University College Cork, Ireland. Thanks for coming on, everyone. Let's get right to it. Ethics in AI. As AI becomes mainstream and moves out of the labs and the computer science world to mainstream impact, the conversations are about ethics. And this is a huge conversation, but the first thing people want to know is: what is AI? What is the definition of AI? How should people look at AI? Let's start there. Rajen. So I think the way I would define AI is any way that you can make a computer intelligent enough to do tasks that typically people used to do. And what's interesting is that AI is something, of course, that's been around for a very long time in many different forms, everything from expert systems in the 90s all the way through to neural networks now. And things like machine learning, for example: people often get confused between AI and machine learning. I would think of it almost the way you would think of physics and calculus. Machine learning is the current best way to use AI in the industry. Kate, your definition of AI, do you have one? Well, I find it interesting that there's no really good universal definition. And I would agree with Rajen that right now we're using kind of a narrow definition when we talk about AI, but the proper definition is probably much broader than that. Probably something like a computer system that can make decisions independent of human input.
Professor Barry, your take on the definition of AI. Is there one? What's a good definition? Well, yeah, so I think AI has been around for 70 years and we still haven't agreed on a definition for it, as Kate said. And I think that's one of the very interesting things. I suppose it's really about making machines act and behave rationally in the world, ideally autonomously, so without human intervention. But I suppose these days AI is really focused on achieving human-level performance in very narrowly defined tasks: game playing, recommender systems, planning. So we do those in isolation. We don't tend to put them together to create this sort of fabled artificial general intelligence. I think that's something we don't tend to focus on at all, actually. Okay, so the point is that AI is kind of elusive; it's changing, it's evolving. It's been around for a while, as you guys pointed out, but now it's on everyone's mind. We see the news every day from Facebook, a technology platform with billions of people; AI was supposed to solve the problem there. We see new workloads being developed with cloud computing, where AI is a critical software component of all this. But that's a geeky world. In the real world, there's an ethical conversation, and not a lot of computer scientists have taken ethics classes. So who decides what's ethical with AI? Professor Barry, let's start with you. Where are we going to start with ethics? Yeah, sure. So one of the things I do is I'm the vice chair of the European Commission's High-Level Expert Group on Artificial Intelligence. And this year we published the Ethics Guidelines for Trustworthy AI in Europe, which is all about, you know, setting an ethical standard for what AI is. You're right, you know, computer scientists don't take ethics classes.
I suppose what we are faced with here is a technology that's so pervasive in our lives that we really do need to think carefully about the impact of that technology on human agency, on human wellbeing, on societal wellbeing. So I think it's right and proper that we're talking about ethics at this moment in time. But of course we do need to realize that ethics is not a panacea, right? So it's certainly something we need to talk about, but it's not going to rid us of all of the detrimental applications or usages of AI that we might see today. Kate, your take on ethics. Start all over, throw out everything, build on it. What do we do? Well, what we do is we get more interdisciplinary, right? I mean, because you asked who decides: until now it has been the people building the technology who have had to make some calls on ethics. And it's not necessarily the way of thinking that they are trained in. And so it makes a lot of sense to have projects like the one that Barry is involved in, where you bring together people from different areas. We have to decide issues of responsibility for harm. We have to look at algorithmic bias. We have to look at supplementing versus replacing human labor. We have to look at privacy and data security. We have to look at the things that I'm interested in, like the ways that people anthropomorphize the technology and use it in a way that's perhaps different than intended. So depending on what issue we're looking at, we need to draw from a variety of disciplines. And fortunately, we're seeing more support for this within companies and within universities as well. Rajen, your take on this. So I think one thing that's interesting is to step back and understand why this moment is so compelling and why it's so important for us to understand this right now. And the reason for that is that this is the moment where AI is starting to have an impact on the everyday person.
Any time I present, I put up a slide of the Mosaic browser from '94. And my point is that that's where AI is today. It's at the very beginning stages of how it can impact people, even though it's been around for 70 years. And what's interesting about ethics is that we have an opportunity to get it right from the beginning, right now. I think there's a lot that you can bring in from the way that we think about ethics overall. And so for example, in our company, can you all hear me? Yep. Okay, we've hired an ethicist within our company from a university to actually bring the general principles of ethics into AI. But I also do think that things are different here. So for example, bias is an ethical problem. However, bias can actually be given more legitimacy when it's encoded in algorithms. So those are things that we really need to watch out for, where I think it is a little bit different and a little bit more interesting. This is a great point. Okay, go ahead. What's your point? Yeah, just one interesting thing to bear in mind, and I think Kate said this and I just want to echo it, is that AI is becoming extremely multidisciplinary. And I think it's no longer just a technical issue. Obviously there are massive technical challenges, but it's now become as much an opportunity for people in the social sciences, the humanities, ethics people, legal people. They need to understand AI. And in fact, I gave a talk recently at a legal symposium about the idea of this sort of parallel track of people who have both legal expertise and AI expertise. I think that's a really fantastic opportunity that we need to bear in mind. So unfortunately, us nerds, we don't own AI anymore. It's something we need to interact with the real world on in a significant way. You know, I want to ask a question, because everyone talks about the algorithms and the bias and all that stuff. It's totally relevant. Great points on interdisciplinarity.
But there's a human component here. As AI starts to infiltrate the culture and hit everyday life, the reaction to AI sometimes can be, whoa, my job's going to get automated away. So I've got to ask you guys: as we deal with AI, is how we deal with it a reflection of our own humanity? Because how we deal with AI from an ethics standpoint ultimately is a reflection of our own humanity. Your thoughts on this. Rajen, we'll start. I mean, it is. Oh, sorry. Rajen? So I think it is. And I think there are three big issues that I see that are reflective of ethics in general, but also really particular to AI. So there's bias. Bias is an overall ethical issue that I think is particular here. There's what you mentioned, the future of work. What does the workforce look like 10 years from now? That changes quite a bit over time. If you look at the workforce now versus 30 years ago, it's quite a bit different, and AI will change that radically over the next 10 years. The other thing is what is good use of AI and what's bad use of AI. And I think one thing we've discovered is that there's probably 10% of things that are clearly bad, 10% of things that are clearly good, and 80% of things in that gray area in between, where it's up to kind of your personal view. And that's the really, really tough part about all of this. Kate, weigh in. Well, I'm actually going to push back a little, not on Rajen, but on the question, because I think one of the fallacies that we are constantly engaging in is comparing artificial intelligence to human intelligence and robots to people. And we're failing to acknowledge sufficiently that AI has a very different skill set than a person. So I think it makes more sense to look at different analogies, for example, how have we used and integrated animals in the past to help us with work?
And that lets us see that the answer to questions like, will AI disrupt labor markets? Will it change infrastructures and efficiencies? The answer is yes, but will it be a one-to-one replacement of people? No. That said, I do think that AI is a really interesting mirror that we're holding up to ourselves to answer certain questions, like, what is our definition of fairness, for example? We want algorithms to be fair. We want to program ethics into machines. And what it's really showing us is that we don't have good definitions of what these things are, even though we thought we did. All right, Professor Barry, your thoughts. Yeah, there are many points one could make here. I suppose the first thing is that we should be seeing AI not as a replacement technology, but as an assistive technology. It's here to help us in all sorts of ways, to make us more productive and to make us more accurate in how we carry out certain tasks. I think, absolutely, the labor force will be transformed in the future, but there isn't going to be massive job loss. Technology has always changed how we work and play and interact with each other. Look at the smartphone. The smartphone is 12 years old. We never imagined in 2007 that our world would be the way it is today. So technology transforms very subtly over long periods of time, and that's how it should be. I think we shouldn't fear AI. I think the thing we should fear most, in fact, is not artificial intelligence, but actual stupidity. So I would encourage people not to think negatively about AI. It's very easy to talk and think negatively about it precisely because it is such an impactful and promising technology, but I think we need to keep it real a little bit, right? There's a lot of hype around AI that we need to sort of see through and understand what's real and what's not, and that's really one of the challenges we have to face.
And I suppose one of the big challenges we have is how do we educate the ordinary person on the street to understand what AI is, what it is capable of, when it can be trusted and when it cannot be trusted. Ethics gets us some of the way there, but it does not get us all of the way there. We need good old-fashioned, solid engineering to make people trust in the system. That's a great point. Ethics is kind of a reflection in that mirror; I love that. Kate, I want to get to that animal comment about domesticating technology, but I want to stay on this culture question for a minute. If you look at some of the major tech companies, like Microsoft and others, the employees are revolting around their use of AI in certain use cases. It's a knee-jerk reaction: oh my God, we're using AI, we're harming the world. So we live in a culture now that's becoming more mission-driven; there's a cultural impact. And to your point about not fearing AI, are people having a certain knee-jerk reaction to AI? Because you're seeing cultures inside tech companies, and in society, taking an opinion on AI. Oh my God, it's definitely bad, our company's doing it, we should not service those contracts; or, maybe I shouldn't buy that product because it's listening to me. So there's a general fear. Does this impact the ethical conversation? How do you guys see this? Because this is an interplay that we see that's personal; it's a human reaction. Yeah, so if I may start off, I suppose, absolutely, the ethics debate is a critical one and people are certainly fearful. There is this sort of polarization in the debate about good AI and bad AI, but AI is a good technology. It's one of these dual-use technologies. It can be applied in bad situations, in ways that we would prefer it wasn't, and it can also be a force for tremendous good. So we need to think about the regulation of AI: what we want to do from a legal point of view, who is responsible, where does liability lie?
We also need to think about what our ethical framework is, and of course there is no international agreement on what that is; there is no universal code of ethics. So this is something that's very, very heavily contextualized. But I think we generally agree that we want to promote human wellbeing, we want to have a prosperous society, and we want to protect the wellbeing of society. We don't want technology to impact society in any negative way. It's actually very funny: if you look back about 25, 30 years ago, there was a technology where people were concerned that privacy was going to be a thing of the past, that computer systems were going to be tremendously biased because data was going to be incomplete and not representative, and there was a huge concern that good old-fashioned databases were going to be the technology that would destroy the fabric of society. That didn't happen. And I don't think AI is going to do that either. Kate? Yeah, we've seen a lot of technology panic in the past that may or may not have been warranted. I think that AI and robotics suffer from a specific problem: people are influenced by science fiction and pop culture when they're thinking about the technology. And I feel like that can cause people to worry about some things that perhaps aren't the things we should be worrying about currently. Robots taking jobs, or artificial superintelligence taking over and killing us all, maybe aren't the main concerns we should have right now. But algorithmic bias, for example, is a real thing, right? We see systems using data sets that disadvantage women or people of color, and yet the use of AI is seen as neutral, even though it's entrenching existing biases. Or privacy and data security, right? You have technologies that are collecting massive amounts of data, because the way that machine learning works is you use lots of data. And so there's a lot of incentive to collect data.
As a consumer, there's not a lot of incentive for me to want to curb that, because I want the device to listen to me and to be able to perform better. And so the question is, who is thinking about consumer protection in this space, when all of the incentives are to collect and use as much data as possible? And so I do think there is a certain amount of concern that is warranted, and where there are problems, I endorse people revolting, right? But I do think that we are sometimes a little bit skewed in our understanding of where we currently are with the technology and what the actual problems are right now. Rajen, I want to get your thoughts on this. Education is key. As you guys are talking about this and the things to pay attention to, how do you educate people on how to shape AI for good and, at the same time, calm the fears of people revolting around misinformation or bad data about what could be? Well, I think the key thing here is to organize how you evaluate this. Back to one thing I was saying a little bit earlier, it's very tough to judge what is good and what is bad; it's really up to personal perception. But the more that you organize how to evaluate this, and then figure out ways to govern it, the easier it gets to evaluate what's in or out. So one thing that we did was create a set of AI principles, where we codified what we think AI should do, and then we codified areas that we would not go into as a company. The thing is, it's very high level. It's kind of like the constitution. And when you have something like the constitution, you have to get down to actual laws of what you would do. It's very hard to bucket things and say, these are good use cases, these are bad use cases. But what we now have is a process around how we actually take things that are coming in and figure out how to evaluate them.
The last thing I'll mention is that I think it's very important to have many, many different viewpoints on it. Have viewpoints of people that are looking at it from a business perspective, and people that are looking at it from a research and ethics perspective, and have them all evaluate it together. And that's really what we've tried to create, to be able to evaluate things as they come in. Well, I love that constitution angle. We'll have that as our final question in a minute: do we do a reset or not? But I want to get to the point that Kate mentioned. Kate, you're doing research around robotics. And robotics is surging in popularity; high schools have varsity robotics teams now. With software and technology advances, robotics is really becoming kind of a playful illustration of computer technology and software, where AI is playing a role, and you're doing a lot of work there. But as intelligence comes into, say, robotics or software or AI, there's a human reaction to all this. So there's a psychology to the interaction with AI and robotics. Can you guys share your thoughts on the human interaction with this technology? As people stare at their phones today, there could be relationships in the future, and I think robotics might be a signal. You mentioned domesticating animals as an example from the early days of our society. Now we all have pets. Are we going to have robots as pets? Are we going to have AI pets? Is this kind of the human relationship? Okay, go ahead. Share your thoughts. So, okay, the thing that I love about robots, and in some applications AI as well, is that people will treat these technologies like they're alive, even though they know that they're just machines. And part of that is, again, the influence of science fiction and pop culture that kind of primes us to do this. Part of it is the novelty of the technology moving into shared spaces.
But then there's this actual biological element to it, where we have this inherent tendency to anthropomorphize, to project human-like traits, behaviors, and qualities onto non-humans. And robots lend themselves really well to that, because our brains are constantly scanning our environments and trying to separate things into objects and agents. And robots move like agents. We are evolutionarily hardwired to project intent onto autonomous movement in our physical space. And this is why I love robots in particular as an AI use case, because people end up treating robots totally differently. People will name their Roomba vacuum cleaner and feel bad for it when it gets stuck, which they would never do with their normal vacuum cleaner, right? So this anthropomorphization, I think, makes a huge difference when you're trying to integrate the technology, because it can have negative effects. It can lead to inefficiencies or even dangerous situations, for example, if you're using robots in the military as tools and the operators are treating them like pets instead of devices. But then there are also some really fantastic use cases in health and education that rely specifically on this socialization of the robot. You can use a robot as a replacement for animal therapy where you can't use real animals. We're seeing great results in therapy with autistic children, engaging them in ways that we haven't seen previously. So there are a lot of really cool ways that we can make this work for us as well. Barry, your thoughts. Did you ever think we'd be adopting AI as pets someday? Oh, yeah, absolutely. Like Kate, I'm very excited about all of this too, and I agree with everything Kate has said. Of course, coming back to the remark you made at the beginning about people putting their faces in their smartphones all the time.
We can't crowdsource our sense of dignity, and we can't have social media as the currency for how we value our lives or how we compare ourselves with others. So we do have to be careful here. One of the really nice examples of an AI system that was given some significant personality was quite recent: Tuomas Sandholm and others at Carnegie Mellon produced the Libratus poker-playing bot. This AI system was playing against top-class Texas Hold'em players, and all of these players were attributing a level of cunning and sophistication and mischief to this AI system that it clearly didn't have, because it was essentially just trying to behave rationally. But we do like to project human characteristics onto AI systems. And I think what would be very, very nice, and something we need to be very, very careful of, is that when AI systems are around us, and particularly robots, we do need to treat them with respect. What I mean is, we should make sure that we treat those things that are serving society in as positive and nice a way as possible. I do judge people on how they interact with the least advantaged people in society, and by golly, I will judge you on how you interact with a robot. We've actually done some research on that, where we've shown that if you're low empathy, you're more willing to hit a robot, especially if it has a name. I love all my equipment here. Oh, yeah. I've got to tell you, it's all beautiful. Rajen, computer science, and now AI, is having this kind of humanization impact. This is an interesting shift. This is not what we studied in computer science. We were writing code; we were going to automate things. Now there are notions of not just math, but cognition, human relations. What are your thoughts on this? Yeah, you know, what's interesting is that I think ultimately it boils down to the user experience.
And I think there is a part of this which is around humanization, but ultimately it boils down to: what are you trying to do, and how well are you doing it with this technology? I think that example around the Roomba is very interesting, where people kind of feel like it's almost a person. But I think we should also focus on what the technology is doing and what practical impact it has. My best example of this is Google Photos. My whole family uses Google Photos, and they don't know that underlying it is some of the most powerful AI in the world. All they know is that they can find pictures of their kids and grandkids on the beach anytime they want. And so ultimately I think it boils down to what the AI is doing for people and how well it's doing it. Yeah, expectations become the new user experience. I love that. Okay, guys, final question. We've talked about the humanization and the robotics, but also the ethics here. Ethics reminds me of the security debate in the old days: do you increase the security, or do you throw it all away and start over? So with this idea of how you figure out ethics in today's modern society, with AI being a mirror: do we throw it all away and do a do-over? Can we recast this? Can we start over? Do we augment? What's the approach that you guys see that we might need to go through right now to not hold back AI, but let it continue to grow, accelerate, educate people, and bring value and user experience to the table? What is the path? We'll start with Barry, then Kate, and then Rajen. I guess I can kick off. I think ethics gets us some of the way there, right? So obviously we need to have a set of principles that we sign up to and agree upon. And there are literally hundreds of documents on AI ethics. In Europe, for example, I think there are 128 different policy documents around AI ethics.
But we have to think about how we are going to actually make this happen in the real world. And I think if you take the aviation industry, we trust in airplanes because we understand that they're built to the highest standards, that they're tested rigorously, and that the organizations building them are held accountable when things go wrong. And I think we need to do something similar in AI. We need good, strong engineering. Ethics is fantastic, and I'm a strong believer in ethical codes, but we do need to make it practical. And we do need to figure out a way of having the public trust in AI systems, and that comes back to education. So I think we need the general public, and indeed ourselves, to be a little more cynical and questioning when we hear stories in the media about AI, because a lot of it is hyped. And that's because researchers want to describe their research in an exciting way, but also because media people want to have a sticky subject. But I think we do need a society that can look at these technologies and really critique them and understand what's being said. And I think a healthy dose of cynicism is not going to do us any harm. So, modernization: do you change the ethical definition? Kate, what are your thoughts on all this? Well, I love that Barry brought up the aviation industry, because I think that right now we're kind of an industry in its infancy. But we can look at how other industries have evolved to deal with some thorny ethical issues. For example, medicine: medicine had to develop a whole code of ethics and a bunch of standards. If you look at aviation or other transportation industries, they've had to deal with a lot of things like public perception of what the technology can and can't do.
And so you look at the growing pains that those industries have gone through, and then you add in some modern insight about interdisciplinarity, about diversity in tech development generally, getting people together who have different life experiences when you're developing the technology. And I think we don't have to reinvent the wheel here. Yeah. Rajen, your thoughts on the path forward. Throw it all away? Rebuild? What do we do? Yeah, so I think this is a really interesting one, because of all the technologies I've worked on in my career, everything from the internet to mobile to virtualization, this probably has the most powerful potential for human good. The potential of what AI can do is greater than almost anything else out there. However, I do think that people's perception of what it's going to do is a little bit skewed. When people think of AI, they think of self-driving cars and robots and things like that. And that's not the reality of what AI is today. And so I think two things are important. One is to actually look at the reality of what AI is doing today and where it impacts people's lives. How does it personalize customer interactions? How does it make things more efficient? How do we spot things that we were never able to spot before? Start there, and then apply the ethics that we've already known for years and years, but adapt it in a way that actually makes sense for this technology. Okay, that's it for Around the CUBE. Tallying it up: Professor Barry, 11 points, third place; Kate, second place with 13; Rajen, with 16 points, you're the winner. So you get the last word on the segment here. Share your final thoughts on this panel. Well, I think it's really important that we're having this conversation right now.
I think back to 1994, when the internet first started. People did not have this conversation nearly as much at that point, and the internet has done some amazing things, but there have also been some bad side effects. With this, if we have this conversation now, we have the opportunity to shape this technology in a very, very positive way as we go forward. Thank you so much. And thanks to everyone for taking the time to come in. All the way from Cork, Ireland, Professor Barry O'Sullivan; Dr. Kate Darling, doing some amazing research at MIT on robotics and human psychology, with a new book coming out. Kate, thanks for coming on. And Rajen, thanks for winning and sharing your thoughts. Thanks everyone for coming. This is Around the CUBE, our Unpacking AI segment on ethics, human interaction, and societal impact. I'm John Furrier with theCUBE. Thanks for watching.