From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation.

Okay, welcome everyone to this special CUBE Conversation, our Around the CUBE segment, Unpacking AI number two, sponsored by Juniper Networks. We've got a great lineup here to go around the CUBE and unpack AI. We have Ken Jennings, all-time Jeopardy champion — celebrity, great story there, we'll dig into that; John Hinson, director of AI at Evotek; and Charna Parkey, applied scientist at Textio. Thanks for joining us here for Around the CUBE, Unpacking AI. Appreciate it.

First question, I want to go to Ken. You're notable for being beaten by a machine on Jeopardy — everyone knows that story — but it really brings out the question of AI and the role AI is playing in society around obsolescence. We've been hearing gloom and doom around AI replacing people's jobs, and it's really not that way. What's your take on AI replacing people's jobs?

You know, I'm not an economist, so I can't speak to how easy it's going to be to retrain and reskill tens of millions of people once these clerical and food prep and driving and whatever jobs go away. But I can definitely speak to the personal feeling of being in that situation — watching the machine take your job on the assembly line and realizing that the thing you thought made you special no longer exists. If IBM throws enough money at it, your skill is now essentially obsolete. And it was a disconcerting feeling. I think what people need is to feel like they matter, and that went away for me very quickly when I realized that a black rectangle could now beat me at a game show.

Okay, John, what's your take on AI replacing jobs? What's your view on this?

Yeah, so I think, you know, look, we're all going to have to adapt. There are a lot of changes coming — socially, economically, politically.
I think it's a disservice to us all to get too indulgent around the idea that these things are going to change everything on their own. We have to absorb these things. We have to be really smart about how we approach them. We have to be very open-minded about how they're going to actually change us all. But ultimately, I think it's going to be positive at the end of the day. It's definitely going to be a little rough for a couple of years as we make all these adjustments, but I think what AI brings to the table is heads above where we are today.

Charna, your take on this, because the roles of humans versus machines are pretty significant — they help each other. But is AI going to dominate over humans?

Yeah, absolutely, there's a thing we see over and over again in every bubble and collapse. In the automotive industry, we certainly saw a bunch of jobs lost, but a bunch of jobs gained. And we're just now getting into the phase where people are realizing that AI can't just be replacement — it has to be augmentation, right? We can't simply use images to replace recognition of people. We can't just use a black box to give out our FICO credit scores. It has to be inspectable. So there's a new field coming up now called explainable AI, and that's where we're moving towards. And that's actually going to help society and create jobs.

All right, let's stay on that point for the next round. Explainable AI — this points to a golden age. There's a debate: are we in a bubble or a golden age? A lot of people are negative right now on tech. You can see all the tech backlash — Amazon, the big tech companies like Apple and Facebook. There's a huge backlash around this kind of so-called tech for society. Is this an indicator of a golden age coming?

I think so, absolutely. We can take two examples of this.
One would be — you remember when Amazon built a hiring algorithm based on their own resume data, and they found that it was discriminating against women, because it had been trained only on their own historical resumes, which came mostly from men. But now with Textio, we're building augmented writing trained across the whole audience, not from a single company. And so companies like Johnson & Johnson are increasing their pipeline by more than 9%, which converts to 90,000 more women applying for their jobs. Part of the difference there is that one is explainable and one isn't, and one is using the right data set — data representing the audience that is consuming it, not a single company's hiring. So I think we're absolutely headed into more of a golden age, and I think these are some of the signs that people are starting to use it in the right way.

John, what's your take? Obviously, a golden age doesn't look like that to us right now. You see Facebook approving lies as ads, Twitter banning political ads. AI was supposed to solve all these problems. Is there light at the end of this dark tunnel we're in?

Yeah, I mean, golden age for sure. I'm definitely a big believer in that. I think there's a new era upon us in how we handle data in general. The most important thing we have here, though, is education around what this stuff is, how it works, how it's affecting our lives individually and at the corporate level. This is a new era of informing and augmenting literally everything we do. I see nothing but positives coming out of this. We have to be very careful, obviously, about how we're approaching all the biases that already exist today, which are only going to be magnified with these types of algorithms at mass scale. But ultimately, if we can get over that hurdle — which I believe we all collectively need to do together — I think we live in a much better, less wasteful world, just by approaching the data that's already at hand.

Ken, what's your take on this? It's like a Daily Double question.
Is it going to be a golden age? Is it going to come sooner or later? Do we have to have catastrophe — have reality hit us in the face — before we realize that tech is good and start shaping it? I mean, it's pretty ugly right now in some of the situations out there, especially on the political scene with the election in the US. You're seeing some negative things happening. What's your take on this?

I'm much more skeptical than John and Charna. I feel like that kind of blinkered "it's going to be great" outlook is something you have to actually be in the tech industry, hearing it all day, to believe. I remember seeing the lay person's exposure to Watson when Watson was on Jeopardy, hearing the questions reporters would ask and seeing the memes that would appear. And everyone's immediate reaction, just to something as innocuous as an AI algorithm playing on a game show, was to ask: is this Skynet from Terminator 2? Is this the computer from The Matrix? Is this HAL pushing us out of the airlock? Everybody's first reaction is "the tech is going to kill us," and it's weird. I don't know, you might say it's just because Hollywood has trained us to expect that plot development, but I almost think it's the other way around. That's a story we tell because we're deeply worried about our own meaning and obsolescence when we see how little these skills might be valued in 10, 20, 30 years.

I can't tell you, by the way, how much Star Trek, Star Wars and Terminator have probably infected the nomenclature of the technology. Everyone references Skynet — oh my God, we're going to be taken over and killed by aliens and machines. This is a real fear, and I think it's an initial reaction. You felt that, Ken. So I've got to ask you, where do you think the crossover point is for people to internalize the benefits of, say, AI? Because people will say, hey, look back at life before the iPhone.
Look at life before these tools were out there. Some will say it's slightly gotten better, but yet there's the surveillance culture, and on and on. So what do you guys think the crossover point is for the reaction to change from "oh my God, it's Skynet, gloom and doom" to "this actually could be good"?

It's incredibly tricky, because as we've seen, the perception of AI both in and out of the industry changes as AI advances. As soon as machine learning can actually do a task, there's a tendency — it's a no-true-Scotsman problem — to say, "oh, well, that clearly can't be AI, because I see how the trick worked," and, yeah, humans lose at chess now. So when these small advances happen, the reaction is often, "oh, that's not really AI." And by the same token, it's not a game changer when your email client can start to autocomplete your emails. That's a minor convenience; it doesn't make you think, "oh, maybe Skynet is good." I really do think the inflection point may be when it becomes so disruptive that public policy actually has to change, and we get serious about whatever the reactions are.

Charna, your thoughts?

Yeah, public policy has started changing, though. We just saw, I think it was in September, California ban the use of AI in police body cameras, both real time and after the fact. So I think that's part of the pivot point we're actually seeing: public policy is changing. The state of Washington currently has a task force for AI that's making a set of recommendations for policy starting in December. But I think part of what we're missing is that we don't have enough digital natives in office to even attempt, to your point, Ken, to predict what we're even going to be able to do with it. Right?
There is this fear because of misunderstanding, but our political climate right now also doesn't command much respect from a lot of our digital natives — and they need to be there making this policy.

John, weigh in on this, because as a director of AI you're seeing the positives, but you have to deal with the uncertainty as well. The growth of machine learning — just this week, Google announced more TensorFlow for everybody — you're seeing open source. So there's a tech push, almost a democratization going on with AI. This crossover point might be sooner in front of us than people think. What are your thoughts?

Yeah, I mean, it's here right now. All these things can essentially be put into an environment. You can see them in products, making business decisions or political decisions. They're all available right now, available today, and it's within 10 to 15 lines of code. It's all about the data sets, so you have to be really good stewards of the data you're using to train your models. But I think the most important thing, back to the Skynet and science fiction side, is that we have to collectively start telling the right stories. We need better stories than just "the robots are going to take over the world and destroy all of our jobs." More interesting stories revolve around: what about public defenders who can have an informing and augmentation algorithm that helps them get their job done? What about tailor-made medicine that tells me exactly what the outcomes are for a particular treatment plan instead of guessing? What about tailored education that looks at all of my strengths and weaknesses and presents a plan for me? These are things that AI can do. Charna is exactly right that if we don't get this into the right political atmosphere — one that helps balance the capital side with the social side — we're going to be in trouble.
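John's "10 to 15 lines of code" figure is roughly accurate for off-the-shelf tooling. As a hedged illustration — using scikit-learn and a bundled toy data set as stand-ins, since the panel only alludes to TensorFlow — training a working classifier really is this short:

```python
# A toy model in ~15 lines. The library (scikit-learn) and data set
# (the bundled digits images) are illustrative choices, not anything
# named by the panel; the point is that the modeling code is short
# and the data set is where the real stewardship lives.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                # the data: the hard part
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)          # the "AI": a few lines
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(acc)
```

Swap in TensorFlow or any other framework and the shape is the same: a few lines of model code wrapped around whatever data you curated — which is exactly why the stewardship of that data matters so much.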
So that's got to be embedded in every layer of an enterprise, as well as in society in general. But it's here, it's now, and it's real.

Ken, before we move on to the ethics question, I want to get your thoughts on this, because we had an Alexa at home — my wife made me get rid of it. We had an Apple device, the HomePod. That's gone. I bought a Portal from Facebook, because I always buy the early stuff. That's gone. We don't want listening devices in our house, because in order to get that AI, you have to give up being listened to. This has been a big question: what do you have to give to get? What are your thoughts on all this?

Yeah, I was at an Amazon event where they were trumpeting how no technology had ever caught on faster than these personal digital assistants. And yet every time I'm in a use case — a household that's trying to use them — something goes terribly wrong. My friend had to rename his because the neighbor kids kept telling Alexa to do awful things. So he renamed it "computer," and now every time we use the word computer, the wall tells us something we don't want to know. I mean, this is just anecdotal data, but maybe it speaks to something deeper: the fact that we don't necessarily like the feeling of being surveilled. IBM was always trying to push Watson as the Star Trek computer that hopefully tells you exactly what you need to know at the right moment. But that's got downsides too. If nothing else, we may start to value individual learning and knowledge less when we feel like a voice from the ceiling can deliver unto us the fact that we need. I think decision-making might suffer in that kind of a world.

All right, that brings up ethics — and I bring up Amazon and the voice stuff because this is the new interface people want to have with machines. And I didn't even mention phones, Android and Apple.
These devices need to listen in order to make decisions. This brings up the ethics question: who sets the laws? What should society do about this? Because if you want the benefits of AI — John, you pointed out some of them — you've got to give to get. Where are we on ethics? What's the current view on this, John? We'll start with you: what needs to change now to move the ball faster?

Sure. So data is gold, and data is gold at an exponential rate when you're talking about AI. There should be no situation where these companies get to collect data at no cost and no benefit to the end consumer. So ultimately we should have the option to opt out of any of these products and any of this type of surveillance wherever we can. Public safety is a little bit different, but on the commercial side, there are more expensive and more difficult ways to train these models with data sets that aren't just grabbed wholesale out of our personal lives. I think that should be an option for consumers, and that's one of those ethical checkmarks. Ethics in general — the way models are trained on data, the way data is handled, the way models actually work — has to be a primary consideration in how you go about developing and delivering AI. That said, we cannot get so overindulged in fear of the ethical outcomes that we can't do it at all. We have to find some middle ground, and we have to find it quickly and collectively.

Charna, what's your take on this? Ethics is super important for setting the agenda for society to take advantage of all this.

Yeah, I think we've got three ethical components here. We certainly have, as John mentioned, the data sets. However, it's also what behavior we're trying to change.
So I believe the industry could benefit from a lot more behavioral science, so that we can understand whether or not the algorithms we're building are changing behaviors we actually want to change. And if they aren't, that's unethical. There's an entire field of ethics that needs to start getting put into our companies. We need ethics boards internally. A few companies are doing this already, actually — I know a lot of the military companies do; I used to be in the defense industry, and they've got a board of ethics you go through before you do your work.

The challenge, though, is that as we're democratizing the algorithms themselves, people don't understand that you can't just grab a set of data that mirrors the population. This is true of image processing: if we used only 100 images of black women and 1,000 images of white men, because that was the distribution in our population, and the algorithm then could not detect the differences between skin tones for people of color, we end up in situations — we end up in a police state — where you put in an image of one black woman and it matches 10 of them and you can't distinguish between them. And yet the confidence of the humans is actually higher, because they now have a machine backing their decision. So they stop questioning — to your point, Ken, about "what is the decision I'm making?" — they're like, "I'm so confident, this data told me so." So you need some experts in the loop, but you also can't have just experts, because then you end up with Cambridge Analytica and all of the political things that happened there — not just in the US, but across 200 different elections in 30 different countries. We're upset because it happened in the US, but this has been happening for years. So there's this ethical challenge of behavior change. It's not unique to AI, and we do it all the time — it's why the cigarette industry is regulated.
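Charna's 100-versus-1,000-images point can be sketched numerically. The data below is synthetic and the class counts are just illustrative (nothing here comes from the panel): a classifier trained on a 10-to-1 imbalance tends to under-detect the minority class, and class reweighting is one standard, inspectable mitigation.

```python
# Synthetic sketch of training-data imbalance: 1,000 majority samples
# vs. 100 minority samples. The naive model tends to favor the majority
# class; class_weight="balanced" reweights the loss so minority errors
# count proportionally more.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.5, size=(1000, 5)),   # majority class (0)
               rng.normal(1.0, 1.5, size=(100, 5))])   # minority class (1)
y = np.array([0] * 1000 + [1] * 100)

naive = LogisticRegression().fit(X, y)
balanced = LogisticRegression(class_weight="balanced").fit(X, y)

# Recall on the minority class: how many of them each model actually finds.
naive_recall = recall_score(y, naive.predict(X), pos_label=1)
balanced_recall = recall_score(y, balanced.predict(X), pos_label=1)
print(naive_recall, balanced_recall)
```

Reweighting is only one lever, and it doesn't fix a data set that simply lacks coverage — which is Charna's deeper point about collecting representative data in the first place.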
So Ken, what's your take on this? Obviously society needs to have ethics. Who runs that — companies, the lawmakers? Someone's got to be responsible.

I'm honestly a little pessimistic that the general public will even demand this the way we're maybe hoping they will. I think about an example like Facebook — people being willing to give away insane amounts of data to social media companies for the smallest of benefits: keeping in touch with people from high school they don't even like. It really shows how little we value not being the product in this kind of situation. But I would like to see these kinds of ethical decisions being made at the company level. I feel like Google kind of quietly moved away from its little "don't be evil" mantra, with the subtext that maybe we'll be a little evil now. It reminds me of Manhattan Project-era thinking, where you could have gone to any of those nuclear scientists and said, "you're working on a really interesting puzzle here, and it might advance the field, but 200,000 civilians might die this summer," and I feel like they would have just looked at you and thought, that's not really my problem — I'm just trying to solve the fission problem. And I would like to see these tech companies actually having that kind of thinking internally — not being so busy asking whether they can do something that they never wonder whether they should.

That's a great point. This brings up the question of who's responsible. There's almost a sense of who's less evil than the other guy — Google says "don't be evil," but are they less evil than Amazon and Facebook and the others? Who's responsible, the companies or the lawmakers? Because if you look at some of the hearings in Washington, DC, some of the lawmakers we see up there don't know how the internet works, and it's pretty obvious that this is a problem.
That's why Jack Dorsey of Twitter posted yesterday that he banned not just political ads but also issue ads, right? So this isn't something they're making him do; he understands that when you're using AI to target people, it's not okay. And so at some point, while Mark Zuckerberg is sitting before that committee giving his testimony, he's essentially asking to be regulated, because he can't regulate himself. He's like, "well, everyone's doing it, so I'm going to do it too." That's not an okay excuse. We see this in the labor market, actually, where there are existing laws that prevent discrimination, right? It's the company's responsibility to make sure that the products they purchase from any vendor aren't introducing discrimination into their process. So it's not even the vendor that's held responsible; it's the company and their use of it. We saw with the NYPD, actually, that with one of those image recognition systems, someone said the perpetrator looked like some actor — I forget the actor's name — and so they used an image of the actor to try to find the person who had actually assaulted someone else. And that's the user problem that I'm super concerned about.

So John, what's your take on this? Because these companies are in business to make money — they're for-profit, they're not the government. So what's the government's role? AI has to move forward.

Yeah, so we're all responsible. The companies are responsible. With the companies we work with, I've yet to interact with a customer — or with our customers' customers — who has some insidious goal of trying to outsmart their own customers. They don't. Everyone's looking to do their best and deliver the most relevant products in the marketplace.
The government — the political structure we have — absolutely has to get really intelligent and upskilled in this space, and it needs to do it quickly, both at the level of the economy and for our defense. But the individuals — all of us as individuals — are already subjected to this type of artificial intelligence in our everyday lives. I mean, look at streaming media. Right now, every single one of us goes out through a streaming source, and we're given recommendations on what we should watch next. And we're already adapting to these things. I am — I'm like, stop showing me all the stuff you know I want to watch. That's not interesting to me. I want to find something I don't know I want to watch, right? So we have to get educated. We're all responsible for these things. And again, I see the much more positive side of this. I'm not trying to get into the fear-mongering side of all the things that can go wrong; I want to focus on the good stories, the positive stories. If I'm in a courtroom and I lose a court case because I couldn't afford the best attorney and I'm up against the bias of a judge, I would certainly like artificial intelligence to make a determination that allows me to drive an appeal, as one example. Things like that are the really creative uses we need in the world. Tamping down this wild speculation we have in the markets — we're all victims of really bad data decisions right now, almost the worst data decisions. For me, I see this as a way to actually improve all those things. Fraud will be reduced. That helps everybody, right? Less speculation, fewer of these wild swings. These are all helpful things.

Well, Ken, John and Charna, thanks — go ahead, Charna, get that last word in.

Sorry — I think the point you were making, though, John, is that we are still a capitalist society, but we're no longer a shareholder capitalist society. We are a stakeholder capitalist society, and the stakeholder is society itself. It is us.
It is what we want to see. And so, yes, I still want money — obviously, there are things I want to buy — but I also care about well-being. I think it's that little shift we're seeing: it's actually you and I holding our own teams accountable for what they do.

Yeah, culture-first is a whole new shift going on in these companies: for-profit, but mission-based. Ken, John, Charna, thanks for coming on Around theCUBE, Unpacking AI. Let's go around theCUBE — Ken, John and Charna, in that order — and quickly, Unpacking AI: what's your final word?

You know, I'm interested in John's take that there is a democratization coming through AI, that these tools will be available to everyone. I would certainly love to believe that. It seems like in the past we've seen that access to these kinds of powerful, paradigm-changing tools tends to be concentrated among a very small group of people, and the benefits accrue to a very small group of people. But I hope that doesn't happen here. I'm optimistic as well. I like the utopian side, where we all have this amazing access to information and so many new problems can get solved with amazing amounts of data we never could have touched before. So I think about that. I try to let that help me sleep at night — and not the fact that every public figure I see on TV is kind of out of touch about technology, and the only candidate suggesting a universal basic income is treated as kind of a crackpot. Those are the kinds of things that keep me up at night.

All right, John, final word.

Yeah, I think it's beautiful. AI is beautiful. We're on the cusp of a whole new world, and I see nothing but positivity. We have to be careful — we're all nervous about it, none of us quite knows how to approach these things — but we're human beings, we've been here before.
We're here all the time, and I believe that we can all collectively build better lives for ourselves, for the environment, for everything that's out there. It's here, it's now, it's definitely real. I encourage everyone — every company, every layer of government — to hurry up on their own education, to start really embracing these things and paying attention. It's catching us all a little bit by surprise, but once you see it in production, see it for real, you'll be impressed.

Okay, Charna, final word.

Yeah, one thing I want to leave people with is that what we incentivize is what we end up optimizing for. It's the same with human behavior: you're training a new employee, you put incentives on the way they sell, and they game the system. AIs specifically find the optimal route — that is their job. And so if we don't understand more complex cost functions, more complex, representative ways of training, we're going to end up in a space, before we know it, that we can't get out of — especially if we're using uninspectable AI. We really need to move towards augmentation. There are some companies implementing this now that you may not even know about. Zillow, for example, is using AI to give you a price for your home just from the photos and the words you use to describe it. But they're also purchasing houses without a human in the loop in certain markets, based on an inspection later by a human. So there are these big bets being made within these massive corporations. But if you're going to do it as an individual, take a Coursera class on AI and take a Coursera class on ethics, so that you can understand what the pitfalls are going to be — because that cost function is incredibly important.

Okay, that's a wrap. Looks like we have a winner here: Charna, you got 18, John 16, and Ken came in with 12 — beaten again. Okay, Ken — seriously, great to have you all on. Pleasure to meet everyone. Thanks for sharing, Around theCUBE, Unpacking AI, panel number two.
Thank you. Thank you. Thanks. I've been defeated by artificial intelligence again. (laughter)