Live from Las Vegas, Nevada, it's theCUBE. Covering IBM World of Watson 2016. Brought to you by IBM. Now, here are your hosts, John Furrier and Dave Vellante. Hey, welcome back, everyone. We're here live at the Mandalay Bay Convention Center in Las Vegas for IBM's World of Watson. This is SiliconANGLE's theCUBE, our flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier with my co-host, Dave Vellante. Our next guest is Shannon Vallor, who's a Professor of Philosophy at Santa Clara University. Welcome to theCUBE. Thank you. So you gave a talk about, really, what we've been kind of teasing out, which is technology meets business value. That's kind of like what companies and technology companies talk about, but a new dimension of social impact is really coming into play here. Social justice, we did a lot last week at the Grace Hopper Celebration of Women in Computing. And that was really more social justice. But again, social impact, social justice, however you call it, a new dimension of thinking is needed. Not so much philosophy or sociology, but it really is kind of that combined with the realities of how the tech is going to impact our lives. And you gave a talk. So give us a quick overview of what you talked about. Sure. So I mean, as you said, we're looking at a kind of technology that's going to have a social impact that's like nothing we've ever seen before. People say that all the time about new technologies, but in this case, it really is the case that artificial intelligence technology will transform how we live, how we work, how we think, how we act. And it will enter into virtually every institution and practice in our lives: into education, into politics, into medicine, into transportation, into any sector you can think of, there are applications for this technology. And AI will transform the balance of power in a lot of institutions. There will be winners and losers.
There will be people in different groups who experience this technology differently, who get different things out of it, and we have to be able to think in advance about a kind of future that, at this point, we cannot possibly predict. So there are a lot of people in the media making predictions about where AI will take us. And the truth is, we just don't know, and to some extent can't know, and we have to prepare for a lot of different possibilities. Okay, so I'll make a prediction. The prediction is there is no prediction. Yeah. Basically. That's a good prediction, you're safe. You're good. We're a media company, so we have to make a prediction, and something has to be dead. So something has to die. No, it's disruption. Okay, we can come back to disruption, but you mentioned something around "we don't know." Being ready is a readiness opportunity from a society standpoint, because the impacts could range from mental health to industrial transformation, analog to digital. IoT kind of impacts that as well. And everything in between. That's right. So how do we do it? What does readiness mean? Readiness means starting now to think about the possibilities and putting systems in place that can respond quickly to unprecedented phenomena. We have to make our institutions more flexible. We have to make them more resilient. We have to build in capability upfront to respond to surprises. There will be surprises. Like what would be a surprise that you could imagine? I mean, hypothetically speaking, just kind of brainstorming. What would be an example of something profound or impactful that we'd spring into action on? Sure. Well, I mean, imagine, for example, if you have AI systems that are involved in critical security operations or critical systems in US infrastructure, right? Now imagine that you have a system that is doing something that you didn't expect.
And we know that deep learning algorithms can be statistically extremely reliable, but will occasionally do things that seem outside of the bounds of expectation or don't make sense to us. Now imagine that we then go in to try to pull the system offline because we weren't prepared for what we're seeing. That's right. And then imagine if you have a system that has been developed without foresight about how a system might resist, not consciously, not because it wants to lock us out, right? But machine lockouts that might happen because systems are programmed to prevent interference in their operations, right? So how do you build a system that will keep hackers out but will let you in when you need to go in and regain control? So figuring out how to develop meaningful human control of AI systems is a huge issue. I mean, this whole notion of, you know, unpredictability, I mean, I couldn't agree more. I was at an event, I would say four or five years ago, an event like this, and a speaker who was in the automobile industry stood up and said, we will have autonomous vehicles within 25 years. Yeah. Maybe sooner. Well, I mean, I said, wow, 25 years is kind of a long way out. They were shipping beer yesterday, I was just saying. We're here, right? So that was not that long ago, maybe even three or four years ago, and now we're seeing them today. But there are some things that we do know, and I wonder if we could sort of talk about that. Machines have always replaced humans. But for the first time, it's cognitive tasks that are being replaced. We had Chris Penn on earlier, and I said, what can humans do that machines can't? He said nothing. Now, I don't know. I want to ask you the same question. I'll push back on that a little bit. Yeah, so there's a list. Now, maybe the list is getting shorter. Maybe a few years ago, machines couldn't climb stairs. That's starting to change.
So one of the things that humans can do that machines can't is negotiate, I mean, maybe, I don't know. What are your thoughts on that? So one thing, and this is something that I spoke about earlier today: the systems that we have for artificial intelligence are task-specific. They know how to operate very well at well-defined tasks in well-defined contexts. What we don't have is anything like what's called artificial general intelligence, or AGI. And most computer researchers that I talk to and work with, most AI researchers, don't think we're anywhere near being able to develop artificial systems that can understand all of the different contexts in which humans operate and connect them all together. So for the foreseeable future, humans will be able to do sort of cross-disciplinary, cross-context analysis, where they can look at what's happening, let's say, in a medical context and understand its implications for public health, for politics, for education, and see how these different contexts relate to each other. Humans are also the only systems that have a genuine capacity for understanding and wisdom. AI systems can represent knowledge and they can emulate robust intelligence, but they don't have wisdom. They can't see the whole and understand its meaning. For the foreseeable future, only humans will have the wisdom to determine the difference between good implementations of AI and destructive, harmful implementations of AI. It brings up a good point. John Markoff of the New York Times, an advisor to our Tech Truth fellowship that we just started and kicked off last week at Grace Hopper, wrote an article on September 1st on tech giants coming together to devise a plan for real ethics in artificial intelligence. Right up your wheelhouse, I'm sure you were all over this. The thing is, sources haven't been announced yet, but the sources are saying that industry is going to kind of create an AI group. So it brings up the question of industry.
Should they be doing it? Should it be regulated? So there's the balance between policy and industry, and the quote here is, the intention is clear: to ensure that AI benefits people, not hurts them. But you can look at AI and say, there was an article again yesterday, another special report in the New York Times about defense, the drones thing. So all of that's kind of the backdrop, but Stanford has a study out, the One Hundred Year Study on AI. Right. Okay, so the concern is, what is the format? How are the brains in the industry, across academia, industry, and maybe government, thinking about how to organize the first, I guess, straw man, if you will, around this? Is it transparency? Should industry be leading it? Because some will say that their profit motive might play into it. And the same thing applies to gene sequencing and what we're seeing coming out of the CRISPR stuff. And so, we kind of don't know what we're doing right now. We don't, and we're figuring it out as we go. And the problem is that the technology is advancing and growing faster than our efforts to catch up and figure out how to manage it. So what we need to do is actually work along all possible lines of response. So should industry be thinking about ethics? Absolutely. And what I'm gratified by is that, increasingly, I'm seeing that happen, right? Five years ago in the valley, if I wanted to approach someone in industry and talk about ethics and software, I may or may not have gotten a positive response. Today, they're ready to have that conversation. But you can't only have it... Well, they understand that that train is coming down the tracks right at them. I mean, self-driving cars, Facebook, VR, AR stuff. So it's clearly happening. It's happening, and there's no getting around it. And the public recognition of the power of this technology will impose requirements on industry to show that they can be trusted to implement this kind of technology, right? So trust is the huge key here.
Industry may want profit, but they can't profit if they don't have the public trust, if they don't have consumer trust. So they're going to have to take steps to show that that trust is warranted. Now, that requires, however, a certain degree of transparency. And what we're seeing a lot in industry now is a lot of AI ethics boards that are behind closed doors. For obvious reasons, right? You don't want to be discussing the ethical implications... Because they're kind of riffing on it, right? In real time, right? It's a work in process. You don't want to... That's right, exactly. They haven't even built the sausage factory yet. They're slaughtering the meat right now, it's early days. That's right, but at some point, you have to open up those processes. Otherwise, they don't cultivate the kind of public trust that you need in order for this technology to be embraced. And government can help with that, but government can't do it all, for a lot of reasons. It's a classic innovation dilemma. Do you set up regulation up front, and are you foreclosing innovation with that? Or do you let it play out organically? I suggest we avoid false dilemmas, right? It doesn't have to be either/or; it doesn't have to be solutions of all one sort. We need smart regulation, but we don't need regulation that's excessively burdensome or premature. So we have to make smart decisions about what needs to be regulated now and what needs to be just studied and observed. And it depends on the context. When you're talking about a life-and-death context, right? You might need regulation sooner than in a context where you can afford to let things play out. So you need to be monitoring this big time. So step one is get the data, right? Tell the stories that need to be told, get the truth, the tech truth, out of this. How do people get involved?
I mean, this is one thing I see; we're hearing a lot of surround sound on this topic, and I'm glad you came on because we're constantly doing these CUBE events and talking to smart people at Stanford, MIT, and other places. And there's a huge younger generation that really wants to get involved. Absolutely. So what's out there? Where do they go? Do they contact you? Are there sites? Yeah, absolutely. So what's happening now is that everyone's sort of realizing, more or less at the same time, how important these issues are. And so you're seeing a lot of groups and a lot of organizations popping up in parallel in different parts of the world and different industries and different academic contexts. And so there's lots of places to go. There is not yet a sort of central clearinghouse or organization that is responsible for coordinating all these efforts. Maybe that's not a bad thing, right, for now. Maybe we do need to sort of let a thousand flowers bloom and see what happens. But there are organizations; for example, I'm on the executive board of a group called the Foundation for Responsible Robotics. It's a new nonprofit just launched this year, rooted in the Netherlands, but it's an international organization that seeks to bring ethicists together with roboticists and people in public policy to try to figure out how to develop new robotics technologies, many of which implement artificial intelligence, in a way that's responsible. And so there are groups that are looking to make these kinds of new connections that haven't existed before. You know, it's interesting, we have the IBMers on, and I want to get your take on the show in a second as we kind of wrap up. They always talk about, we're bringing Agile in, we're going to do a design-centric, total-customer-experience design on the front end of the product, nothing gets released until it's done. That's cool, I get that, but now why not have an ethics impact?
How would we integrate that concept? Because that would be an interesting innovation: to say, okay, user experience is how they're going to be engaging with software, everyone's going to have software, and now the impact on the social side becomes hugely interesting. Absolutely, and so I think what we have to do is get people outside of academia, and even within academia, people outside of moral philosophy, comfortable talking about ethics and understanding how to do it in a way that's constructive, that's practical, that doesn't get to a level of abstraction that's not really helpful. I mean, the tech industry, let's be frank, prior to this new innovation surge of digital transformation, analog to digital, full digitization, you know, someone brings up ethics, and they're like, oh yeah, backdated stock options, I mean, all kinds of, I don't want anyone coming into my camp; the doors were kind of closed, if you will. That's right, yes. And it was kind of a show pony out front. Yes, that's right. To kind of say, no, we're ethical, but there's no real impact there, because it's just software. That's right. It's on a desktop, it's shrink wrap. That's right. Now it's changed completely. Does the impact of, we were talking about this earlier, machines replacing humans, does the impact on employment bleed into your scope? Of course, that's a huge ethical issue. And public policy might look at it as a zero-sum game, as you were saying before. Yeah. Do you buy the sort of Brynjolfsson and McAfee scenario that the middle is getting hollowed out, that the superstars are doing great, and the data supports that, and the low end is doing just fine, you know? Well, we talked a little bit earlier about what's left for humans to do as AI moves into cognitive tasks.
And one of the things that's happening is that the work that's going to be left for humans to do is going to require, for many people, a kind of technical education that right now we don't have the capacity built up to provide to 100 million people, right? So think about all of the people in sort of that middle sector. I mean, think about all the truck drivers who are going to be unemployed. Think of all the people in the service industry who may be unemployed by the implementation of these technologies. History says, okay, well, new jobs will be created, but those new jobs are going to require a level of skill that we're not yet prepared to meet with our workforce. So we have to start thinking about that challenge right now. Where creativity and the ability to combine knowledge of different disciplines is going to be the scarce commodity. Creativity, interdisciplinarity, social intelligence, and moral wisdom: these are the things that will be the most important skills in the market going forward. Well, education, I mean, what implications does that have for the way that we teach children? Well, this came up at Grace Hopper too. Also, I'll add coding in there, because there's an element of understanding coding, not being a software developer, but folks coming out of high school with no coding experience. And yet the younger generations, Rebecca's daughter, she's seven, she's hacking around with Python. Like, that's unbelievable. It's astonishing. You get them coding young. But in terms of education, I'm glad you brought that up, because in a sense, if you look at what I just said, and then you look at what our education system is doing right now, we're doing it wrong. Yeah, it's a real mismatch.
Yeah, so we gotta go back. I mean, this kind of narrow education where we're teaching to the test and where we're not encouraging students to make connections between different areas of knowledge, we are setting up this generation to fail, and we need to go deep now into our institutions, and education I think is one of the most critical ones we have to think about. How do we make education prepare this generation for the immense challenges that they are gonna face? And the immense opportunities. And digital is an opportunity too. It's not like we're just putting courseware online. It's really thinking differently about the digital assets that might be available for someone to learn from. That's right. And integrating technical and social intelligence. And this is where public policy can play a role, although there's a lot of friction, you know? Yes. And right now, we don't have political institutions that are working particularly well, in case anyone has noticed, right? And so, trying to make these big changes to the way we prepare our young people to enter into the workforce, we right now aren't prepared to put those kinds of changes in place as quickly as we need to, and with the level of public cooperation and political cooperation that we need. And what I can only hope is that we will have a new phase of political and civic virtue that will recognize that the challenges ahead and the opportunities are too great to allow our differences to hold back the changes that need to happen. Well, just hearing this conversation, you know, your point about what's the role of public policy, it feels like it should be substantial. Yes. And it's an opportunity that really can't be missed. Absolutely. Yeah, but without putting too much of a wet blanket on this engine of innovation. But tax policy and investment policy, education, I mean, there's a lot of things there.
So this is one of the key points that I wanted to emphasize in the talk that I gave: ethics and innovation are not antagonists. We have been using technologies, our inventions, to make better lives for ourselves since the dawn of humanity. There is no good life without technology, and there is no good life without invention. Okay, so ethics is about the good life, and the good life requires innovation. But not every innovation makes life better. The whole point is innovation that actually drives human flourishing rather than impoverishes us. And that's what the New York Times is kind of getting at in the middle of their story: yeah, we get AI, it should help people. Yeah. But it might not. Yeah, so the point is to think about what's wise innovation. Wise, meaning not just clever, but actually helping humans live better. The whole idea of technology is that it's supposed to create a better world. Well, let's ask the questions we need to ask to make sure that's actually what's happening. This is a great tech truth topic, Dave. We should do a drill-down on this. Shannon, thanks so much for coming on. I want to get your final thoughts on, obviously, we're at IBM's World of Watson event. Watson is the poster child for, you know, kind of encapsulating AI. And obviously the Jeopardy win, everyone sees that, well, people who actually know what Jeopardy is; the younger generation might not watch TV, they cut the cord years ago. But it's a face, it's an individual benefit. Your thoughts on what IBM's doing here and some of the things you're seeing? Yeah, absolutely. Let me just say that the thing that I appreciate most about IBM Watson, and the way that they frame the power of Watson, is that it's presented as an amplification of human intelligence and wisdom, not a replacement for it, right? So IBM talks a lot about what Watson is but also what Watson isn't. And that can sometimes get missed in the talk about AI and the hype about AI.
So Watson is a technology that can help us think better, do better, think more, do more. Something that can help humans solve problems rather than take our problems away from us. It's like your example about, you know, ethics and technology: they go hand in hand. It's not a mutually exclusive scenario. And humans are problem solvers. This is how we develop our capacities. This is how we experience the world in a way that's rewarding and worth living. And we need to have technologies that help us solve problems. So you're an optimist at the end of the day? I am, I have to be, right? Otherwise what's the point of doing it, right? Well, you have a good dose of cynicism too, because you have to understand it. I'm a careful optimist. Yeah. Well, on the human point about us being problem solvers, we had Mary Glackin on, she's a Senior VP of science and forecast operations at The Weather Company. She goes way back in the field of weather and oceanography. And she was talking about how the ozone hole is actually getting smaller. So with data, we're kind of solving that. It's not like the world is going to crash. Now climate change is another one. With the right problem solving, that could be the betterment of the situation. Absolutely, and there's tremendous potential for this technology to help us solve problems that were just too big and too complex for human cognitive bandwidth to handle. And we need to seize those opportunities to allow the flexibility of this technology to develop new kinds of solutions that humans simply couldn't have thought of. But it's up to us ultimately to validate those solutions, to decide that they're wise to implement, to look at the broader value context. And that's the job that we won't lose to AI. Shannon Vallor, thanks so much for sharing your insight. Great perspective, great conversation; we could have gone on for an hour on this. It's phenomenal, and I'm glad we got it and captured it live, and it'll also be on YouTube. Professor of Philosophy at Santa Clara University,
ShannonVallor.net is her website if you want to reach out and touch her. It's two N's and two L's, ShannonVallor.net. Thanks for sharing. Thanks very much for having me. And we're going to hit you up in Silicon Valley because we definitely want to do a drill-down on the tech truth here, so appreciate it. Outstanding. Okay, this is theCUBE, we're live at Mandalay Bay, getting ready for the big Ginni Rometty keynote at one. We're going to be bringing you live coverage up until then. I'm John Furrier with Dave Vellante. Thanks for watching, we'll be right back.