Hey, welcome back everybody. Jeff Frick here with theCUBE. We're at the Accenture Technology Vision 2018, actually the preview event, about 200 people; the actual report comes out in a couple of days. A lot of interesting conversations about the big trends in 2018 that Accenture surveyed, Paul Daugherty and team, and we're really excited that there was a panel discussion to get into a little bit of not the technology but really the trust and ethics conversations. We're joined by Dr. Shannon Vallor. She's a professor at Santa Clara University. Dr. Vallor, great to see you. Great to be here, thank you. So you were just on the panel, and of course there was a car guy on the panel, so everybody loves to talk about cars and autonomous vehicles. We didn't get enough time, so we got a little bit more time here, which is great. But one of the things that you brought up that I think was pretty interesting is really kind of a higher-level view of the role technology plays in our lives. You said before, it was ancillary: it was a toy, it was a gimmick, it was a cool new car, a status symbol, or whatever. But now technology is really defining who we are, what we do, how we interact, not only with the technology but with other people. It's taken on such a much more fundamental role, with a bunch of new challenges. Yeah, and fundamentally that means that these new technologies are helping to determine how our lives go, not just whether we have the latest gadget or status symbol. Previously, as I said, we tended to take on technologies as ornaments to our lives, as luxuries to enrich our lives. Increasingly they are the medium through which we live our lives, right? They're the ways that we find the people we want to marry. They're the ways that we access resources, capital, healthcare, knowledge. They're the ways that we participate as citizens in a democracy. They are entering our bodies, they're entering our homes.
And the level of trust that's required to really welcome technology in this way, without ambivalence or fear, is a kind of trust that many technology companies weren't prepared to earn. Right, right. Because it goes much deeper than simply having to behave in a lawful manner or satisfy your shareholders, right? It means actually having to think about whether your technologies are helping people live better lives, and whether you're earning the trust that your marketing department, your engineers, your salespeople are out there trying to get from your customers. And it's just really interesting when you talked about a refrigerator; I just love that example, because most people would never let their next-door neighbor look into their refrigerator. Or their medicine cabinet, right? Or their medicine cabinet, right? And now you want to open that up to automatic replenishment. And what's interesting is, I don't think a lot of companies came into the business with the idea that they were going to have this intimate a relationship with their customers, and the personal responsibility that comes with that data. They just wanted to sell them some good stuff and move on to the next customer. So it's a very different mindset. Are they adjusting? How do legacy folks deal with this? Well, the good news is that there are a lot more conversations happening about technology and ethics within industry circles. And you even see large organizations coming together to try to lead an effort to develop more ethical approaches to technology design and development. So for example, the big five leaders in AI have come together to form the Partnership on AI. And this is a really groundbreaking movement that could potentially lead other industry participants to say, hey, we have to kind of get on board with this, and we have to start thinking about what ethical leadership looks like for us, as opposed to just a sort of PR kind of thing.
Yeah, we throw the word ethics on a few websites or slides and then we're good. It has to go much deeper than that, and that's going to be a challenge. It has to be at a level where rank-and-file workers and project managers have procedures that they know how to go through, procedures that involve ethical analysis, prediction, and preparing ethical responses to failures or conflicts that might arise. Right, there are so many layers to this that we could go on for a long time, but the band has kicked up behind us. You know, one of the things is, when you're collecting data for a specific purpose, and you spell out why and how you're going to treat it, what you don't know is how that data might be used by someone else next week, next year, 10 years from now. And you can't really know, because there may be things that you aren't aware of. It's a very difficult challenge. And I think we just have to start thinking in terms of different kinds of metaphors. So data, up until now, has been seen as something that had value and very little risk associated with it. Now our attitudes are starting to shift, and we're starting to understand that data carries not just value, not just the ability to be monetized, but immense power, and that power can be both constructive and destructive. Data is like jet fuel, right? It can do great things, but you've got to store it carefully. You have to make sure that the people handling it are properly trained, that they know what can go wrong, that they've got safety regimes in place. No one who handles jet fuel treats it the way that some companies treat data today. But today, data can cause disasters on a scale similar to a chemical explosion. People can die, lives can be ruined, and people can lose their life savings over a breach, or over a misuse of data that causes someone to be unjustly accused of fraud or a crime.
So we have to start thinking about data as something much more powerful than we have in the past, and take on the responsibility to handle it appropriately. Right, but we're still so far away, right? We're still sending money to the Nigerian prince who needs help getting out of Newark Airport. I mean, even just the social factors still haven't caught up, and then you've got this whole API economy where so many apps are connected to so many apps, so even, where is the data? And that's before you get into, like, a plane flying over international borders while you send an email. I mean, the complexity is crazy. And we're never gonna get a handle on all of it. So one of the things I like to tell people is it's important not to let the perfect become the enemy of the good, right? So the idea is, yes, the problem is massive. Yes, it's incredibly complex. Can we address every possible risk? Can we forestall every possible disaster? No. Can we do much better than we're doing now? Absolutely. So I think the important thing is not to focus on how massive the problem or the complexities are, but to think about how we can move forward from here to get ourselves in a better and more responsible position. And there are lots of ways to do that. Lots of companies are already leading the way in that direction. So I think there's so much progress to be made that we don't have to worry too much about the progress that we might never get around to making. Right, right. But then there's this other interesting thing going on that we've seen with kind of the whole fake news situation, right? Which is that algorithms are determining what we see. And if you look at the ad tech model, the market has kind of taken over the way that it operates. There are no people involved. So then you have things happen like what happened with YouTube, where advertisers' content is getting put into places where they don't want it, but there are really no people involved, there's no monitoring.
So how do you see that kind of evolving? Because on one hand, you want more social responsibility and to keep track of things. On the other hand, so much is moving to software automation and giving people more of what they want, not necessarily what they need. Well, that means that we have to do a much better job of investing in human intelligence. For every new form of artificial intelligence, we need an even more powerful provision of human intelligence to guide it, to provide oversight. So what I like to say is, AI is not ready for solo flight. And a lot of people would like that to be the case, because of course you can save money if you can put an automated adjudication system in there and take the people out. But we've seen over and over again that that leads to disaster and to huge reputational losses for companies, often huge legal liabilities. So we have to get companies to understand that they are really protecting themselves and their long-term health if they invest in human expertise and human intelligence to support AI, to support data, to support all of the technologies that are giving these companies greater competitive advantage and profitability. But does the delta between machine scale and human scale just become unbearable, or can we use the machine scale to filter out the relatively small number of things that need a person to get involved? I mean, how do you see some best practices emerging? Yeah, so the answer depends on the industry, depends upon the application. So there's no one-size-fits-all solution. But what we can often do is recognize that typically human and artificial intelligence function best together, right? So we can figure out the ways in which the AI can amplify the human expertise and wisdom, and the human expertise can fill in some of the gaps that still exist in artificial intelligence. Some of the things that AIs just don't see, just don't recognize, just aren't able to value or predict.
And so when we figure out the ways that human and artificial intelligence can complement each other in a particular setting, then we can get the most reliable results, and often the fairest and safest results. They might not always be the most efficient from the narrow standpoint of speed and profit, right? So we have to be able to step back and say, at the end of the day, quality matters, trust matters. And just as a shoddy project put together on the cheap and pushed out there is gonna come back to bite us, if we put shoddy AI in place of important human decisions that affect human lives, it's going to come back to bite us. So we need to invest in the human expertise and the human wisdom, which has that ethical insight to round out what AI still lacks. So do you think the execution of that trust building becomes the next great competitive advantage? I mean, nobody talks about that, right? Data's the new oil and blah, blah, blah, blah, and software-defined, data-driven automation, but that's not necessarily the only golden road, right? There are issues. So is trust, do you think, the next great competitive differentiator? Absolutely, I think in the long run it will be. If you look at, for example, the way that companies like Facebook and Equifax have really damaged, in pretty profound ways, the public perception of them as trustworthy actors, not just in the corporate space, but in the political space for Facebook and in the economic space for Equifax, we have to recognize that those associations of a major company with that level of failure are really lasting, right? Those things don't get forgotten in one news cycle. So I think we have to recognize that today people don't know who to trust, right? It used to be that you could trust the big names, the big Fortune 500 companies, the blue chips, right?
And then it was the little fly-by-night companies that you didn't really know whether you could trust, and maybe you'd be more cautious in dealing with them. Now the public has no way of knowing which companies will genuinely fulfill the trust that the customer gives them in the relationship. And so there's a huge opportunity, from a competitive standpoint, for companies to step up and actually earn that trust and say, in a way that can be backed up by action and results: your data's safe with us, right? Your property's safe with us, your bank account is safe with us, your personal privacy is safe with us, your votes are safe with us, your news is safe with us, right? And that's the next step. But everyone is so cynical that, unfortunately, Walter Cronkite is dead, right? We don't trust politicians anymore, we don't trust news anymore, and now, more and more, we don't trust the companies. So it's a really kind of rough world in the trust space. So do you see any kind of silver lining? How do we execute in this kind of crazy world where you just don't know? Well, what I'd like to say is that you have to be cautiously optimistic about this, because society simply doesn't keep going without some level of trust, right? Markets depend on trust. Democracy depends on trust. Neighborhoods depend on trust, right? So either trust comes back into our lives at some deep level, or everything falls apart. Frankly, those are the only choices. So if nature abhors a vacuum, and right now we have a vacuum of trust, then there is a huge opportunity for people to start stepping into that space and filling that void. So I'd like to focus on the positive potential here rather than the worst-case scenario, right? The worst-case scenario is we keep going as things have been going, and trust in our most important institutions continues to crumble. Well, that just ends in societal collapse one way or the other.
If we don't want that, and I presume that if there's anything we can all agree on, it's that that's not where we want to go. Right. Then now is the time for companies, if need be, to come together and say, we have to step into this space and create new trusted institutions and practices that will help stabilize society and drive progress in ways that aren't just reflected in GDP, but are reflected in human wellbeing, happiness, a sense of security, a sense of hope, a sense that technology actually does give us a future that we want to be happy about moving into. Right, right. So I'll give you the last word. Sure. We'll finish on a positive note. What are some examples of companies or practices that you see out there as kind of shining lights that other people should be aware of or emulate? You know, let's talk about the positive before we cut you loose. Well, one thing that I mentioned already is the AI partnership that has come together, with companies that really are leading the conversation, along with a lot of other organizations like AI Now, which is an organization on the East Coast that's doing a lot of fantastic work. There are a lot of companies supporting research into ethical development, design, and implementation of new technologies. That's something we haven't seen before, right? This is something that's only happened in the last two or three years. It's an incredibly positive development. Now we just have to make sure that the recommendations that are developed by these groups are actually taken on board and implemented. And it'll be up to many of the industry leaders to set an example of how that can be done, because they have the resources and the ability to lead in that way. I think one of the other things that we can look at is that people are starting to become less naive about technology.
Perhaps the silver lining of the loss of trust is the ability of consumers to be a little wiser, a little more appropriately critical and skeptical, and to figure out ways that they can, in fact, protect their interests, that they can actually seek out and determine who earns their trust, where their data is safest. And so I'm optimistic that there will be a sort of meeting, if you will, of the public interest and the interests of technology developers, who really need the public to be on board, right? You can't make a better world if society doesn't want to come along with you. So my hope is, and I'm cautiously optimistic about this, that these forces will come together and create a future for us that we actually want to move into. All right, good. We don't wanna leave on a sad note. Dr. Shannon Vallor, she's positive about the future. It's all about trust. Thanks for taking a few minutes. Thank you. I'm Jeff Frick, she's Dr. Shannon Vallor. Thanks for watching, we'll catch you next time.