Live from Madrid, Spain, it's theCUBE, covering HPE Discover Madrid 2017. Brought to you by Hewlett Packard Enterprise.

Hi everybody, welcome back to Madrid. This is theCUBE, the leader in live tech coverage. My name is Dave Vellante, and I'm here with my co-host, Peter Burris, on day two of HPE Discover Madrid 2017. Beena Ammanath is here; she's the Global Vice President of Big Data, AI and New Tech Innovation at Hewlett Packard Enterprise. Beena, welcome to theCUBE, it's great to have you on. First time on theCUBE, right?

Yes, thank you Dave, thank you Peter, I'm glad to be here.

You're very welcome. So let's talk about what Hewlett Packard Enterprise is doing in AI. You're new to the company; they brought you in. Why did Hewlett Packard tap your expertise?

I think a lot of it is based on my previous experience, and honestly, there is so much buzz going on with AI and the hype around it, right? There is so much that we need to do with AI, there's so much potential, and we're not tapping into it as much as we should. That was one of the big reasons, and especially with what Hewlett Packard Enterprise is doing now, as we're going through this transformation, we can help our customers start on their AI journey and help them build out end-to-end solutions with AI, which is going to be one of my biggest charters.

Well, when we were young and started in this business, AI was the buzz in the early to mid-80s. And that was the fifth or sixth time around with AI.

Oh yeah, yeah.

That was decades ago. And it just died, obviously; the processing power wasn't there, and I guess the data really wasn't there. So why AI, why now?

Yeah, and I'll date myself here. When I was doing my undergrad and postgrad, we had AI as one of the courses, and nobody wanted to do it because it was considered this very futuristic thing that was never going to happen: self-driving cars, personalized ads. Even those were considered so hypothetical, because we didn't have the compute, we didn't have the processing power, we didn't have the amount of data accessible to us. The acquisition of data was harder, the compute power wasn't there, so it was always a science project. It was research, it was more ideas, and it wasn't doable. But today, with the advances we've seen in cheap storage and easy access to compute, the whole game has changed. A lot of things we could only dream about are now becoming real, and we are able to experiment more.

And speaking to what you were saying earlier, AI has been through this hype cycle several times. If you think back, the term itself was coined in 1956, and we've seen those hype cycles where there is massive investment and nothing delivered, then it all dies down, so the AI winters keep happening. Now I think it's again on the rise, but this time we are actually seeing results. We're seeing self-driving cars, we've seen personalized marketing taken to a whole new level, we've seen drones making deliveries. But if you think about it, and you saw the AI hype when you started in the business too, right, it's still the narrow intelligence part. It's not that we've reached super intelligence or general intelligence at any real scale. And given what I know about the analytic techniques and the compute power available today, we're still going to be dabbling in narrow intelligence for at least the next few years before we expand out to the next level.
Okay, so that raises an interesting issue, because I first heard about AI back in the 70s, reading Feigenbaum's fifth generation systems book, and by then they were talking about multiple generations of AI that had supposedly already happened. But AI has, for technical reasons, for technology, for data acquisition, disappointed. Now it's not disappointing, but there's still this perception of how much change is coming and what the impact of that change will be. So let's talk about the people side of this, because the success of AI is going to be very closely tied to whether or not social groups abandon it, either because it doesn't deliver what was expected or because the impacts are greater in ways that weren't anticipated. What's the people side of this change, the innovation and social change side?

So I like to look back at history. History always gives us an indication of where technology is taking us. If you look back at the 19th century, when the steam engine was invented, what did it do? It enabled humans to expand their physical abilities, to move things, to drive things forward. It was increasing human muscle power, and that whole industrial revolution happened around that time, with the steam engine and the automation of a lot of work that had been done manually by humans. We're seeing a similar revolution happening now, because it's fundamentally changing how we work and how economies are made, and that causes a lot of fear and insecurity: who knows, our jobs might be replaced or changed over the next few years. We don't know, because this technology is coming at us very fast. The reason is that there are so many companies investing so heavily in AI, and that accelerates the development of the technology. It comes at us smarter and faster, and we're not prepared for it. Look at our own lives, right? I'm talking about a time when I was in my 20s and AI was mythical and futuristic, and now today there are self-driving cars. It's happening in our lifetime; things have changed so rapidly, and we don't know what it's going to look like 20 years from now.

The piece that I'm optimistic about is this. There are a number of luminaries spelling doom for mankind, the elimination of the human race, of jobs, and so much more. For me, it seems like, look, at the end of the day, we are building AI. We have the power to shape it the way we want. The fear exists because there is so much unknown, and also because there's a select group of people who are shaping AI, right? So how do we actually get more people involved? How do we truly democratize AI so that we get different viewpoints? Should a computer scientist be building an AI product in isolation, without full partnership from a lawyer or similar domain experts? Domain experts have to be involved, and today that's not happening. I stick to legal just because it's something I can relate to: if a lawyer is actively involved in building out an AI legal product, he or she will know all the checks and balances we need to put in place so that the AI doesn't go rogue, right? When a pure computer science person is driving and building that product, he or she may not be aware of all those checks and balances, and we may not put the right guardrails in place to prevent that program from going rogue.
At the end of the day, AI is something that we own, and we should be able to build it with the right guardrails in place. And look, we are all so dependent on our phones, and what is that? That is AI today, but we are not afraid of it. We use it, we leverage it. That's how I think AI will be 20, 30 years from now: really helping us extend our brain power, removing the monotonous tasks we have to do, and helping us be more creative, really elevating the human aspects of all of us.

So let's carry that through. You mentioned the Industrial Revolution. Machines have always replaced humans in certain tasks; there's always been substitution. But for the first time it's happening with cognitive tasks, so people get scared. And then you quote the statistics: median income in the United States has dropped since the late 90s from $55,000 down to $50,000. Part of that you can see around you. You hardly see paper hangers putting up billboards anymore, or you go to the airport and kiosks have replaced the ticket agents. Hopefully they keep a few in place here, at least until tomorrow, at the airport. So people are concerned, as you rightly pointed out. But you also said we have the opportunity to shape this. The answer, many of us feel, is education around creativity, how to combine different inputs to create value. But many people are afraid and say, let's stop progress. That's not going to happen, you know that. So what has to happen from a socio-economic and public policy standpoint in order to create those guardrails that you talked about?

Right, right. I think education itself has to fundamentally change, where we infuse more creativity into the education system. Until now it's been very focused on the science and math aspects, which is how you get computer scientists. But you need the human aspect built out in all of us as well. It's also an opportunity for us to leverage AI to make our education better, with more personalized education. From a social aspect, though, I think one of the things that's missing is really the policy piece. This technology is coming at us so fast that we don't have all the policies figured out; we are building out the policies as the technology evolves, and that is causing that fear, or friction, so to speak. So I think governments actually need to take more ownership and start putting those guardrails into place from a policy perspective. And it needs to come from the industries themselves too; there need to be thought leaders. I think everybody who is scared of AI should start taking an active role to understand it and drive this policy forward.

Well, it has to be bipartisan, too. Which right now doesn't look too...

Whatever the parties are, because in other places it's not just two parties like it is in the U.S. But coming back to this question, I've got a couple of quick questions for you. One is that you mentioned earlier that the computer scientist probably should not be the one who's necessarily making the decision about a legal issue. It suggests that there's going to be a renaissance of cross-disciplinary skills required, certainly within computing.

Absolutely.
So for example, the people that are best at describing how human interactions evolve and are maintained might be philosophers, and that then gets turned into law. Talk a little bit about the renaissance of cross-discipline thinking in computing, because we're attacking new kinds of problems that just aren't algorithmic.

Exactly. You need to have deep domain experts deeply involved in building out these AI products, which is a gap today. So I think you're absolutely right.

The second thing, related to that, is that we've done some research, and we're in the midst right now of a pretty sizable project on envisioning the need for, and the structure of, what we call systems of agency. You observe the collection of data, you turn that data into value through big data, and then you have a consequential action in the real world; we think there are three different ways that's going to happen, but I won't bore you with the details right now. Really, we're asking these systems to do something on behalf of the brand, and increasingly to do something in a complex, human-centered environment, effectively to be agents for the brand. We know how to distribute data, we know how to distribute processing. How do we think about distributing authority, using AI? Is that something that people are starting to think about, in your estimation, as we think about the people problems associated with this?

I think so. I think people are beginning to think about it. There's a lot of investment going on, not only in the technology development part, but also on the human side of things. It just doesn't get as much publicity as the technology piece does, right? A robot beating somebody at Go is much more newsworthy than the huge moral implications of something else.

So I've got one more question. Well, by the way, in a narrow sense, would fraud detection be an example of distributing authority?

No, because... well, I'll ask you: is fraud detection an example of distributing authority? It's narrow, but it's a machine making a decision not to fulfill a transaction.

Right, but the machine is not making a decision to bring an indictment against someone for actually committing fraud. All the machine is doing is seeing a pattern that might indicate a problem and taking a prophylactic step to avoid it. The machine is not declaring fraud.

No, and there are two things to it, right? Before the machine declares fraud, it's been built by humans and trained by humans. Before it goes into production and declares fraud, there has been a lot of training done by humans who are saying yes, no, this is right, this is wrong, so the training that comes from humans is crucial. And once it's in production, there's a human in the loop who's watching it.

Who still has agency rights.

Exactly, so the human is still there.
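To make the human-in-the-loop pattern described above concrete, here is a minimal sketch of fraud screening in which humans supply the training labels and the model only holds suspicious transactions for a human reviewer, never declaring fraud on its own. This is a hypothetical illustration rather than anything discussed in the interview; the features, model, threshold, and review queue are all assumptions.

```python
# Minimal human-in-the-loop fraud screening sketch (hypothetical example).
# Humans label the historical transactions; the model only flags and holds
# risky transactions; a human reviewer makes the actual fraud determination.
from sklearn.linear_model import LogisticRegression

# Hypothetical human-labeled history: [amount_usd, hour_of_day, is_new_merchant]
X_train = [[25, 14, 0], [4800, 3, 1], [60, 11, 0],
           [9900, 2, 1], [35, 19, 0], [7200, 4, 1]]
y_train = [0, 1, 0, 1, 0, 1]  # 1 = humans confirmed fraud, 0 = legitimate

model = LogisticRegression().fit(X_train, y_train)

REVIEW_THRESHOLD = 0.5  # illustrative cut-off, tuned by the operations team

def screen_transaction(txn, review_queue):
    """Hold risky transactions for a human; never declare fraud automatically."""
    risk = model.predict_proba([txn])[0][1]  # probability of the fraud class
    if risk >= REVIEW_THRESHOLD:
        review_queue.append({"txn": txn, "risk": round(risk, 2)})  # human decides
        return "held_for_review"
    return "approved"

queue = []
print(screen_transaction([8500, 3, 1], queue))  # likely held_for_review
print(screen_transaction([40, 15, 0], queue))   # likely approved
print(queue)                                    # items awaiting the human reviewer
```

The point of the sketch is the division of authority: the model takes only the prophylactic step of holding the transaction, while the fraud determination itself stays with a person.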
So I have one more question, and it's this: at least in the US, because AI is software, and most software is covered under copyright law, what software does is a speech act, which has implications for whether or not you can go after a company because their software did something wrong. AI, as an agent, can't be a speech act. There's got to be some other remediation; we have to expect more from the brands that deploy this. How is that going to evolve, in your estimation?

I think that's where the policy part becomes more important, right? Recently there was news of a robot being given citizenship. Beyond the marketing and the hype, what does that entail? It's making us question fundamental things, and the policy aspect has to cover a lot of new scenarios that we just haven't had to think about before. It's raising a lot of new scenarios that are going to make us create new policies around them.

So, I mean, this is a very interesting discussion. When I hear it, I think about what humans can do that machines can't do. And you go back, you know, it wasn't long ago that machines couldn't climb stairs.

Yeah, now they can, yes. Gymnastics, even.

Yeah, right. So I don't know, do you think in those terms? I mean, there's empathy, there's maybe negotiation, there's maybe things like decisions on a jury that require humans.

Oh yeah, I'll give you the simplest thing it cannot do even today. It can write music, which you've probably seen. But it still can't tell a joke. It can't write a joke, because it doesn't understand sarcasm. And it doesn't really have that human aspect of connecting with people and taking conversations forward. Just talking to you, I have something called intuition, or perception, which helps me guide this conversation. A machine can't do that; it's just black and white, it goes by data.

It's trained, yes.

It's trained. Responsive. Yes.

I always struggle with the term artificial intelligence. I feel like machine intelligence is more accurate. I struggle with the "artificial," I struggle with the "intelligence."

Yes, it's how you define intelligence.

All right, Beena, we have to leave it there. Last word: let's bring it back to Discover Madrid 2017 and tie it into your future vision.

Oh yes, I am so excited to be here. I don't know if you've had a chance to walk the show floor, but we're doing some amazing things with AI and with big data, and we're really looking forward to helping our customers start and execute on their AI journey.

Beena Ammanath, thanks very much for coming on theCUBE. It's great to meet you.

Thank you.

All right, keep it right there, everybody. We'll be back with our next guest. This is Dave Vellante for Peter Burris, live from HPE Discover Madrid 2017. You're watching theCUBE.