theCUBE presents HPE Discover 2022, brought to you by HPE. Greetings from Las Vegas, everyone. Lisa Martin here with Dave Vellante. We're live at HPE Discover 2022, with about 8,000 folks here at the Sands Expo Convention Center. First HPE Discover in three years, everyone jammed into that keynote room, it was standing room only. Dave and I have a couple of exciting guests we're proud to introduce you to. Please welcome back to theCUBE, John Schultz, the EVP and General Counsel of HPE. Great to have you back here. And Kay Firth-Butterfield, the Head of AI and Machine Learning at the World Economic Forum. Kay, thank you so much for joining us. Thank you, it's an absolute pleasure. Isn't it great to be back in person? Fantastic. John, we were saying that last time you were on theCUBE, it was theCUBE virtual. Now here we are, back. A lot of news this morning, a lot's going on. Edge to Cloud is the theme of the conference this year. In today's edge-to-cloud world, so much data is being generated at the edge, and it's just going to keep proliferating. AI plays a key role in helping to synthesize and analyze those large volumes of data. Can you start by talking about the differences between the two, the synergies, what you see? Yeah, absolutely. And again, it is great to be back with the two of you, and great to be with Kay, who is a leading light in the world of AI and in particular AI responsibility. And so we're going to talk a little bit about that. But really, this synergistic effect between data and AI is as tight as they come. Data is just the raw material by which we drive actionable insight. And at the end of the day, it's really those insights, and the speed to insight, that make the difference. AI is what is powering our ability to take vast amounts of data, amounts of data that we'd never conceived of being able to process before, and bring it together into actionable insights. And in its simplest form, AI is simply making computers do what humans used to do.
But the power of computing, what you heard about Frontier on the main stage today, allows us to use technology to solve problems so complex that it would take humans millions of years to do it. So this relationship between data and AI is incredibly tight. You need the right raw materials. You need the right engine, and that is the AI. And then you will generate insights that could really change the world. Kay, there's a data point from the World Economic Forum which really caught my attention. It says $15.7 trillion of GDP growth is going to be a result of AI by 2030. $15.7 trillion added, and that includes the dilutive effects where we're replacing humans with machines. What is driving this incremental growth? Well, I think obviously it's the access to the huge amounts of data that John pointed out. But one of the things that we have to remember about AI is that AI is actually pretty dumb unless you give it nice, clean, organized data. And so it's not just any data, but data that has been through a process that enables the AI to gain insights from it. And so what is it? It's the compute power, the ever-increasing compute power. In the past, we would never have thought that we could use some of the new things that we're seeing in machine learning, even deep learning. It's only been around for a short time, but it's really the compute power, together with the amount of data, that is putting AI on steroids, for want of a better analogy. And I think it's also that we in business and society are now able to see some of the benefits that can be generated from AI. And, you know, listening to Oak Ridge talk about the medical science advances that we can create for human beings, that's extraordinary. But we're also seeing that across business. That's what I was going to add: as impressive as those economic figures are in terms of the value AI could add from a pure financial perspective, it's really the problems that could be solved.
You know, if you think about some of the things that happened in the pandemic, and what a virtual experience allowed, with a phone or a tablet, checking in with a doctor who was going to curate your COVID test. When they invented the iPhone, nobody thought that was going to be the use. AI has that same promise, but really on a macro, global scale, for some of the biggest problems we're trying to solve. So, huge opportunity, but as we're going to talk about a little later, huge risk for it to be misused if it's not guided and aimed in the right direction. Absolutely. Kay, can you talk about that? Well, I was just going to come back on some of the benefits, you know. California has been trying to reduce emissions over the last 10 years. One wildfire absolutely wiped out all that good work over 10 years. But with AI, we've been developing an application that allows us to say, tomorrow at this location, you will have a wildfire, so please send your services to that location. That's the power of artificial intelligence to really help with things like climate change. And is that a probability model that's running somewhere? Yeah, absolutely. So I wanted to ask you, a lot of AI today is modeling, and at the edge, you mentioned the iPhone, with all this power and new processors, you have AI inferencing at the edge in real time, making real-time decisions. So one example is predicting; the other is actually acting on something going on in the moment. What do you see there? Yeah, so yes, we are using a predictive tool to ingest the data on weather and all these other factors in order to say, please put your services here tomorrow at this time. But maybe you want to talk about the next edge. Yeah, yeah. Well, I think it's not just grabbing the data to do some predictive modeling; it's now creating that end-to-end value chain where the actions are being taken in real time based on the information that's being processed, especially out at the edge.
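The wildfire application Kay describes is, at its core, a probability model: weather and terrain features go in, a risk score comes out, and a threshold turns that score into a dispatch decision. A minimal illustrative sketch in Python follows; the feature names, weights, and threshold are all hypothetical, invented for illustration. A real system would learn its parameters from historical fire, weather, and vegetation data rather than hand-pick them.

```python
import math

# Hypothetical feature weights for a logistic wildfire-risk model.
# In a real system these would be learned from historical data,
# not hand-picked as they are here.
WEIGHTS = {
    "temperature_c": 0.08,    # hotter -> higher risk
    "wind_speed_kmh": 0.05,   # windier -> higher risk
    "humidity_pct": -0.06,    # drier -> higher risk
    "days_since_rain": 0.04,  # longer drought -> higher risk
}
BIAS = -4.0

def wildfire_risk(features: dict) -> float:
    """Return a probability (0..1) of a wildfire at a location tomorrow."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic function

def dispatch_decision(features: dict, threshold: float = 0.5) -> str:
    """Turn a probability into an operational recommendation."""
    p = wildfire_risk(features)
    return "send services to this location" if p >= threshold else "monitor"

# A hot, dry, windy location scores far higher than a cool, humid one.
hot_dry = {"temperature_c": 42, "wind_speed_kmh": 50,
           "humidity_pct": 10, "days_since_rain": 30}
cool_wet = {"temperature_c": 15, "wind_speed_kmh": 5,
            "humidity_pct": 80, "days_since_rain": 1}
print(dispatch_decision(hot_dry))   # high risk
print(dispatch_decision(cool_wet))  # low risk
```

The end-to-end value chain John describes then amounts to feeding live sensor and weather data into a model like this continuously and acting on the output automatically, rather than having people read predictions off a report.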
So you're ending up not just with predictive modeling, but with something that actually translates into action on the ground, happening, as we like to say, auto-magically. To the point where you can be making real-time changes based on information that continues to make you smarter and smarter. So it's not just a group of people taking the outputs of a model and figuring out, okay, now what am I going to do with it? The system, end to end, allows it to happen in a way that drives a time to value beyond anything we've seen in the past. In every industry. In every industry. Absolutely, and that's something we learned during the pandemic, one of the many things. Access to real-time data, to actually glean those insights that can be acted on, is no longer a nice-to-have for companies in any industry. You've got to have that now. They've got to use it as their competitive advantage. When you're talking with customers, John, where are they in that capability, in leveraging AI on steroids, as Kay said? I think it varies. Certainly as you look at the medical field, et cetera, I think they've been very comfortable, and that continues to ramp up. The use cases are so numerous there that in some ways we've only scratched the surface, I think, but there's a high degree of acceptance and people see the promise. Manufacturing is another area, where automation, and relying on some form of what used to be a kind of analog intelligence, is something people are very comfortable with. I would say candidly that the public sector and government are the furthest behind. AI may be used for intelligence purposes and things like that, but in terms of advancing the overall common good, I think we're trailing behind there. So that's why things like the partnership with Oak Ridge National Laboratory, and some of the other things we're seeing, matter.
That's why organizations like the World Economic Forum are so important, because we've got to make sure that this isn't just a public sector or a private sector piece. It's not just about commercialization and finding that next cost savings. It really should be about how you solve the world's biggest problems, and do it in a way that's smarter than we've ever been able to do before. It's interesting you say the public sector is behind, because in some respects they're really advanced, but they're not sharing that because it's secretive. That's very fair. So Kay, the other interesting stat was that by 2023, that's like next year, $6.8 trillion will be spent on digital transformation. So there's this intersection with data. To me, digital is data, but a lot of it was, well, we always talk about the acceleration because of the pandemic. If you weren't a digital business, you were out of business, and people rushed; I call it the forced march to digital. Now are people stepping back and saying, okay, what can we actually do, and maybe being more planful? Maybe you could talk about that roadmap. Sure, I think that's true. And while I agree with John, we also see a lot of companies that are really only at proof of value for AI at the moment. So we need to ensure that we take everybody, not just the governments, but everybody, with us. And one of the things I'm often asked is, if you're a small or medium-sized enterprise, how can you begin to use AI at scale? And I think that's one of the exciting things about building a platform and enabling people to use it. I think there is also the fact that we need to take everybody with us on this adventure, because AI is so important, and not just in the way it's currently being used. If we think about these new frontier technologies, like the Metaverse, for example, what's the Metaverse except an application of AI?
But if we don't take everybody on the journey now, then when we are using applications in the Metaverse, or building applications in the Metaverse, what happens at that point? Think about if only certain groups of people or certain companies had access to Wi-Fi, or had access to cellular, or had access to a phone. The advantage and the inequality would be manifest, right? We have to think of AI and supercomputing in the same way, because they are going to be the raw ingredients that drive the future. And if there isn't some level of AI equality, I think the potential negative consequences are incredibly high, especially in the developing world. Talk about it from a responsibility perspective. Getting everybody on board is challenging from a cultural standpoint, but organizations have to do it, as you both articulated. But then every time we talk about AI, we've got to talk about its responsible use. Kay, what are your thoughts there? What are you seeing out in the field? Yeah, absolutely. I started working on this in about 2014, when there were maybe a handful of us. What's exciting for me is that now you hear it on people's lips much more, but we've still got a long way to go. We've still got to get that understanding to happen in companies: that although you might, for example, be a drug discovery company, you are probably using AI not just in drug discovery, but in a number of back-office operations such as human resources. We know the use of AI in human resources is very problematic, and it is about to be legislated against, or at least designated a high-risk use of AI, by the EU. And we know what happened with GDPR, that it became something that lots and lots of countries adopted, and we expect the AI Act to be used in that way as well.
So what you need is not only for companies to understand that they are gradually becoming AI companies, but also, as part of that transformation, to take your workers with you. It's helping them understand that AI won't actually take their jobs; it will merely help them with re-skilling, or with working better at what they do. And I think it's also about actually helping the board to understand. We know lots of boards don't have any clue about AI, and then there's the whole of the C-suite and the trickle-down, and understanding that in the end you've got tools, you've got data, and you've got people, and they all need to be working together to create that functional, responsible AI layer. When we think about responsible AI, we really think about at least three pillars. The first is the privacy aspect. It's really that data ingestion part: respecting the privacy of individuals and making sure that you're collecting only the data you should be collecting to feed into your AI mechanism. The second is the inclusivity and equality aspect. We've got to make sure that the actions that are coming out, the insights we're generating and driving, really are inclusive. And that goes back to the right data sets; it goes back to the integrity of the algorithm. And then you need to make sure that your AI is both human and humane. We have to make sure we don't take that human factor out and lose the connection to what really creates our shared humanity. Some of that's transparency, et cetera. I think all of those sound great. We've had some really interesting discussions about how challenging that's going to be in practice, given the sophistication of this technology. When you say transparency, you're talking about the machine made a decision, and I have to be able to understand how the machine made that decision. Algorithmic transparency, go ahead. Yeah, algorithmic transparency.
And the United States is actually, at the moment, considering something called the Algorithmic Accountability Act. So there is a movement, particularly where somebody's livelihood is affected. So for example, whether you get a job, when it was the algorithm that did the pre-selection in the human resources area. Did you get the job? No, you didn't get that job. Why didn't you get that job? Why did the algorithm decide that? A mortgage would be another example. And John was talking about the data and the way that the algorithms are created. I think one great example is that lots of algorithms are currently created by young men under 20. They are not necessarily representative of your target audience for that algorithm. And so unless you create some diversity around that group of developers, you're going to create a product that's less than optimal. So responsible AI isn't just about being responsible, having a social conscience, and doing things in a human-centered way; it's also about the bottom line. It took us a long time to recognize the shared interest we have in climate change, and the fact that the things happening in one part of the world can't be divorced from their impact across the globe. When you think about AI, and the ability to create algorithms and generate insights in one part of the world and then transfer them out, notwithstanding the fact that most other countries have said we wouldn't do it this way, or we would require accountability, you can see the risk. It's what we call the race to the bottom. If you think about some of the things that have happened over time in the industrial world, businesses often flock to the places with the least safeguards, the places that allow them to go the fastest, regardless of the collateral damage. I think we feel that same risk exists today with AI. So much more we could talk about, guys.
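The algorithmic transparency Kay and John describe can be made concrete: for a simple scoring model, you can report exactly which features drove a decision, so a rejected job or mortgage applicant can be told why. A minimal illustrative sketch follows; the hiring features, weights, and threshold are invented for illustration, and real HR screening systems are far more complex, which is why dedicated explainability tools such as SHAP or LIME exist for non-linear models.

```python
# Hypothetical linear screening model: each weight is a feature's
# influence on the score. Invented numbers, for illustration only.
WEIGHTS = {"years_experience": 1.5, "skills_match_pct": 0.05, "referral": 2.0}
THRESHOLD = 8.0

def explain_decision(applicant: dict) -> dict:
    """Score an applicant and report each feature's contribution."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    return {
        "score": score,
        "decision": "shortlist" if score >= THRESHOLD else "reject",
        # Sorted so the biggest drivers of the decision come first.
        "drivers": sorted(contributions.items(), key=lambda kv: -abs(kv[1])),
    }

applicant = {"years_experience": 2, "skills_match_pct": 60, "referral": 0}
report = explain_decision(applicant)
print(report["decision"], report["drivers"])
```

The point of the accountability movement is that the "drivers" part of this report, in whatever form, should be available to the person the decision affects, not locked inside the model.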
Unfortunately, we're out of time, but it's so amazing to hear where we are with AI, where companies need to be, and it's just the tip of the iceberg. Very exciting. Kay and John, thank you so much for joining Dave and me. Thank you. Thank you. Pleasure. We want to thank you for watching this segment. Lisa Martin with Dave Vellante and our guests. We are live at HPE Discover 2022. We'll be back with our next guest in just a minute.