Hey everyone, welcome back to theCUBE's live coverage of KubeCon and CloudNativeCon 2021 from Los Angeles. Lisa Martin here with Dave Nicholson. Dave and I are pleased to welcome our next guest remotely: Steven Huels joins us, Senior Director of Cloud Services at Red Hat. Steven, welcome to the program.

Thanks, Lisa. Good to be here with you and Dave.

Talk to me about where you're seeing traction from an AI/ML perspective. Where are you seeing that traction? What are you seeing in the market?

It's a great starter question. AI/ML is really being employed everywhere, regardless of industry: financial services, telco, government, manufacturing, retail. Everyone at this point is finding a use for AI/ML. They're looking for ways to better take advantage of the data they've been collecting all these years. It wasn't all that long ago, when we were talking to customers about Kubernetes and containers, that AI/ML really wasn't a core topic; they weren't looking to use a Kubernetes platform to address those types of workloads. But in the last couple of years, that's really skyrocketed. We're seeing a lot of interest from existing customers that are using Red Hat OpenShift, which is a Kubernetes-based platform, to take those AI/ML workloads from what they've traditionally done for experimentation, really get them into production, and start getting value out of them.

Is there a common theme? You mentioned a number of different verticals: telco, healthcare, financial services. Is there a common theme you're seeing among these organizations across verticals?

There is. Everyone has their own sense of the type of technique they're going to get the most value out of. But the common theme is really that everyone seems to have a really good handle on experimentation. They have a lot of very good data scientists and model developers who are able to take their data and get value out of it.
But where they're all looking for help is putting those models into production. So, MLOps: how do I take what's been built on somebody's machine and put it into production in a repeatable way? And once it's in production, how do I monitor it? What am I looking for as triggers to indicate that I need to retrain? How do I iterate on this rapidly, applying what are really traditional DevOps software development lifecycle methodologies to ML and AI models?

So Steve, we're joining you from KubeCon live at the moment. What's the connection with Kubernetes? How does Kubernetes enable machine learning and artificial intelligence, and what are some of the special considerations to keep in mind?

The immediate connection for Red Hat is that Red Hat OpenShift is basically enterprise-grade Kubernetes. So the connection is really how we're working with customers, and how customers in general are looking to take advantage of all the benefits you get from a Kubernetes platform that they've been applying to their traditional software development over the years: the agility, the ability to scale up on demand, the ability to have shared resources and make specialized hardware available to individual development communities. They want to start applying those foundational elements to their AI/ML practices. A lot of data science work was traditionally done on high-powered, monolithic machines and systems. They weren't necessarily shared across development communities. So connecting something that was built by a data scientist to something that a software developer was then going to put into production was challenging. There wasn't a lot of repeatability in there. There wasn't a lot of scalability. There wasn't a lot of auditability. And these are all things we know we need when we're talking about analytics and AI/ML.
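One concrete retraining trigger of the kind described here is distribution drift: the live inputs a model sees start to differ from the data it was trained on. A minimal sketch of such a check using the Population Stability Index (the 0.2 threshold is a common rule of thumb, not anything specified in the interview):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Higher values mean the 'actual' (live) distribution has drifted
    further from the 'expected' (training) distribution.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against constant samples

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        n = len(sample)
        # floor empty bins at a tiny value so the log below is defined
        return [max(c / n, 1e-6) for c in counts]

    e = bin_fractions(expected)
    a = bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def should_retrain(train_sample, live_sample, threshold=0.2):
    # 0.2 is a widely used rule-of-thumb PSI threshold, not a universal constant
    return psi(train_sample, live_sample) > threshold
```

In a production pipeline a check like this would typically run per feature on a schedule, with a triggered retrain feeding back through the same deployment pipeline as the original model.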
There's a lot of scrutiny put on the auditability of what you put into production when something is making decisions that impact whether somebody gets a loan or whether somebody is granted access to systems. And so the connection there is really about taking what has proven itself in Kubernetes to be a very effective development model, applying it to AI/ML, and getting the benefit of being able to put these things into production.

So Red Hat has been involved in enterprises for a long time. From a Kubernetes perspective, are you seeing most of this in net-new application environments, or are these extensions of what we would call legacy or traditional environments?

They tend to be net new, though it's transitioned a little bit over time. When we first started talking to customers, there was a desire to try to do all of this in a single Kubernetes cluster: how can I take the same environment that's been doing our software development, beef it up a little bit, and have it serve our data science environment as well? Over time, Kubernetes advanced, so now you can add labels to different nodes and target workloads at specialized machinery and hardware accelerators. That has shifted things toward standing up specialized data science environments, but still connecting the clusters: what's built in that data science environment is deployed through a model pipeline into a software artifact, which then makes its way into an application that goes live. And I think that's sensible, because we're constantly seeing a lot of evolution in the types of accelerators, frameworks, and libraries being made available to data scientists.
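The node-labeling and accelerator-targeting mechanism described here can be sketched in plain Kubernetes terms. A hypothetical example, assuming a GPU node named `worker-3` with the NVIDIA device plugin installed (the label key/value, pod name, and image are illustrative, not anything OpenShift-specific):

```yaml
# First, label the accelerator node (illustrative label):
#   kubectl label nodes worker-3 accelerator=nvidia-a100
apiVersion: v1
kind: Pod
metadata:
  name: ds-training-job              # hypothetical name
spec:
  nodeSelector:
    accelerator: nvidia-a100         # pod only schedules onto matching nodes
  containers:
  - name: trainer
    image: example.com/trainer:latest   # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1            # one GPU, exposed via the device plugin
```

OpenShift layers its own machine management on top, but the underlying label/selector and extended-resource mechanics are standard Kubernetes, which is what makes the dedicated data science cluster pattern described above workable.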
So you want the ability to extend your data science cluster to take advantage of those things, to give data scientists access to those specialized environments so they can try things out and determine whether there's a better way to do what they're doing. And when they find there is, you want to be able to rapidly roll that into your production environment.

You mentioned the word "acceleration," and that's one of the words we keep coming back to when we talk about 2020 and even 2021: the acceleration in digital transformation that was necessary a year and a half ago for companies to survive, and now to pivot and thrive. What are you seeing in terms of customers' appetite for adopting AI/ML-based solutions? Has it accelerated as the pandemic accelerated digital transformation?

It's definitely accelerated. I think the pandemic put more of a focus for businesses on where they can drive more value and how they can do more with less. And when you look at systems that are used for customer interactions, whether they're deflecting customer cases or providing next-best-action-type recommendations, AI/ML fits the bill perfectly. So when companies were looking to optimize, asking where to put their spend and what could help them accelerate and grow even in this virtual world we're living in, AI/ML really floated to the top. That's definitely a theme we've seen.

Is there a customer example you could mention that really articulates that value?

We've published one specifically around HCA Healthcare. That started before the pandemic, but I think it's applicable because of the nature of what a pandemic is: HCA was using AI/ML to essentially accelerate the diagnosis of sepsis, using it for disease diagnosis. That same type of diagnosis was then applied to looking at COVID cases as well.
And there was one we did in Canada called How's My Flattening, which was basically about tracking and doing some predictions around COVID cases in the Canadian provinces. That one is particularly close to home, given the nature of the pandemic. But even within Red Hat, we started paying a lot more attention to how we could help with customer support cases. Knowing that folks might be out with any type of illness, we needed to be able to handle that case workload without negatively impacting work-life balance for other associates. So we looked at how we could apply AI/ML to help maintain and increase the quality of customer service we were providing.

That's a great use case. And did you have a keynote or a session here at KubeCon?

I did. It focused specifically on that whole MLOps and model ops pipeline. It was called "Evolving with Kubernetes and Embracing MLOps," and it was for Kubernetes AI Day. I believe it aired on Wednesday of this week.

Yes, it did. Or Tuesday, maybe. It all kind of condenses in the virtual world, doesn't it?

It does.

One of the questions that Lisa and I have for folks: from where we sit, is it year seven or so of the dawn of Kubernetes, if I have that right? Where do you think we are in this wave of adoption? Coming from a Red Hat perspective, you have insight into what's been going on in enterprises for the last 20-plus years. Where are we in this wave?

That's a great question. It's that cresting-wave analogy: when you get to the top of one wave, you notice the next wave behind it is even bigger. I think we've certainly gotten to the point where organizations have accepted that Kubernetes is applicable across all the workloads they're looking to put into production. Now the focus has shifted to optimizing those workloads: what are the things we need to run in our in-house data centers?
What are the things we need, or can benefit from, running on commodity hardware from one of the hyperscalers? How do we connect those environments and more effectively target workloads? If I look at where things are going: right now we see a lot of things being targeted by cluster. We say, hey, we have a data science cluster, it has characteristics X, Y, and Z, and we put all of our data science workloads into that cluster. In the future, I think we want to see a more workload-specific categorization, so that we're matching available hardware with workloads rather than targeting a workload at a specific cluster. A developer or data scientist can say, hey, my particular algorithm needs access to GPU acceleration and the following frameworks, and then the Kubernetes scheduler determines, across the available environments, what the capacity and available resources are, and matches it up accordingly. We get into a more dynamic environment where the developers and everyone building on top of these platforms have to know less and less about the clusters they're running on; they just have to know what types of resources they need access to.

So, sort of democratizing that. Steve, thank you for joining Dave and me on the program today, talking about the traction you're seeing with AI and ML and Kubernetes as an enabler. We appreciate your time.

Thank you.

Thanks, Steve. For Dave Nicholson, I'm Lisa Martin. You're watching theCUBE live from Los Angeles at KubeCon and CloudNativeCon 2021. We'll be right back with our next guest.