Hello, beautiful nerds, and welcome back to the Second City. We're here in Chicago at KubeCon + CloudNativeCon, and I am Savannah Peterson, delighted to be joined by my co-host, Rob. Rob, how are you feeling this early in the day? I'm pumped, I'm stoked. This is great, this is great. Well stated, my man. Thank you. Well stated. I'm full of words, though. We are just getting the engines ready. Off to a fine start. I am delightfully surrounded by three rad dads today on the stage, which isn't always something I get to say, and I'm super excited. Michael and Ian, thank you so much for joining us. We've got Red Hat and Dell on the stage. We've got cats and dogs and dinosaurs. Ian, what's the connection between all of this? Let's get the audience up to speed. I mean, Red Hat and Dell, right? We've actually been partnering for a quarter century. I was going to say, a real long time. It's a real long time. Back when Linux was just a twinkle in Larry Ellison's eye, or something like that. How romantic. That's the most romantic I've ever heard Linux spoken of. Maybe the most romantic you've ever heard Larry Ellison spoken of. Also true, also true. You know, but the world has moved on. Linux has made its place in the marketplace, and Kubernetes and OpenShift are really powering the next generation of application deployments that we're seeing out there. And, you know, it made sense for Red Hat to partner with someone who can handle the hardware, the infrastructure layer, to create a truly integrated experience for our customers. And who better than Dell, with whom we have that existing relationship? Yeah, I think it's been exciting, because we actually helped launch the Dell APEX Cloud Platform for Red Hat OpenShift a couple of weeks back. It's simplifying how you bring all of this together and get up and running, building containers on bare metal within seconds. And that's really awesome. It's a bold claim, and exciting. I know, but as we heard earlier today, it's not always the easiest thing to get up and running with Kubernetes. It's a lot of moving parts, and getting all of that together takes time. So what we've been able to do, working with Red Hat, is design a process where we can automate so much of that for you, so that we can get everything up and running and get you into an environment where you can start deploying applications. But it's not just the deployment side of it; it's the ongoing maintenance of the application, or of the cluster itself. Instead of making you use separate, dedicated tooling, we take the approach of projecting all of the infrastructure management directly into the OpenShift web console. So the same tool that you're using to manage the cluster and the applications on the cluster, you're now using that same tooling and those same processes to manage the infrastructure underneath it. You know, that ties into something from our last conversation here: 76% of developers saying they have too much cognitive load. Not surprising, I don't think, to any of us here, especially in this particular space. But having the ability to use the same tools, to have that same user experience at every layer, really makes a difference. Is that one of the things that has you most excited this week? Oh, there's so many things about this week I'm excited about. Tell us more. Tell us some of those things.
So one of the things that we just announced is our new Dell Validated Design for Red Hat OpenShift AI on top of the APEX Cloud Platform for Red Hat OpenShift. And we have a sample application that is actually doing generative AI on top of this platform, showing customers how they can get up and running with these types of capabilities without having to go through the process of building and training their own model and all of the work that's involved in doing that: being able to pull in open source projects, assemble them together in a way that makes sense, and lifecycle all of that together in a single flow. Yeah, I think that's the exciting piece. We just had Red Hat on a few minutes ago talking about Backstage, for instance, and developer portals, and how that really carries over to these types of solutions where people can get up and running fast, with Dell handling the GPUs and the CPUs and all of that stuff in there. It seems like that would be a great jumping-off point for something like what we saw with the demo this morning that Priyanka was doing, which kind of went a little sideways on the laptop. But you don't want it to go sideways. There seemed to be a message there that, hey, Kubernetes down to the hardware is not an easy thing. It would seem like this is an easy way to get there. And it makes it a lot more open, right? You don't have the heavy lift of creating all this stuff yourself, which makes AI a lot more accessible to organizations that don't have teams of data scientists or all of this compute power. It enables it on hardware that can easily run on premises, but then you can take those models and run them anywhere, right? So you're not tied into a specific hyperscaler's implementation of an AI service. You can train this on-prem, where your data lives, so you don't have to move any of this proprietary data somewhere else. Train it exactly where it lives, take that model, run it on-prem, run it at the edge, run it in a public cloud; you're not tied down to anything. It's getting these projects out of silos, really, and allowing people to collaborate in a way that wasn't possible before. Getting them out of silos and providing the tools to manage the full life cycle, if you will, of all the various artifacts. An AI workload, just like any other workload, has a development stage, or in this case maybe a model development stage, but then it's got to be tested. It's got to be kept up to date. It's got to be versioned. At some point, it's got to be retired and replaced by the next thing. It's got to be distributed. OpenShift does that already for applications, so let's take the power that it has and apply it to this new space of artificial intelligence. How big of a shift is it to be able to accommodate that? I mean, we've got a lot of processing power; there's a lot going on if we're dealing with an LLM or training a big model. I think it depends on what layer you look at it from. It's like any abstraction: abstractions are great, right up until they're not. Yeah. At a high level they're the same steps, but you tend to use different tools along the path. Yeah, and I think you have a little dinosaur that says 'platform engineering' on there. We'd be remiss if we didn't go down that path of the dinosaur here and say, you know, this is a way to make platform engineering, which for the most part is just the new word for IT, a lot easier.
So how are you guys doing that, and how do you look at platform engineering? Well, the whole concept behind the dinosaur is that platform engineering is typically looked at as a little archaic, and possibly in danger of extinction. And if the cats and dogs, the developers and the operations folks, can get together, then platform engineering should be embracing that as well. I think by helping to enable these teams to work inside the same tools, to speak the same language, to collaborate better together, it makes that a lot easier. Yeah, I think that's the key: platform engineering is the glue that keeps everything going. It's the connective tissue from the hardware into the container world, from the bare metal on up. And are you seeing that customers are really just looking for an easier way to get there? I mean, again, about 50% of the people coming to this show this year are new users, new to Kubernetes. Are you seeing that as they come in, they want the easy button? Yes, absolutely. I mean, who doesn't want an easy button? I want an easy button. Everybody wants an easy button. If you don't, you're just a masochist; I'm not exactly sure what's going on there. And we've got to hear from Michael. What has you most excited this week? What has me most excited this week? I'm truly excited about the potential that the APEX Cloud Platform has to accelerate, and kind of democratize, access to Kubernetes slash OpenShift, often for smaller organizations for whom a multi-week services engagement, potentially simply to set up a platform, is not something that they can easily bite off. If we can deploy that now in a matter of hours rather than days or weeks, that's a tremendous accelerator for them, and it's going to open up this whole new world that they really couldn't get into on-premises before. I think it also helps to bring the AI to the data as well. Because, depending on whose numbers you look at, between 50 and 80% of the data people are using for AI is still on-prem. I'm not saying all the data in the world, but when you start to look at the usable data and where it is, is that a big piece of this as well, because you go where the data is? Yep. Data gravity, regulatory considerations in some cases, and then of course economic drivers are all things driving people to look at on-premises solutions. Yep. I mean, you see a lot of countries that have started implementing laws about where data about their citizens has to reside. The hyperscalers do a great job of building massive data centers, but they can't just stand up a new region in every sovereign country, right? It's not feasible. So you need the ability to bring those types of cloud services to where the data needs to live. A lot of customers have gone from on-prem to cloud, and a lot of customers have made the switch back to on-prem, and I think the reality for most customers is a hybrid in between. Very few customers are going to be completely on-prem or completely in the cloud. The vast majority are going to be somewhere on that spectrum in between, because that's the reality of business today, right? You have to be able to make things accessible to your users, and you need to store data where your regulations and your laws say it has to live. So it's about making that hybrid easier to achieve.
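To make that "train it where the data lives, run it anywhere" idea concrete, here is a minimal sketch, not anything specific to the APEX platform or OpenShift AI: once a model is exported to a portable format such as ONNX, the exact same inference code can run on-prem, at the edge, or in any public cloud. The model file name and input shape are hypothetical stand-ins.

```python
# Minimal portability sketch: this code runs unchanged wherever the
# container lands, because the model is a portable ONNX artifact
# rather than a hyperscaler-specific AI service.
# "defect_model.onnx" and the 1x3x224x224 input are hypothetical.
import numpy as np
import onnxruntime as ort

# Load the artifact that was trained where the data lives.
session = ort.InferenceSession("defect_model.onnx")
input_name = session.get_inputs()[0].name

# Dummy batch standing in for real input data.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

# The identical call, on-prem, at the edge, or in a public cloud.
outputs = session.run(None, {input_name: batch})
print(outputs[0].shape)
```

Because the artifact and its runtime travel together in a container image, moving between environments becomes a scheduling decision rather than a rewrite.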
Yeah, and what's neat, just to give you another little useless stat you can use some other time: five countries have actually outlawed Google Analytics for this exact reason, because you just can't have that data leaving. It's France, the Netherlands, Sweden... I'm rooting for you. I'll believe you no matter what you say right now. Check it in the description, it's fine. Italy, or something like that. Largely Western European nations. Yes, Western European nations; at least five of them in the EU have gone and outlawed Google Analytics. And I think, to your point about multi-cloud and hybrid cloud, OpenShift is already in the cloud on many of the hyperscalers. And as part of APEX, I know there's a unified storage layer as well that helps you with that data. So if you're going to burst up and do big training runs, there's an easy way to actually move that data, for the data gravity aspect. There is, and if you've followed the industry and followed Red Hat, we've been talking about the open hybrid cloud for at least a decade, you know? And it's really cool to see it coming to fruition, kind of. Well, I do believe AI is definitely a catalyst there in making that happen. Yeah, that's a great point. Absolutely. It is fun to see. There's an interesting debate going on between the two of us internally; I think we're going to have Kelsey Hightower on tomorrow to actually debate this, and it ties into your conversation: as computers, as chips, are getting more powerful, are people going to be doing more on-prem and locally versus in the cloud, because the machines can handle more? So powerful is one aspect of it, but the other is more efficient, right? We're seeing the wattage required per CPU cycle come down, and a lot of those efficiencies are going to make that more accessible. You don't need racks and racks and racks of servers anymore. You can use a much smaller data center footprint, more efficient power, less cooling. A lot of those things make that type of compute a lot more accessible to organizations that don't need to go massive hyperscale. And more sustainable, as we think about our future. Yeah, and then just running the workload, latency is often a consideration as well. You can think of things like autonomous vehicles and stuff like that. So the ability to run these AI workloads, not just in hyperscalers or data centers, but pushing them all the way out to the edge and the far edge, is going to happen. That's where I was going to go. Funnily enough, the far edge is not necessarily in a cloud, and it's not necessarily on somebody's prem. It could be a field. It could be a power plant. All of these different things. Is that where you're also seeing this being really applicable, out to those smaller deployments and smaller form factors that are still powerful? To do inference at the edge, for instance. Yeah, and providing the common application platform that scales from the hyperscaler to the on-premises data center, to the branch office, to the vehicle, to the light pole, potentially. Yeah, and I think that's it: it's everywhere. And as people are building these apps as microservices in containers, they want to take them where the data is. I was talking to a rail company, which happens to be a Dell customer, by the way, that is taking photos of trains as they go by at upwards of 125 miles per hour. And they can't move all that data back, because they're actually inspecting the trains for defects as they go speeding by. So it's really an edge use case right there at the track, where you have 8K video, a lot of 8K video. Well, you don't want to pull all that 8K video back across the wire. You want to do the inference at the edge there. Yep, yep.
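As a purely illustrative sketch of that pattern, and not the rail customer's actual system: frames are sampled from the trackside video and scored locally, and only small flagged records cross the wire instead of raw 8K footage. The video path and the detect_defects model are hypothetical placeholders.

```python
# Illustrative edge-inference loop (not the rail customer's actual
# system): inference happens at the track, and only kilobytes of
# flagged metadata travel back instead of gigabytes of 8K video.
# "trackside.mp4" and detect_defects() are hypothetical placeholders.
import cv2

def detect_defects(frame):
    """Stand-in for a real defect-detection model call."""
    return []  # e.g., a list of (label, confidence, bounding_box)

def send_upstream(record):
    """Stand-in for shipping a small result record to the data center."""
    print("flagged:", record)

cap = cv2.VideoCapture("trackside.mp4")
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Sample every fifth frame to stay within the edge box's compute budget.
    if frame_index % 5 == 0:
        defects = detect_defects(frame)
        if defects:
            send_upstream({"frame": frame_index, "defects": defects})
    frame_index += 1
cap.release()
```

The design choice is the one the panel describes: keep the heavy data where it is generated, and move only the inference results.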
You're not going to send a WeTransfer with all that footage. It's like a Rubik's Cube. You've got to get the colors all right. Why are you talking about a Rubik's Cube? I don't know. I'm shocked. We'll leave the inspiration for that a mystery here on theCUBE at KubeCon. I'm trying to just get it as cube as possible. I've got cubes tattooed on my body, folks. There's no one more cube than me. I'm living the brand. Although these guys, I mean, I've got to ask, are you always this well-branded? You guys are crushing it. I mean, absolutely smashing it. I'm a marketer by trade and run a marketing agency, and I love it when I see a team walk out like this, so proud. You really just have the marketing heart in me beaming. It can be an afterthought sometimes. And of course, we're going to remember the cats and dogs, and now the dinosaur. Do you guys have a favorite dinosaur? Oh my goodness. My four-year-old daughter was a triceratops for Halloween. We've introduced her to The Land Before Time movies. They still own my 35-year-old ovaries. So she was a three-horn. Oh my gosh. So you also mentioned, Michael, that your daughter reads to you about Kubernetes. Tell us a little bit more about that. Yes. She's actually teaching you. At last year's KubeCon in Detroit, I picked up a number of the kids' illustrated Kubernetes books, and those have become some of her favorites. Just the other night, she said she wanted to read me Admiral Basch's Island Adventure. I hope you obliged. Absolutely. Gladly. Do you have a favorite CNCF character? Oh, I don't know. There's quite a few. The original, Phippy. Yeah, I'm a Phippy fan myself. I feel like Phippy's been the gateway. Phippy's the giraffe, for those of you who may not know what we are talking about at this point on theCUBE. I know they're all over there. Don't tell John May that. It's 2023 and we're still talking about PHP applications; I just want to point that out. Yeah. Absolutely. What about you, Ian? Last question. Do you have a favorite dinosaur? Do I have a favorite dinosaur? Stegosaurus. I like that. Okay, that's great. And actually, one more in conclusion for you. What is scarier to you: the misuse of AI for bad, or teaching your daughter how to drive? Oh, man. I'm coming out hard. I came to play this KubeCon, ladies and gentlemen; I hope you're all ready. I'm going to go with the driving lessons. You've got to go with the driving lessons. That is white-knuckle, you know, all the time. I admire the honesty there. I really feel like I can trust you. And I'm hoping she'll never see this video. Yeah. Oh my gosh, that was an amazing response. Ian and Michael, thank you both so much for joining us and sharing about this partnership. I can't wait to talk dinosaurs, children, whatever comes next in our AI future. Maybe a little Kubernetes, right? And maybe a little Kubernetes, if we feel like it, at KubeCon in Paris. I look forward to that. You know, we're actually in the Paris of the Prairie, as they call Chicago, so we'll be there next time. This has been great. I'm now Rambling Rob. Thanks for tolerating me. You're the best. I love it.
And most importantly, thank all of you for listening to this scintillating interview here from the Paris of the Prairie and KubeCon + CloudNativeCon. My name is Savannah Peterson, and you are watching theCUBE, the leading source for emerging tech news.