Welcome back everyone, live here on the show floor at KubeCon, CNCF's CloudNativeCon. theCUBE is here for the next three days. I'm John Furrier, your host, with Rob Strechay and Savannah Peterson, getting all the action as CloudNative goes next level and open source continues to change the game around innovation, around CloudNative, building and deploying applications and having all that infrastructure underneath. We have Brad Maltz here, who's the senior director of DevOps Portfolio and DevRel at Dell Technologies. Brad, great to see you. Thanks for coming on theCUBE. Good to see you again. So DevRel obviously is the front and center story. DevOps continues to be great. The conversation around platform engineering and AI is at an all-time high. In fact, we even introduced the concept of data engineering, a new discipline that's kind of forking off of platform engineering as data pipelines start to scale to feed the AI. And so at the end of the day, it's about applications and shipping workloads. And this community is constantly innovating. What are you guys seeing? What's your focus this year? Give us a quick update. Yeah, so I think you've actually given two of the topics right there. Platform engineering is one of the buzzwords that seems to be taking hold, and we're definitely playing in that discussion. AI, I mean, you've seen it all over. Dell is ripe and primed to enable the community and the world around AI. So for me, a lot of it has been both the DevOps and DevRel side of it: how do we help bring that platform engineering to life, mixed with where is AI ripe to enable the community and IT and everybody around it? Those are really the two big topics we're speaking about, especially at the conference this year. You know, Rob and I were talking to a bunch of Dell folks over the past couple of months around AI. In fact, Kelsey Hightower just put a tweet out this week that said you can do a lot of these language models on a laptop. Yeah.
Okay, and some people are like, oh, a MacBook, but also Dell laptops as well. You're seeing hardware come back where the localhost, the local concept of how people code, Rob, is the same. It's been that way forever. You code it here and you push it to the cloud or you push it to a cluster. And with AI, more emphasis is going to be around how do you get that coding AI-enabled? It's a hard answer because there are different perspectives. You can look at it from one side saying, okay, I need to have it built into the code. I can look at it from a storage perspective. I can look at it from a cloud perspective. What is the DevRel equation, guys? Because this is what everyone's talking about. What's changed for the better? What goes away with AI? How do you guys see this? This is an important question for developers because they want to be productive. Yes. What changes? Yeah. Well, I think it goes back to, can you do more with less and smaller footprints? In fact, we were down in Austin talking with Dell last week and we saw the Meta guys, and one of them said that, you know, they saw that about 70% of Llama 2 was actually being downloaded on-prem. It's crazy. Which, I think, speaks to how do you get developers able to do this at a reasonable cost? And I think that's it, it's ROI. Is that what you're seeing and what you're hearing from the community? Yeah, so from an AI perspective, really what it comes down to is, what are the use cases and the outcomes you're trying to drive as a business? And with the proliferation of GenAI and Llama 2 and all these open source models that are out there, finally, for people to utilize, if you're not all in on, like, OpenAI and ChatGPT, which obviously is hosted for everybody, a lot of people are saying, I want to bring that into my data center. I want control of these outcomes. I want control closer to my data.
And what I think is happening is, you've got to look at it: are you working on business-level outcomes? Are you looking at IT-level outcomes? Are you somewhere in the middle of that? And funny enough, from that DevOps, DevRel side, a lot of people right now are very focused on business-level outcomes, which is the right place, because that's where I think AI is going to make the biggest initial impact. But being selfish in my world, when we start looking at kind of DevOps outcomes, platform engineering, you've got to start asking the questions: when does GenAI, when does AIOps, when do these types of outcomes start getting driven into what IT and platform engineering is going to build? The biggest question is, how do you get people who have been really good for years at working with server and storage and networking and hypervisors and now Kubernetes, how do you get them understanding where AI can impact their daily life? That's a different question, I think, than some people are asking right now. Where's the progress bar in your community around that piece? Because we were talking on our intro today, on the keynote analysis, about how Kubernetes moved from, here's how you get it working, to now you're starting to see the next level of questions to answer, which is make it secure, you're running stuff on it, it kind of goes boring into the background, but still you're at the next level. What are some of the conversations that you guys are seeing in the DevRel community at Dell? Have you moved past the how it works to more of what's running on it and how does data fit in there? What are the core platform engineering questions that are being worked on right now in your world? From our world, first of all, when we look at platform engineering, it's really about taking IT and trying to have IT figure out how do they build this platform? How do they make a cloud-like experience with all the guardrails in place for their end users to consume?
But that means IT has to almost become like a product organization, and they've got to look at what they're owning and operating as a true platform as a product. Well, that puts a different twist on it than if you're in full-on firefighting support mode as an IT ops person. So when you look at it through that realm, Kubernetes is an enabler, it's a great enabler. We all know, we've said this for many years now, Kubernetes won, but Kubernetes is still not easy. And because of that, there's probably a large percentage of the customer base that's figured out how to use Kubernetes, but they've done that through sheer power of will and hands-on keyboards. They have not done it through optimizations, through automation; they're still figuring out how to truly make this easy to consume. Unfortunately, they were able to find the people to do that. Brad, I've got to ask you, because you and I have talked about this in the past, but never really teed it up in a way where it could be compared and contrasted. What's the difference between DevOps and platform engineering? Are they one and the same, are they interchangeable? Is platform engineering just a more mainstream name? Is it less hardcore? People have been talking about the difference between DevOps and platform engineering. What's your perspective on this? It's an active debate right now. So I'll give you the Brad-slash-Dell answer on this. DevOps to us is an operating model. DevOps is not a person, right? It's a cultural shift to allow operations to become more agile, partner up with their end users, in this case, enable them with tools and platforms and whatnot. Well, in that DevOps culture, IT sometimes, if not always, was kind of left behind the scenes. You might have had your pipelining team with your automation team and your CI/CD team.
Well, platform engineering is finally saying, you know what, we've got to take all those IT things behind the scenes, pull them together with all those developer tools and the developer experience, and deliver an end-to-end complete platform that is engineered, i.e. platform engineering, fully engineered, bottom-up, top-down, and delivered as an actual product. That's the nuance: they're not the same thing, but platform engineering has to embrace the DevOps operating principles, agility, partnership, and all the different things that come from DevOps. And end-to-end, how would you describe what an end-to-end process looks like? Is that core to cloud to edge? Does that include all environments? What is the end-to-end? Multi-cloud, or supercloud in your terms, it's this notion that an enterprise-type customer, even commercial, mid-market, they're going to be in multiple public clouds. They're going to have edge. They're going to have core data centers. They're going to have colo, cloud-adjacent-type deployment scenarios. Well, they don't want to operate seven or eight different operating models. So how do you create that layer of abstraction to create the platform across a multi-cloud world? By the way, we're going to have a Supercloud 5 event in December, first week of December, actually last week of November, during the week of AWS's conference. Microsoft Ignite is going to happen next week. So you've got two big cloud shows coming up. This will be the discussion. So a plug for Supercloud 5 in Palo Alto; we'll be at re:Invent as well. But I want to ask you that question, because, okay, by the way, I buy that argument, end-to-end. I hear that in AI too. It's got to be end-to-end, not just the model, go end-to-end. But there's a skills gap challenge in IT as you start to bring this together. How would you describe or talk about the skills needed to do this?
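Brad's point about not wanting seven or eight operating models is essentially an abstraction-layer pattern: one interface, one adapter per environment. A minimal sketch of that idea, with entirely hypothetical class names (`AWSProvider`, `EdgeProvider`) standing in for real SDK integrations:

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """One contract that every environment adapter must implement."""
    @abstractmethod
    def deploy(self, app: str, region: str) -> str: ...

class AWSProvider(CloudProvider):
    def deploy(self, app, region):
        return f"aws:{region}:{app}"   # a real adapter would call the AWS SDK

class EdgeProvider(CloudProvider):
    def deploy(self, app, region):
        return f"edge:{region}:{app}"  # a real adapter would target an edge cluster

class Platform:
    """The abstraction layer: callers use one operating model, never a provider directly."""
    def __init__(self, providers: dict[str, CloudProvider]):
        self.providers = providers

    def deploy(self, target: str, app: str, region: str) -> str:
        return self.providers[target].deploy(app, region)

platform = Platform({"aws": AWSProvider(), "edge": EdgeProvider()})
print(platform.deploy("edge", "inference-svc", "us-west"))
```

Adding another cloud or a colo site then means writing one more adapter, not learning another operating model.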
And is there a skills gap in this new IT world that's emerging, call it platform engineering, which I think is a good way to describe the new IT, in my opinion; Rob might have his own opinion, but I think he would share that. What's your take on the skills gap in platform engineering? So the skills gap is no different than it's been for a while now, except it's getting worse. Right up until, I want to say, 2023, we've talked about automation and Kubernetes skills gaps and observability skills gaps and all that. Now enter AI. The platform folks have to deal with AI. So the reality of it is the skills gap is getting worse. We're not all of a sudden magically finding tens of thousands or hundreds of thousands of skilled workers out there to go help this world. How do we solve that problem? I mean, we as Dell have a few answers. Obviously, last segment I know we chatted about things like the APEX cloud platforms, giving you the easy-button approach to consuming infrastructure. But I think there's also the DIY enablement approach, right? There's the easy button and there are DIYers, and we as Dell are going to try to help both. But are we going to find enough people to operate both models? That's going to be a question. Yeah, and I think you hit on something right there with the AI. I think one of the themes out of the keynote this morning, where two of your partners, Intel and Nvidia, were up there, was actually making Kubernetes work with the hardware. I mean, this is where it would seem Dell comes in: how do you make those connections as you build microservice-oriented architectures and deployments and containers and you take it to the edge to do inference? It would seem that Dell is positioned pretty nicely right there. And I think that's where we're going to see a lot of success, right? Whether it's from an AI angle or even just core Kubernetes.
How can we help our customer base consume Kubernetes on top of optimized compute and storage and networking, but do it in a way that you can run whatever distribution you want, wherever you want, whenever you want? That's really going to be the end state for us. And then how do we enable it with more intelligence and AI features under the covers, so you need fewer people to operate all these distributions you want to run? In the supercloud world, you call it multi-cloud or hybrid, where it goes across clouds. It's the same thing, but you're in it; you've got this abstraction-layer opportunity to help customers have that single pane of glass. It's a very hard problem for all the reasons we all know, latency and other things. And so a lot of people aren't really moving there too fast, but it's being developed quickly. What's it going to take, in your mind and Dell's perspective, to get there faster? Is it going to come from more AI workloads? Is it going to come from automation? Because again, AI is a gift to automate things. Generative AI in particular is going to help maybe scale best practices or help with security. Where do you see AI enabling this acceleration to maybe get to a control plane or a management layer or abstraction? What's your vision on this AI integration? The reality is, AI is a tool. So we've got to take one step back sometimes, because a lot of people build AI up as this magical, mystical thing. Well, guess what? AI only knows as much as you let it know. And if AI does not understand your business processes and your IT processes, it might understand how to run kubectl commands, but that does not mean it can apply your specific processes and regulatory-compliance-aligned policies to that world. So finding the translation of business-level intelligence into AI models, I think that right there is when we're going to see things really take off from that platform-engineering-enabled world. I mean, I think it's going to be fun to watch.
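The distinction Brad draws, an AI that can emit kubectl commands but doesn't know your compliance policies, suggests gating generated commands behind a local policy check before anything runs. A hypothetical sketch; the allowed verbs and namespace names are invented for illustration:

```python
import shlex

# Invented example policy: AI-generated commands may only be read-only,
# and may never touch regulated namespaces.
ALLOWED_VERBS = {"get", "describe", "logs"}
RESTRICTED_NAMESPACES = {"kube-system", "payments"}

def policy_check(command: str) -> bool:
    """Return True only if a generated kubectl command passes local policy."""
    parts = shlex.split(command)
    if not parts or parts[0] != "kubectl":
        return False
    verb = parts[1] if len(parts) > 1 else ""
    if verb not in ALLOWED_VERBS:
        return False
    # Reject anything aimed at a restricted namespace.
    for flag in ("-n", "--namespace"):
        if flag in parts:
            idx = parts.index(flag)
            if idx + 1 >= len(parts) or parts[idx + 1] in RESTRICTED_NAMESPACES:
                return False
    return True

print(policy_check("kubectl get pods -n dev"))        # allowed: read-only, open namespace
print(policy_check("kubectl delete pod web -n dev"))  # rejected: not read-only
```

The model can suggest whatever it likes; only commands that survive the codified policy ever execute, which is the "business-level intelligence" translation in miniature.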
You know, I remember when we were at KubeCon in Amsterdam, a lot of the talk tracks were already submitted before ChatGPT came out. So it was very interesting to see that there weren't a lot of sessions on AI, but it certainly dominated the conversation. And one of the things that came out, and still comes out now, is you don't want hallucinations on the network. You don't want to have any kind of issue with AI. So people have been cautious, and they're saying that where you have known data is a good place to start. What do you guys see in your community, in the DevRel side of the community, voting with their code, so to speak? Where's the low-hanging fruit that you guys see for jumping in and getting going? So, going back to the last point I was making, we need the people in the kind of IT ops, platform engineering, DevOps world to continue writing the automation. Continue taking what you do on a daily, hourly, per-minute, per-second basis and figure out how to codify that. Until you codify that, no AIOps, AI thing is going to be able to help you. So the first thing, when we talk to customers all the time, is: have you gone down your infrastructure-as-code road yet? They're like, we're doing some of it, not all of it. You should be all in on that at this point. The second thing is, have you started looking at observability? Because AI is only as intelligent as the data continuously coming into it, which is the telemetry and the logs and all the other stuff. So if you are not figuring out your observability story, with your policies being converted into infrastructure as code, AI is never going to help you in the long run. Yeah, we could go down that route on data and data observability for hours. But again, we talked back in Amsterdam, I think it was Feb, or was it April? I can't remember. Actually, April. Okay, it was April. It felt like February, but it was cold.
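The two prerequisites Brad names, codify your processes as infrastructure as code and keep observability data flowing in, can be illustrated with a toy drift check: declared intent on one side, telemetry on the other, and a reconciler (or later an AIOps tool) acting on the difference. The service names, fields, and values below are hypothetical stand-ins:

```python
# Invented example of codified intent: what infrastructure-as-code declares.
desired = {
    "web":   {"replicas": 3, "cpu_limit": "500m"},
    "queue": {"replicas": 2, "cpu_limit": "250m"},
}

def observe() -> dict:
    """Stand-in for a live telemetry source (metrics, logs, inventory APIs)."""
    return {"web":   {"replicas": 2, "cpu_limit": "500m"},
            "queue": {"replicas": 2, "cpu_limit": "250m"}}

def drift(desired: dict, observed: dict) -> list:
    """List every field where observed reality disagrees with codified intent."""
    findings = []
    for svc, spec in desired.items():
        for key, want in spec.items():
            have = observed.get(svc, {}).get(key)
            if have != want:
                findings.append(f"{svc}.{key}: want {want}, have {have}")
    return findings

print(drift(desired, observe()))  # ['web.replicas: want 3, have 2']
```

Without the codified `desired` state there is nothing for any automation, AI-driven or not, to compare the telemetry against, which is exactly the point being made.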
But again, help people understand, because I think they probably don't think of Dell first when they think DevRel. Why Dell? Why Dell DevRel? Honestly, it's because anytime you're thinking, I need to build a platform, Kubernetes is important to me. Well, to make Kubernetes work, to run applications, every application has requirements: CPU, memory, GPU now, disk, performance, security, you name it. You are always going to be reliant on the infrastructure stack no matter what you do. Well, we're there as Dell to help you make sure that when your application defines its requirements, we can satisfy whatever those requirements are, no matter the use case, no matter the locality you want to go after. So from a Dell perspective, we need to tie into the people writing that code who are thinking about how do we define our applications. Should we use Mongo? Should we use Cassandra? What's the impact of one database versus another? One messaging system versus another? Does that change my storage footprint? Does it change my CPU needs? It all comes together. And do you want to learn the hard way? Or do you want to have us by your side helping you make it easier? Brad, great to have you on theCUBE. We've got to leave it there, but in the last minute we have left, give a quick plug for what you've got going on. developer.dell.com is the site. What's going on in the community? Give a quick commercial and update. So developer.dell.com has been taking off, with great hands-on labs, blogs, and access to all our API documentation. Great website; there's also a newsletter we're creating that you'll be able to sign up for. If you're here at KubeCon, by the way, stop by the booth. We're giving out some really awesome swag. Hopefully you'll see it later. And honestly, just reach out to us if you ever have any topics you want to talk about: community, Dell, or anywhere in between. Brad, thanks for coming on. DevOps Portfolio and DevRel are our favorite topics here at KubeCon.
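Brad's point that every application declares requirements (CPU, memory, GPU, disk) the infrastructure stack must satisfy can be sketched as a simple capacity match: declare what the app needs, then ask which tiers can cover it. The node names and capacities below are invented for illustration:

```python
# Invented infrastructure inventory: an edge node and a core data center node.
nodes = {
    "edge-01": {"cpu": 8,  "mem_gb": 32,  "gpu": 1},
    "core-01": {"cpu": 64, "mem_gb": 512, "gpu": 8},
}

def candidates(requirements: dict, nodes: dict) -> list:
    """Return the nodes whose capacity covers every stated requirement."""
    return [name for name, cap in nodes.items()
            if all(cap.get(k, 0) >= v for k, v in requirements.items())]

# Hypothetical workloads: inference fits at the edge, training needs the core.
inference = {"cpu": 4,  "mem_gb": 16,  "gpu": 1}
training  = {"cpu": 32, "mem_gb": 256, "gpu": 4}

print(candidates(inference, nodes))  # ['edge-01', 'core-01']
print(candidates(training, nodes))   # ['core-01']
```

Swapping Mongo for Cassandra, in this framing, just changes the numbers in the requirements dict, and the placement answer may change with them.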
With the innovations happening, I'm John Furrier, with Rob Strechay, Savannah Peterson, Dustin Kirkland, and Joe Peterson all here, getting all the data to share with you. We're covering open source live here on the show floor. We'll be right back after this short break.