Hello and welcome to this KubeCon and CloudNativeCon preview on theCUBE. I love the naming conventions. I'm Dave Vellante, Chief Analyst with Cube Research, and I'm joined by Joe Fernandez, VP and GM of Hybrid Platforms at Red Hat. Red Hat is the gold standard for open source contributions, has the premier business model, and is really the envy of the open source community. Joe, thanks for spending some time with us, really appreciate it. Yeah, thanks for having us, Dave. Yeah, you bet. All right, so Red Hat's a diamond sponsor at CNCF's KubeCon and CloudNativeCon. Of course, you're an anchor sponsor of theCUBE as well. Thank you for that. You guys are always supporting the open source community and helping us educate and inspire people. What are you looking forward to at this year's event? Yeah, so we've been part of the Kubernetes community since the beginning, right? It just hit nine years this summer since Kubernetes was launched, so it'll be the 10-year anniversary next year, and I think KubeCon's been going on just as long. So first and foremost, looking to see what people are working on in Kubernetes, right? That's still the anchor project. But what's been happening is, there's this whole vibrant ecosystem of projects that continues to explode around Kubernetes, in different areas, whether that's management or security or AI or DevOps and so forth. And I'm really interested to see that, because as Kubernetes has gotten more stable, the pace of change in the core has continued to slow down. That's a good thing. It means more stability for the core. But then what's taking its place is a growing number of open source projects and new innovations that are solving problems across this cloud-native ecosystem. And it's definitely getting more stable. You say that, and I remember, going back eight or nine years, it was pretty basic, and the focus was on simplicity and getting adoption up.
And then the original committers made some tough decisions: let's mature it over time, let's not try to do too much at once. And now you're seeing the impact of that. You mentioned projects. Are there any specific projects that you're really excited about, or that you want to double click on when you're in Chicago? Yeah, there are a ton of interesting projects going on, right? From projects that are close to the core around observability, Prometheus, and things like Jaeger for tracing and the OpenTracing project, to things around DevOps. So Tekton, which is what we use to drive OpenShift Pipelines. Argo CD is an interesting project. We're seeing a lot of adoption of GitOps, and Argo's right in the center of that. And then newer things around security, like securing the software supply chain. So Sigstore is a project that we're involved in, around artifact signing and helping you build out software bills of materials. So really it's not one thing, it's a wide assortment. Yeah, you mentioned several. I remember when Argo first hit the scene, and I said, wow, this has a lot of potential. Sigstore, we were really excited about that as well. Joe, you just got back from a customer trip in Europe. And in 2023 we've seen this continued deterioration of spending expectations from CIOs, so we've seen a significant focus on efficiency. So Joe, my question is, how is the community addressing that? You've probably heard about it on your trip. And maybe, how is Red Hat helping customers save money in these tough times? Yeah, you mentioned the trip, right? What struck me is a couple of things. One is just how explosive the growth continues to be, right?
So customers who started out with maybe a single Kubernetes cluster or a small number of clusters have continued to grow those clusters, and grow the number of clusters, as they bring more applications and more use cases to the platform. And so they need help managing at scale, right? So we're invested in multi-cluster management solutions and observability, as I mentioned, helping customers manage the scale that they'll need to meet the needs of their applications. The other thing is a lot of focus around automation, right? So again, I mentioned Argo CD. I must have heard GitOps from at least two thirds of those customers, and they're using it in different ways. They're leveraging GitOps to automate the configuration and deployment of applications that they're bringing onto Kubernetes through OpenShift, and then they're using it to deploy OpenShift clusters themselves, right? So infrastructure-as-code type use cases for automating the deployment of new clusters, and addressing things like disaster recovery scenarios: if a cluster goes down, how do you rebuild or reconstruct it? And I'm super interested in that because, obviously, people have been doing CI/CD and automation for a while, but there are new solutions now, new approaches that are more Kubernetes-native, cloud-native, and it's another area where people are modernizing. Yeah, and you mentioned multi-cluster at scale. Again, that's an example of something the community really wasn't focused on early on, but now it's critical. Recovering multiple clusters at scale is just so complicated. And that's something the community has been hard at work on, hardening Kubernetes, and that helps it keep going even more mainstream, doesn't it? Yeah, absolutely.
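The GitOps pattern Joe describes, desired state declared in Git and an agent converging the live system toward it, can be sketched in a few lines. This is a minimal illustration, assuming a dict stands in for both the Git repo and the observed cluster state; it is not Argo CD's actual interface:

```python
# Minimal sketch of a GitOps-style reconcile loop: desired state lives in
# "Git" (here, a dict), an agent diffs it against observed state and emits
# the actions needed to converge. Argo CD implements this pattern against
# real Kubernetes objects.

def reconcile(desired: dict, observed: dict) -> dict:
    """Return the actions needed to converge observed state onto desired."""
    actions = {}
    for name, spec in desired.items():
        if observed.get(name) != spec:
            actions[name] = ("apply", spec)   # create or update drifted objects
    for name in observed:
        if name not in desired:
            actions[name] = ("delete", None)  # prune objects removed from Git
    return actions

# Desired state as it would be declared in a Git repo (illustrative names).
desired = {"frontend": {"replicas": 3}, "backend": {"replicas": 2}}
# Observed state: backend has drifted, and an orphaned job should be pruned.
observed = {"frontend": {"replicas": 3}, "backend": {"replicas": 1}, "old-job": {}}

actions = reconcile(desired, observed)
```

Note that the disaster recovery case Joe mentions, rebuilding a lost cluster, is the same loop with `observed = {}`: everything declared in Git simply gets re-applied.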
And then, within the cluster, things like: what's inhibiting scale, right? So there have been a lot of advances in Kubernetes networking, making sure that networking across all the containers and services in your cluster scales to the rate that you need, but then also getting traffic into the cluster, right? Your ingress, your load balancers, and then across multiple clusters. So, scaling in terms of how you're managing it, but then also scaling the capabilities of the platform itself. So you mentioned security. Supply chain security is obviously a big topic, and compliance is another one. Privacy, compliance, and security are kind of two sides of the same coin. What are you seeing there? Yeah, so they do go hand in hand, right? Obviously Red Hat made an investment in this space with our acquisition of StackRox, which drives our Advanced Cluster Security solution. We also brought Advanced Cluster Management over as a solution from IBM, and that's part of our OpenShift portfolio. So a couple of things. First, looking at security from the applications all the way down to the core platform itself: Kubernetes, the Linux kernel, and so forth. That's what we're doing with ACM and ACS, basically helping people ensure the security and the compliance of their platforms, whether they have CIS benchmarks, PCI compliance and so forth; they want to make sure that they're running on a compliant platform. And then, on the applications side, moving from just vulnerability scanning of your container images to being able to look at running containers and look for vulnerabilities there, but then also shifting that left to build it into your CI processes.
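The shift-left move Joe describes, failing the build when an image scan exceeds a severity threshold rather than discovering it in production, reduces to a simple gate in CI. A sketch, assuming an illustrative scan-result shape; a real pipeline would consume output from a scanner such as the one in Advanced Cluster Security:

```python
# Sketch of a shift-left CI gate: block the pipeline if an image scan
# reports vulnerabilities at or above a chosen severity. The finding
# structure here is illustrative, not any particular scanner's format.

SEVERITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def gate(findings: list, fail_at: str = "HIGH") -> bool:
    """Return True if the image passes (no findings at fail_at or above)."""
    threshold = SEVERITY_RANK[fail_at]
    return all(SEVERITY_RANK[f["severity"]] < threshold for f in findings)

findings = [
    {"id": "CVE-2023-0001", "severity": "MEDIUM"},
    {"id": "CVE-2023-0002", "severity": "LOW"},
]
assert gate(findings)       # passes: nothing HIGH or above
findings.append({"id": "CVE-2023-0003", "severity": "CRITICAL"})
assert not gate(findings)   # fails: the build stops before the image ships
```

The design point is that the same policy that would reject a running container is evaluated at build time, where fixing it is cheapest.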
And so what we're seeing is things like Tekton and Argo come together with Sigstore to create a more automated process for securing the supply chain, so that when you're building those images, you're building them securely up front, and then you're building software bills of materials around the contents of those images, so you can ensure an image is secure once you put it out to production. And simplicity, this is a critical point, because the cloud created a new layer, kind of the first line of defense, if you will, in security. And it also created more complexity, because you've got shared responsibility models, you've got multiple clouds. And then coming into the organization, the development team is being asked to do more, you know, shift left, as you call it, right? So securing the code. That's not traditionally their wheelhouse. So it's got to be simple, it's got to help them be as accurate as possible. What's the sentiment amongst developers? Is the sentiment like, okay, I've got another thing to do? Or, I know they understand the importance of it, but it's got to be made simpler, and that's really the initiative? Yeah, I mean, I think we're putting a lot on developers these days, right? We moved from Dev and Ops to the whole DevOps movement, and we're like, okay, now it's DevSecOps, you've got to think about security and so forth. And it really puts a lot on the developer's plate when they have to start thinking about all these things, because ultimately their focus is, how do I build this application to solve this business problem? That's what they're thinking about first and foremost, not all the operational or security concerns, right? So I think that's where a platform approach can help.
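The verify-before-production step Joe describes, recording what went into an image at build time and checking it hasn't changed at deploy time, comes down to comparing cryptographic digests. A minimal sketch of just that comparison; Sigstore's tooling handles the real signing and transparency-log work on top of it:

```python
import hashlib

# Sketch of the digest check at the core of supply chain verification.
# At build time, record a digest of the artifact alongside its SBOM entry;
# at deploy time, recompute and compare before admitting it to production.

def digest(artifact: bytes) -> str:
    """SHA-256 digest of an artifact's contents, as a hex string."""
    return hashlib.sha256(artifact).hexdigest()

def verify(artifact: bytes, recorded_digest: str) -> bool:
    """True only if the artifact is byte-for-byte what was built and recorded."""
    return digest(artifact) == recorded_digest

# Illustrative build output: artifact bytes plus the recorded digest,
# which in a real pipeline would be signed and stored with the SBOM.
artifact = b"container image layer contents"
recorded = digest(artifact)

assert verify(artifact, recorded)                  # untampered artifact passes
assert not verify(b"tampered contents", recorded)  # any modification is caught
```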
You've seen that in the rise of platform engineering as both a discipline and something that many customers are asking about. So the platform engineering team's job is to provide platforms and services to those developers so that they can, again, focus on what they do best, and leverage the platform and a platform-based approach to manage things like operational concerns, security concerns, scale and so forth. So they don't have as much on their plate and they can get to productivity much faster. So you think we're going to see DevSecDataAIOps soon? Is that what's coming? Yes, for sure. Yeah, I think AI just adds another wrinkle to that. What were the conversations like in Europe around AI? Yeah, so with ChatGPT, generative AI, large language models... AI has been around for a long time, but over the last year it's certainly exploded. A lot of that focus right now has been on the consumer side: what's Microsoft doing with OpenAI? What's Google doing in response? Is this going to be built into search? So it's really consumer-oriented, but we know it's going to have just as big or a greater impact on the enterprise, right? So I think for enterprises, it's part of their modernization strategy. Enterprises for the last decade have been focused on how they're going to not only build new applications, but modernize the applications that they have in place, right? And largely they're moving from traditional architectures, whether monolithic or n-tier styles like Java EE or .NET, to new microservices, cloud-native style architectures. That is still ongoing. There are still a lot more legacy applications in the enterprise than there are cloud-native apps.
But now you have a new problem to solve, which is: how can I infuse more intelligence into those apps, leveraging the data that I have and using AI as an enabler for that, right? So it becomes another characteristic of the application, and it's something we're doing in our own products. We have projects in both OpenShift and Ansible to bring AI in and provide a better experience to our end users. Customers want to do the same thing for their applications, regardless of the industry they're in. And as I mentioned up front, of course, we all know about the cost pressures these days. It's not like top-line IT spend is growing, it's not like CEOs are throwing money at the IT department and saying, oh yeah, go do AI. Essentially what's happening, based on the data that we see from our ETR partners, is that AI spending is going up while everything else is maybe still soft, so AI is stealing from other budgets, or trying to. So there's a lot of experimentation going on. Did you see that in Europe, that kind of focus on experimentation for AI? Are they moving into production? Are you seeing any sort of activity in the US that gives you an indication that we're going to start seeing that AI tide lift all boats? Yeah, so I think we'll see, and I can't really comment on the macro economy, but certainly everybody's under pressure to do more with less, right? And I think Europe's no different from what we see here in the States. But they also know they can't stop innovating, right? So they have to make prioritization decisions and trade-offs. You can't just put a major technology breakthrough like generative AI aside and say, well, I'll get to it when I can afford to spend there, right? So yeah, I think those trade-offs are happening, and people are figuring out how to free up resources, who's going to do this work, and whether they're going to build or buy managed services and so forth. So yeah, we're certainly seeing that happen.
But then also, we're seeing that the same teams that are enabling those application developers are going to be asked to enable the data scientists as well, right? Today, some of our largest customers have hundreds, if not thousands, of developers, and they need those teams, the DevOps teams, the platform engineering teams, to support those developers. Where in the past they may have had a proportionally small number of data scientists, as investment grows, those teams grow, right? And so those teams are going to need a lot of the same things: access to infrastructure, access to tools, figuring out how to automate what they do. And so you start hearing more about MLOps-type workflows. And then, just like applications, they need to run that stuff everywhere, right? Even more so. So I think AI is one of the killer workloads for the hybrid cloud, because AI needs to run where your data lives, and data lives everywhere. Yeah, so I wanted to ask you about hybrid, it's in your title. Certainly in the pandemic we had the forced march to digital, and the cloud was critical, and Kubernetes was critical in terms of moving workloads to the cloud, super helpful. And now, I'd love your thoughts on this: the data that we're seeing says we're kind of reaching, I wouldn't say an equilibrium, I mean cloud-native and public cloud are still outpacing on-prem, but we're definitely seeing more of a balanced approach. And Kubernetes, of course, and containers support that balance by making it simpler to move stuff where it belongs, right? And move the work to the data, as you're pointing out. What are you seeing in terms of that hybrid equilibrium? Yeah, so we've been talking about open hybrid cloud now for more than a decade, right? We're working with customers to help them accelerate the move of applications to the public cloud.
But we always knew that there wasn't going to be one destination for all enterprise apps, because enterprise customers have thousands and tens of thousands of applications. So we're certainly seeing growth in the cloud that's outpacing growth in the data center, but we're still seeing a lot of gravity in the data center. And then, as people move more aggressively to the public cloud, you see a growing number of multi-cloud strategies. More and more customers that I run into have large contracts not only with one provider, say AWS, but with AWS and Azure, Azure and Google, or Google and AWS and so forth, with IBM Cloud in that mix, Alibaba, and other regional providers as well, especially in places like Europe and Asia. So multi-cloud is a thing now. Edge is becoming a thing, in terms of moving those workloads out to the factory floor for manufacturers, or out to retail locations, or out to cell phone towers for telco providers. And that's just on the application side. Then you basically say, okay, now you're introducing AI as an application. What's the benefit of AI, right? It's working on your data. So if you have to collect data and ship it back to your data center, or back to the closest AWS or Azure region to process it, you're incurring latency or incurring costs. And ultimately you're slowing things down, right? So people want to move those AI workloads right to where the data is generated. Again, in the Edge example I mentioned, we're working with manufacturers like ABB and Bosch around factory floor automation and how to bring an Edge platform to where manufacturing is happening. But ultimately what's going to run on that platform, predominantly, is going to be things that work on the data being generated there. I'm glad you brought up the Edge, because I was going to ask you, that's part of your scope in hybrid.
And you're seeing different infrastructure requirements, but your software can go there. Yeah, yeah. I'm not sure every data center box is going to go there. Actually, I'm quite sure not every data center box is going to go there. We're seeing a whole suite of new types of systems-on-chips, Arm, low-cost and low-power designs. It's a dramatically different set of requirements, but it's interesting how open source generally, and certainly Red Hat's products, whether it's Linux or OpenShift, fit in that environment as an enabler. Yeah, I tell customers all the time, right? I don't know what workloads they may run at the Edge, but I know one thing that'll be right in there: it's Linux, right? And Red Hat's been the Linux company for 30 years. So Linux is going to be the platform people want to run for their Edge workloads. Linux these days means Linux containers, and Linux containers bring you to Kubernetes, right? And so we have a solution for Edge servers, and then we're just launching a new solution this fall called Red Hat Device Edge that'll allow you to run containers on Edge devices. Edge devices would be IoT devices, gateways and such. And again, this is what customers are demanding, and what we need to do is figure out how to shrink that platform down to fit the footprint where it needs to run. So if you're running on an Edge server, that's going to be a single-node OpenShift Kubernetes cluster running at that location. If you're on an Edge device, it might just be Linux itself. So we have RHEL for Edge as part of Red Hat Device Edge, where you can bring a container directly to Linux, or, if you need Kubernetes to manage those containers, we have a form factor called MicroShift, which is a tiny version of OpenShift that runs on RHEL for Edge, again for those IoT device type use cases.
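Joe's footprint-matching logic, single-node OpenShift on an Edge server, MicroShift or plain RHEL for Edge on a device, amounts to a small decision table. A sketch of that mapping in Python; the footprint names and the rules are an illustration of what he describes, not official Red Hat sizing guidance:

```python
# Illustrative decision table for matching an Edge footprint to a platform,
# as described in the interview. Categories are assumptions for the sketch,
# not product sizing rules.

def edge_platform(footprint: str, needs_kubernetes: bool) -> str:
    if footprint == "edge-server":
        return "single-node OpenShift"            # full Kubernetes at the site
    if footprint == "edge-device":
        if needs_kubernetes:
            return "MicroShift on RHEL for Edge"  # tiny OpenShift for devices
        return "RHEL for Edge"                    # containers directly on Linux
    raise ValueError(f"unknown footprint: {footprint}")

assert edge_platform("edge-server", True) == "single-node OpenShift"
assert edge_platform("edge-device", False) == "RHEL for Edge"
assert edge_platform("edge-device", True) == "MicroShift on RHEL for Edge"
```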
And so we're definitely seeing that, and then management comes with that, because you have to manage all those deployments at scale. It is one of the biggest trends that we're going to see in the next 10-plus years. The Edge is going to explode. It's an enormous market. The economics are going to shift. And of course, as always happens, it'll creep back into the enterprise and disrupt things again, and open source will be there. Absolutely. And it's just another footprint in the hybrid cloud, right? It's just another example of the fact that applications aren't all going to run in one place. They're going to continue to go where they need to go to serve the needs of the business. Well, Joe, welcome back to the States. Thanks so much for spending some time with us. Really appreciate it. Thank you so much. Thanks for having us. You bet. Okay, KubeCon and CloudNativeCon, November 6th through 9th in Chicago. theCUBE will be there. Stop by and see us. This is Dave Vellante. We'll see you next time.