Welcome back, everyone. theCUBE's live coverage here in Boston of Red Hat Summit 2023, with AnsibleFest folded in. I'm John Furrier, host of theCUBE, with Rob Strechay, analyst, breaking down all the action with me. We've got a great guest here, a CUBE alumni: Ashesh Badani, Senior Vice President and Chief Product Officer of Red Hat. Ashesh, great to see you. We did a little preview interview, we were kind of teasing it out, but now it's happening. Welcome back to theCUBE for the big event. Great, thank you for having me back, John. Good to see you. All right, so a slew of announcements. Yesterday was the big day. Day two here, a lot of stuff came out, a lot of Ansible-heavy stuff on the front end. Today's about scale, efficiency, and edge. We talked about one of your customers in the automotive area, energy, open source is booming. What's your favorite announcement? Take us through the highlights. Give us the top announcements. Well, it's hard to pick your favorite child. Yeah. I don't want to do that. You could talk about AI, right? You've got to talk about AI these days, right? You can't have a conference without an AI conversation. So I think those were really good conversations to have. Lots of interest in that space from us, right? We can see that clearly. I think what's fascinating to also see is some of the areas that you typically don't expect Red Hat to be making announcements in, where we've made some really good forward progress. So for example, developer experience, I think, was great, right? I've actually had customers come up to me and be like, wow, I was so glad to hear you're doing this work off of the Backstage project. We actually use it, and now I can go off and start taking advantage of what Red Hat's bringing to bear. So I thought the work we're doing on the developer side was fantastic. And then another area, which I guess most of you wouldn't call sexy necessarily, but is so critical, is the secure software supply chain.
I know you had Vincent Danen come on and talk about that as well. Critical work, and it's work that we've been doing inside of Red Hat that we're now going off and exposing to customers. Well, Christian Uber — his name being Uber when he works in the automotive area is ironic — but he's one of your top customers. And he's doing the edge. He actually brought up that they're trying to enable bring-your-own foundation model in automotive. And he highlighted the transparency of the software supply chain as one of the core elements. This has been a big conversation in open source communities: the visibility, is it accurate, the changing, dynamic nature of what's in the bill of materials for the code. Why is it so important as a product owner looking at this, staring down the barrel of this really important area? What's going on? What are the priorities? What don't people know about? I think the ability for us to really dramatically reduce the amount of time it takes to start onboarding application environments is huge. So if you say, wow, something like this took me weeks if not months to do, can I get this done in days? Something that took me multiple steps to do, can I do it in a few clicks? 600-plus lines of YAML code to onboard an application and bring it in — can I do that in significantly less time? That's our vision around that. So that's number one. Number two, once you've done that, on the other end of it, you're going to make sure you're auto-generating, for example, software bills of materials — SBOMs. Critical from a customer perspective to do. And then be able to also incorporate all the other technologies that companies find so valuable. For example, we've invested pretty heavily with others in the community in Sigstore. Be able to start incorporating digital signing and verification of signatures, attestation, and provenance of software components all the way through the supply chain. Again, it's plumbing work. It's the kind of work Red Hat does, right?
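To make the signing-and-verification idea concrete, here is a minimal Python sketch of the pattern: digest an artifact, sign the digest, verify before trusting. It is only an analogy — Sigstore itself uses keyless signing with short-lived certificates and a transparency log, not a shared HMAC key — and the component name is invented.

```python
import hashlib
import hmac

# Stand-in signing secret; real Sigstore signing uses ephemeral certificates,
# not a shared key. Everything here is illustrative only.
SIGNING_KEY = b"demo-key"

def digest(artifact: bytes) -> str:
    """Content-address the artifact, as an SBOM entry or registry would."""
    return hashlib.sha256(artifact).hexdigest()

def sign(artifact: bytes) -> str:
    """Producer side: sign the artifact's digest."""
    return hmac.new(SIGNING_KEY, digest(artifact).encode(), hashlib.sha256).hexdigest()

def verify(artifact: bytes, signature: str) -> bool:
    """Consumer side: recompute and compare before trusting the component."""
    return hmac.compare_digest(sign(artifact), signature)

component = b"libfoo-1.2.3.tar.gz contents"  # invented component
sig = sign(component)
print(verify(component, sig))              # True: untampered
print(verify(component + b"x", sig))       # False: tampering detected
```

The point of the sketch is the shape of the check, not the crypto: every component carries a verifiable link back to its producer, all the way through the supply chain.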
This is the chop-wood, carry-water work that we do, which we think is critically important, that we can now make available to customers. I think that has been one of the themes I've picked up on this week: how do you make it simpler for people? Coming out of KubeCon, where they're still putting a lot of YAML code up in their keynotes, I thought it was great during your keynotes yesterday where it was like, hey, look, no YAML for the data scientists — for OpenShift Data Science to integrate back into that platform and back in through Ansible. Has that been a big goal of yours, with new personas coming in, especially in platform engineering, as you mentioned yesterday? Is that really one of your key personas? I think you caught on to that really well, because the point of, let's say, Ansible Lightspeed is to say, well, you're stuck with the YAML — how can we make it easy so you can put in plain text and then get something back? The conversation with someone is like, well, can't I do that in ChatGPT? Well, sure you can, right? But being able to give you something that's domain-specific is much more powerful. The next step that customers are actually asking us about is to say, well, I've got a bunch of playbooks already built in-house, right? A large financial services customer, a big Ansible user: can I train that model directly in-house? So now we give you both, right? We give you OpenShift Data Science, right, for our OpenShift AI solution, the MLOps platform — run your own model, customize it in that environment — and then on the other end of it, make sure that you can use it, for example, applied directly to areas like Ansible. Rob was talking last time we were in Vancouver for OpenShift Summit about YAML in every demo, and yesterday you commented, finally, no YAML in a demo. Why is YAML still hanging around when it comes to interfaces?
And why does it take so long to abstract it away? Because it's super important, everyone recognizes that, but then it's also a lot of syntax. Like, is there going to be an abstraction layer? Is it going to be voice? Well, we always do that, right? We always provide an abstraction layer, and then immediately people are like, wow, that's really opinionated, that doesn't necessarily work exactly my way. What can I change or tweak? And the moment we say that, now you've got to get away a little bit from the GUIs, right, give people access to CLIs, and then start tweaking the code. There's a balance to be had, and we're seeing that clearly: customers have gotten comfortable with SaaS, but I think of that really more as end-user applications. The moment it becomes related to anything around infrastructure or backend, customers want some amount of... Oh, they want to get in there — they can get in there themselves too. Sure, the user interface is key. And we're an open source company, right? So our fundamental belief is, hey, the code's open, you're able to use it. Hey, command line interface, whatever your choice is — I get that. How about the developer hub? Because that was an announcement. I want to know more about that. What's the positioning there? Is that going to be like a single source of truth? What's the positioning of it? Explain how you see that rolling out and being used by customers. So that came to us straight out of customer demand. So, John, I know you were at KubeCon, right? KubeCon is when we announced some upstream participation from us, right? In typical Red Hat fashion, right? Make sure we're going into the upstream. In Backstage, yeah. Sorry? In Backstage. In Backstage, yeah, exactly right. In Backstage, right?
And once we can do that, now we can go off and say, we'll provide a distribution for you, and then we'll provide a set of plugins that'll make it easier for you to use the tools our customers most commonly ask for, both within the Red Hat portfolio and outside it. So we'll go through a period of maturation with it, right? Just like we've historically done with every open source project. A great example is Event-Driven Ansible. The first time we released that was as a service preview at AnsibleFest last year, and now it's gone to GA. Get customer feedback, get input, go through maturation, and then take it out to GA. And I think we'll go through the same process with Developer Hub. I mean, I just ran into a couple of customers who said, hey, I didn't know if you knew — we're actually using Backstage. You know, so happy to hear you've got this, I'm going to start playing with it. Yeah, and what was the big attraction of Backstage, for the audience that's not familiar with it? Why are people liking it? What was resonating? And where's the future going to go, in your mind? I think there are a couple of factors. One is, with the pandemic, a distributed workforce, right? Developers were much more spread out than before, right? So you've got distributed environments, and then also just almost a cognitive overload — lots of tools coming their way. So one, how do you make sure teams that are spread apart can use something consistently? Two, how can you have a plugin model that allows for new tools and so on to be brought to bear? And then, of course, do it in a way that people can feel like, hey, I'm using something that's accessible to me, built by other folks with like-minded issues, and be able to engage in it. And so I think that's really how it started getting a lot of traction. I think what's interesting is that — and we had some discussions with some of the ecosystem partners as well.
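A rough mental model of the plugin pattern being described — one portal, many tools registered once and surfaced consistently to every developer — sketched here in Python purely for illustration. Backstage itself is a TypeScript/React project, and the class, plugin names, and status strings below are all invented.

```python
class DeveloperPortal:
    """Toy model of a plugin-based developer portal: teams register a tool
    integration once, and everyone gets the same consistent view of it."""

    def __init__(self):
        self._plugins = {}

    def register(self, name, provider):
        # A "plugin" here is just a callable that reports on one tool.
        self._plugins[name] = provider

    def status_page(self):
        # Aggregate every plugin's summary into one consistent view.
        return {name: provider() for name, provider in self._plugins.items()}

portal = DeveloperPortal()
portal.register("ci", lambda: "3 pipelines green")                 # invented examples
portal.register("clusters", lambda: "2 OpenShift clusters healthy")
print(portal.status_page())
```

The design point is the one made above: a distributed team doesn't adopt ten tool UIs, it adopts one portal, and new tools arrive as plugins rather than as new destinations.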
And I think it brings that cloud-like development environment to everybody. And I think a lot of the clouds all have their version of it, and that's their control plane, and almost a control point for keeping you in that ecosystem. Have you seen customers asking for that? And I mean, you're everywhere. The open source is one, and OpenShift is pretty much in every place, including not just the big hyperscalers, but even the regional sovereign clouds and things of that nature. Has that really been a push? Well, actually it's both, right? So one is, we obviously want to make it really easy to use with OpenShift, but use it with other environments too. I mean, we made a similar announcement, for example, with Advanced Cluster Security as a service, right? That's the technology we got with the StackRox acquisition. Use it via OpenShift, but use it in other, non-OpenShift Kubernetes environments as well, right? So the ability to give folks choice and say, you know, use the environment that you're most comfortable with, and we will support you. And obviously, hopefully, the set of products that we're releasing, which we want to make sure we have an integrated experience with, is a powerful thing. Service Interconnect, right? Yet another example, right? Which is, you know, developers say, I've got application microservices in this distributed environment, these clusters running in all these different places — some of which are OpenShift, some of which are other Kubernetes environments — how can I connect those up, right? And OpenShift AI, that's really generative AI for the hybrid cloud. That seems to be the positioning for the OpenShift piece. To be able to run generative AI models on OpenShift AI, right? So OpenShift AI is the MLOps piece. So the analogy would be: back in the day, you had, you know, a variety of different proprietary OSs, right? Now you've got Linux.
Back in the day, you had, you know, a bunch of divergent Kubernetes distributions, right? And we brought OpenShift to bear. The same, for example, on the MLOps front, right? You're going to have a bunch of folks wanting to run, you know, PyTorch, Kubeflow, Anaconda, Starburst — a whole host of ML tools, frameworks, and so on. How about we provide them a standard foundation to be able to run them consistently, done in an open fashion? And the use cases there are what, specifically? Use cases are varied. I mean, we've got customers today — just to be clear, they're already running AI/ML applications, right? An innovation award this year was given to Banco Galicia, right? What was that for? That was for running NLP, right? So basically running natural language processing to cut down significantly the amount of time to open accounts, and to do verification of that — doing that on OpenShift. The beauty now is, you know, let's give them a foundation to be able to use a variety of different tools, do model serving, model training, life-cycling of models, and do that consistently over time. I have to ask you, as the product leader, chief product officer, you get the keys to the kingdom, meaning you've got the engineering behind you, requirements for engineering, as well as customers — which means the open source communities and customers. So you've got multiple front-end stakeholders. Yeah. If you're asked, what are Red Hat's top three bets? What are you guys betting on? I mean, obviously open source has been the big bet, that's kind of standard. What are you betting on from the products going forward?
Because as customers look at the whole array of announcements — I mean, you've got ecosystem development partners, so you've got partner products integrating in, you've got a lot of AI, you've got a lot of blocking and tackling, chopping wood, carrying water, you know, the operational work. Where are the bets? What are the top three bets for Red Hat? So we fundamentally stand behind our open hybrid cloud bet, which is the notion of having an abstraction that runs regardless of where you want to run it. You saw that in our continuing investment, for example, in RHEL, making that available in public marketplaces from all the hyperscalers. So we do that there. OpenShift is made available self-managed or as a cloud service — you know, run that with any hyperscaler that you want, consistently — and add edge to that. So now you can say, well, is that a separate bet, or is it part of that one? It's an extension. It's an extension, right? And it's a continuum, right? For us, right? So okay, now you say, well, our open hybrid cloud stretches out to the edge, and we make sure we're providing that in a consistent fashion. I think we feel pretty good about that. So that's one. Two, now you say, well, now we're doing all of that, I've got all these applications running here — how can I do that in a more intelligent way? Can I do it in a smarter way, right? That's where the AI bet comes in, right? We're not in the business of, you know, producing a series of models, right? Let the Hugging Faces of the world and the others, you know, do that, right? We want to make sure we are supporting that innovation. So I think you definitely will see us, you know, have a play in that. And then we want to surround that with a series of services that are critical, both for developers as well as for operators in those environments, right? Developer experience is key, right?
So you'll see us do that with security, which is very important, and you'll see us do that with management, which is critical across all of it, right? And so everything we can do to make it easier for you to want to run the hybrid cloud, take advantage of the intelligence that applications provide, you know, that sort of thing. That's great. Thanks for sharing the priorities. We talked to Matt yesterday. What was interesting, and fits in that bet, was a panel conversation we had with a Senior Vice President from IBM support and professional services, and a customer. Interesting dynamic: the bet and the extension into the IBM piece is one, and multi-cloud came up. So obviously hybrid, in your opinion, extends to multi-cloud. Is that in the same bet category? Yeah. Okay. So the other thing I'd love to get your reaction on: is multi-cloud multi-environment, or multi-cloud as in stacks? So you've got Azure, AWS — and it's a nuanced point we were kind of riffing on: are multiple environments one thing, or are multiple clouds a thing? Meaning, okay, I've got Amazon, I've got Microsoft — they're different stacks, but also different environments. So it was a point we were riffing on, because on one hand you could say environments are not multiple clouds. Yeah. Interesting point. I have to think about that some more. I think there's a lot of convergence between the multi-environment and multi-cloud ideas. We've got to be careful too, because the moment you allow a lot of customization — we've seen this historically, and perhaps the perspective of the person from IBM, who's in the professional services business, is a different one, because they'll support different choices. If we have too many places where choices have been made, now you're talking about essentially having to support multiple distributions. There's only so many that you can do in a consistent fashion and life cycle.
I mean, I met a customer yesterday, and we were talking about a 10-year life cycle, and they're like, can you guys go further? We're like, whoa, hold on a second. Aren't we doing DevOps? 10 years is pretty good. So we've got that challenge as well, right? We've got customers who want these long life cycles, and if you keep tuning everything underneath, well, we can't. Well, that's a good point. Again, we were getting to the same conclusion: it's nuance — terminology, semantics matter, naming gets confusing. But I think at the end of the day, we're seeing multiple clouds in a customer environment. And so you're seeing hybrid, certainly — that one won the day, good check. Now that extends out into the edge, and you're dealing with multiple environments. So again, we stay with the environment term — but I don't know what you think, Rob. I mean, it's a nuanced point, but people are building the stacks inside the middle layer, so if they're on Amazon and they have customers on Azure, they kind of do it under the covers. And I think we had some ecosystem discussions with Stephanie yesterday talking about how — and some of the others across the ecosystem; Jeremy Winters was here yesterday as well — it may be, hey, I have this stack that becomes my infrastructure, and here are the pieces I'm using from Azure that are Azure-native and things of that nature. It seems like that's where you're meeting that ecosystem, and it looks almost like a mantra, because even on the security side, and the SBOM side, it seems to be that way as well. Agree 100%, right, and the relationship that we have with hyperscalers like both Microsoft and Amazon is critical in this.
I mean, you're seeing them come and talk about — and I think you saw it in the keynote too, right — the long relationships that we've got with them, and that we're going to work closely with them; that's the only way to do it. It's not easy, I'll say that, right? Because, you know, we've got to do it. And you had Stephanie, your head of partners, on stage — that's a huge relationship. You have Dell making an appliance; that's pretty big news, actually. The Red Hat appliance? 100%, you're right — actually, maybe we didn't even talk about that a whole lot, but yes, that's another one. And then, you know, I met some other partners who are like, hey, we want to be up on stage as well, talking about all the work we're doing with you. So, making sure that at the end of it we're not spreading ourselves so thin that the customer's like, well, I'm getting a suboptimal experience — I think that's really important. What's your message to the Ansible community out there? They've done a great job. Again, that community has grown and been thriving. AnsibleFest was folded into Red Hat Summit. The rationale there was to kind of keep a bigger, broader piece of it, right? Bring the products together. What's the product synergy? Obviously, pretty notably, day one was Ansible-heavy. Yeah, well, one, automation is really important to us, because we're seeing how important it is to our customers, right? Especially with our day-one message about becoming much more efficient. So that's critical. Related to that, we keep on adding capability into core Ansible. Event-Driven Ansible is a great example of that, right? Hey, you don't have to necessarily pay us more — we'll just keep, you know, building it in. But, I mean, the third one, which, you know, personally is really exciting to me, right? This is the part where I think of, you know, AI as being magical, right?
It's doing things like Lightspeed that really, really reduce the barriers to entry. I mean, Ansible is easy enough — that's one of the reasons why it's so popular. It's easy enough to use, but being able to have essentially what I think of as almost a Khan Academy plugged right into a console — I mean, how amazing is that? It's funny, we're going to look back at some of theCUBE videos we did two years ago with Ashesh around Ansible. We were like, this is the future. I mean, we were basically saying back then that the Ansible users end up ruling the kingdom. Turns out that automation is actually crossing over to mainstream. Share your perspective with the audience on how automation will unfold, because it's not just automated configurations. That's easy — not easy, low-hanging fruit, I should say. But where does it go next? What's the next phase? Yeah, great question. So before it goes next, what we also want to make sure is that all the areas that we're releasing into start getting adopted. So that's been one of the biggest points of conversation I'm having with customers. Because, as I said, we're releasing all these capabilities — whether it's with Ansible Lightspeed, whether it's with automation; we're doing other work on AI and security; we're doing much more work for Ansible from a security perspective — I want to make sure that these capabilities are actually used. Because for someone in product, the thing that's most exciting is to release something that gets used. And if it's not, get the feedback across to us as to, hey, these parts are useful, these other parts aren't. So we can go back, iterate, and change them. So for me, from this point on, it's like, hey, this Red Hat Summit released a bunch of technology and hopefully got people really excited about it. Now I want to spend the time making sure users engage with it. Your momentum with Red Hat has been pretty spectacular. We've been following it for many, many years, as you know.
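To make the Lightspeed idea discussed above concrete — plain-text intent in, a structured Ansible task out — here is a deliberately naive Python sketch. The lookup table and prompts are invented for illustration; the real service generates suggestions with a domain-tuned model, not a dictionary.

```python
# Hypothetical prompt-to-task mapping, standing in for a domain-tuned model.
# Module names follow Ansible's fully qualified form (ansible.builtin.*),
# but the specific suggestions here are made up for this sketch.
TASK_SUGGESTIONS = {
    "install nginx": {
        "name": "Install nginx",
        "ansible.builtin.package": {"name": "nginx", "state": "present"},
    },
    "start and enable nginx": {
        "name": "Start and enable nginx",
        "ansible.builtin.service": {"name": "nginx", "state": "started", "enabled": True},
    },
}

def suggest_task(prompt: str) -> dict:
    """Return a structured Ansible task for a plain-text request."""
    return TASK_SUGGESTIONS[prompt.lower().strip()]

task = suggest_task("Install nginx")
print(task["name"])  # Install nginx
```

The domain-specific point made in the interview is that a tuned assistant can stick to supported module names and house conventions — which is also why customers ask about training it on their own in-house playbooks.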
It's almost got that cadence of AWS now with the news. I mean, Amazon's got a zillion announcements, but you're starting to see a lot more diversity and a bigger aperture with IBM, bringing together more customer use cases. You've got a lot of products. What's it like running the show over there? Take us through a day in the life. What goes on? Is it going to get bigger? How fast is it going to expand? What are you expecting to see from a product standpoint? Well, the good news is, there's a huge amount of customer interest, right? So now the question is, how do we work substantially in areas where we make a difference? What we've changed, I think, in many ways, is trying to get customers much earlier into a cycle. So I don't know if anyone from ABB or from- Just earlier today. They just came on earlier, right? Great example. Both partners — ABB as well as ETAS, the Bosch subsidiary — we've been working with them now for many months. And the product that we're talking about hasn't hit GA yet; that's going to happen in the second half of the year. We're now starting to involve our ecosystem earlier in the process. I think that's really changed. In the past, it was only the open source community, and then you'd get feedback. Now we're like, look, how do we actually get the folks who are delivering value to their customers involved? It's this notion of co-creation earlier on. They're building on top of Red Hat. They had the Red Hat In-Vehicle Operating System. They're hardening it on top to enable more development in cars. So that's a mind-blowing example we had there. Yeah, and the Edgenius platform out of ABB, which was really good, because I've been in the E&P — exploration and production — part of oil and gas. And it was very interesting to hear them talk, because you're talking about critical infrastructure as well. I mean, you had the life-safety aspect of cars, but also critical infrastructure. They're talking about robotic arms on factory floors. They're talking about wind farms.
I mean, the number of use cases is just phenomenal, right? You saw the keynote demo with the bottling plant. Now we can, you know, put that out there. You've got a great job. What can people expect from you going forward? What are your priorities for the folks watching? Give a plug for your organization. What are you guys working on? What's on the roadmap? What are some of your priorities? Well, I'll start with, you know, we want to make sure we're staying true to our heritage, right? You know, it's three decades in open source. Ensure that we're true to open source, true to the community, helping bring that forward, right? So I think that's number one. I want to make sure that, you know, folks don't think we're drifting from that as we expand. Number two, there are a lot of different technology areas, and, you know, there's the temptation, right? Especially from a product perspective — if you ask me what keeps me up at night: oh my gosh, will we spread ourselves too thin, right? Because, you know, we have so much interest in each of the areas, but the question is, you know, it's necessary and sufficient — you've got to be able to balance both of those. There are so many great opportunities — don't jump at too many of them. Exactly, right? And so how do we make sure that, you know, we're moving at the right pace? Like, you want to be ahead of, you know, what customers want, but not too far ahead, right? And so being able to balance those as we advance our roadmaps, I think that's a super high priority for me. Yeah, and the growth is coming. And I think with AI, Matt Hicks was pointing out that he thinks the AI trajectory will match the trajectory of Linux, but much faster. And he said it on the record — I thought that was the quote of the week. And I agree with him. I think AI is going to have such an influence, not only on Red Hat, but on the community. I mean, we saw that at KubeCon, we saw it at Open Source Summit.
And for all the developers, it's another tool for them. They like it. Now we're like, okay, hold on, let's see how it goes — but good stuff. Ashesh, thanks for coming on theCUBE. Really appreciate it. Thanks very much for having me on. Congratulations. Okay, CUBE coverage here, day two, Red Hat Summit continuing. I'm John Furrier, with Rob Strechay. We'll be right back with more after this short break.