Welcome back to the Seaport in Boston, Massachusetts, where the city's crazy with Bruins and Celtics talk, but we're here talking Red Hat: Linux, OpenShift, Ansible. Ashesh Badani is here, Senior Vice President and head of products at Red Hat, fresh off the keynote. Great to see you face-to-face. Amazing that we're here now, after two years of the isolation economy. Welcome back.

Thank you, great to see you again as well, and you as well, Paul.

Yeah, so no shortage of announcements from Red Hat this week. Paul wrote a piece on siliconangle.com. I've got my yellow highlights; I've been through all the announcements. Which is your favorite baby?

Hard for me to choose, hard for me to choose. I'll talk about RHEL 9. RHEL 9 is exciting, and in a weird way it's exciting because it's boring, right, because it's consistent. Three years ago we committed to releasing a major RHEL every three years, right? So customers, partners, users can plan for it. So we released the latest version of RHEL, and in between we're delivering minor releases every six months as well. A lot of capabilities are bundled in around security, automation, edge management, and then RHEL is also the foundation of the work we announced with GM, the in-vehicle operating system. That's extremely exciting news for us, as is the collaboration that we're doing with them. And then a whole host of other announcements around cloud services, work around DevSecOps, and so on. So yeah, a lot of news, a lot of announcements. I would say RHEL 9 and the work with GM probably come right up to the top.

I wanted to get to one aspect of the RHEL 9 announcement, and that is the role of CentOS Stream in its development. Now in December, I think it was, Red Hat discontinued development of and support for CentOS and moved to CentOS Stream. I'm still not clear what the difference is between the two. Can you clarify that?
I think we got into a situation, especially with many customers and many partners as well, where people didn't quite get a sense of where CentOS sat from a lifecycle perspective. Was it upstream of RHEL? Was it downstream of RHEL? What was the lifecycle for CentOS itself? And there became some sort of implied notions around what that looked like. So what we decided was to make a really clean break and say CentOS Stream is the upstream for Enterprise Linux. From day one, partners, software partners, hardware partners, can collaborate with us there to develop RHEL and then take it all the way through the lifecycle, right? So now it becomes a true upstream, a true place for development for us. And then RHEL essentially comes out as a series of releases based on the work that we do in a fast-moving CentOS Stream environment.

But wasn't CentOS essentially that upstream development environment to begin with? What's the difference between CentOS and CentOS Stream?

Yeah, it wasn't quite upstream. It was actually a little bit downstream, too.

So it was kind of bi-directional.

Yeah, and so there sort of became an implied lifecycle to it when there really wasn't one; it just became one through usage and adoption. So now this really clarifies the relationship between the two. We've heard feedback, for example, from software partners and users saying, hey, what do I do for development, because I used CentOS in the past? So we said, yep, we have RHEL for developers available. We have RHEL for small teams available. We have RHEL available for nonprofit organizations. We've made RHEL available in various form factors for the needs folks had and were perhaps using CentOS for, because historically there was no such alternative for RHEL. So now there's this clarity. That's really the key point there.

So, language matters a lot in the technology business. We've seen it over the years.
The industry coalesces around terminology. In the PC era everything was PC this, PC that; then the internet era. And certainly in the cloud era we learned a lot of language from the likes of AWS: two-pizza teams, working backwards, things like that became commonplace. Hybrid and multi-cloud are kind of the parlance of the day. You guys use hybrid. You and I have talked about this. I feel like there's something new coming. I don't think my term "supercloud" is necessarily the right terminology, but it signifies something different. And I feel like, Ashesh, your announcements point to that. Within your hybrid umbrella, there's so much talk about the edge. We heard Paul Cormier talk about new hardware architectures, and you're seeing that at the edge, with what you're doing with the in-vehicle operating system. These are new. The cloud isn't just a bunch of remote services anymore. It's on-prem, it's the cloud, it's cross-clouds, and now it's going out to the edge. It's something new and different. I think hybrid is your sort of term for that, but it feels like it's transcending hybrid. What are your thoughts?

Really, really great question. Actually, since you and I talked, Dave, I've been spending some time sort of noodling over just that, right? And you're right: there's probably some terminology that will get developed, either by us or in collaboration with the industry. We're almost working our way toward the next thing, almost like a metacloud, right? Because there's, if you will, the cloud. So on-premise, virtualized, bare metal (which, by the way, is reasonably interesting and important; we do a lot of work with NVIDIA, and folks want to run specific workloads there). We announced support for ARM, right, another now-popular architecture, especially as we go out to the edge. So obviously there's private cloud, public cloud, and then the edge becomes a continuum in that process.
We actually have a major shipping company, cruise lines, talking about using OpenShift on cruise ships, right? So that's the edge. Last year we had Verizon talking about 5G and RAN and the next generation there; to them, that's the edge. When we talk to retail, the storefront's the edge. You talk to a bank, it's the branch environment. So everyone's got a different kind of definition of edge, and we're working with them. And then when we announce this collaboration with GM, now the edge becomes the automobile. So if you think of this as a continuum, bare metal, private cloud, public cloud, out to the edge, now we're almost living in a world of abstractions, and we're making sure we stay focused on where data is being generated and on how we can provide a consistent experience regardless of where that is.

I like metacloud because I can work in NFTs; I can work in metaverse. We were going to get through this whole thing without saying metaverse, I was hoping. I do want to ask you about the edge and the proliferation of hardware platforms. Paul Cormier mentioned this during the keynote today: hardware is becoming important. There's a lot of powerful hardware in development now for areas like intelligent devices and AI. How does this influence your development priorities, given all these different platforms you need to support?

Yeah, so we think about that a lot, mostly because we have engagements with so many partners in hardware, right? So obviously there are the more traditional partners, I'd say, like the Dells and the HPEs that we've historically worked with, and we're also working with them in newer areas with regard to appliances that are being developed. And then there's the work that we do with partners like NVIDIA, or on new architectures like ARM.
And so our perspective is that this will be use-case-driven more than anything else, right? There are certain environments where you have ARM-based devices, and other environments where you've got specific workloads that take advantage of being built on GPUs, which we'll see increasingly being used to address that problem and provide a solution toward it. So our belief has always been: look, we're going to give you a consistent platform, a consistent abstraction, across all these pieces of hardware, and you, Mr. and Ms. Customer, make the best choice for yourself.

A couple of other areas we have to hit on. I want to talk about cloud services, and we've got to talk about security; we'll leave time to get there. But why the push to cloud services? What's driving that?

It's actually customers that are driving it, right? Customers have consistently been asking us, saying, you know, we love what you give us; we want to make sure it's available to us when we consume in the cloud. So we've made RHEL, for example, available on demand, right? You can consume it directly via public cloud consoles. We're now making products available via marketplaces. We talked about Ansible available as a managed service on Azure, and OpenShift, of course, available as a managed service in multiple clouds. All of this is also because we've got customers with committed spends with cloud providers, and they want to make sure the environments they're using count toward that. And at the same time it gives them flexibility, gives them choice, right? If in certain situations they want to run in the data center, great, we have that solution for them. In other cases they want to procure from the cloud and run it there, and we're happy to support them there as well.

Let's talk about security, because now, I mean, it's security everywhere, and then some specific announcements as well.
I always think about these things in the context of the SolarWinds supply chain hack: you know, how would this have affected it? But tell us what's going on in security, your philosophy there, and the announcements that you guys made.

So our security announcements actually span the entire portfolio, right? And that's not an accident; that's by design, because we've really been thinking about and emphasizing how we raise the security profile for users, both against malicious actors and also to help with accidental issues. So what matters? One, there's a huge amount of open source software out in the world, and estimates are that one in ten packages has some kind of security vulnerability in place. There's also a massive amount of change in how software is being developed. The rate of change in Kubernetes, for example, is dramatic, much more than even in Linux; entire parts of Kubernetes get rewritten three or four times over. So as you introduce all that, you have to think, for example, about what's known as shift-left security, or DevSecOps: how do we make sure we move security closer to where development is actually done? How do we ensure we give you a pattern? So we introduced a software supply chain pattern via OpenShift that delivers a complete stack of code you can go off and run, one that follows best practices, including, for example, GitOps for developers and support on the pipelines front. There's a whole bunch of security capabilities in RHEL: a new IMA, Integrity Measurement Architecture, which gives you better ability to see, in a post-install environment, what the integrity of the packages is, and content-signing technology that we're incorporating into OpenShift as well as Ansible.
So it's a long list of capabilities and features, and then also more and more defaults that we're putting in place that make it easier, for example, for someone not to hurt themselves accidentally.

On the security front, I noticed that today's batch of announcements included support in OpenShift Pipelines for Sigstore, an open source project that was birthed at Red Hat. We haven't heard a whole lot about it. How important is Sigstore to your future product direction?

Yes, look, I think of that as work being done out of our CTO's office, and obviously security is a big focus area for them. Sigstore's a great example of saying, look, how can we verify content that's in containers and make sure it's digitally signed, so it's appropriate to be deployed across a bunch of different environments? But that thinking isn't exactly unique to the container side for us, mostly because we have two decades or more of thinking about it on the RHEL side. Fundamentally, containers are built on Linux, right? So a lot of the lessons we've learned, a lot of the expertise we've built over the years in Linux, we're now starting to apply to containers, and my guess is that increasingly we're going to see more of the need for that at the edge as well.

I picked up on that too. Let me ask a follow-up question on Sigstore. If I'm a developer and I use that capability, it ensures the provenance of that code. Is the signature immutable? The reason I ask is because, again, I think of everything in the context of SolarWinds, where they were putting code into the supply chain and then removing it to see what happened and how people reacted, and it's just a really scary environment.

Yeah, the hardest part in these environments is actually the behavior change. What's an example of that? Packages are built and verified by Red Hat.
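The verification flow described here, checking that container content is signed before it's deployed, can be sketched conceptually. This toy version uses a symmetric HMAC key purely as a stand-in; real Sigstore signing (e.g. with cosign) uses asymmetric keys plus an append-only transparency log, which is what makes published signatures effectively immutable.

```python
import hashlib
import hmac

# Demo-only stand-in for a real signing key. In practice the signer holds a
# private key and verifiers only need the public half.
SIGNING_KEY = b"demo-only-key"

def sign(artifact: bytes) -> str:
    # Sign the SHA-256 digest of the artifact rather than the artifact itself:
    # the digest pins the exact bytes, so any later change breaks verification.
    artifact_digest = hashlib.sha256(artifact).digest()
    return hmac.new(SIGNING_KEY, artifact_digest, hashlib.sha256).hexdigest()

def verify_signature(artifact: bytes, signature: str) -> bool:
    # Recompute and compare in constant time.
    return hmac.compare_digest(sign(artifact), signature)
```

A deploy pipeline would call the equivalent of `verify_signature` as an admission step, refusing to run any image whose content no longer matches what was signed.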
When it went from Red Hat to the actual user, were we able to verify the integrity of all of those packages when they were put into use? Unless we have behaviors that make sure we do that, we find ourselves in trouble. In the earliest days of OpenShift, we used to get knocked a lot by developers because they said, hey, this platform's really hard to use. We investigated: look, why is that happening? By default, we didn't allow root access. So someone using the OpenShift platform would say, oh my gosh, I can't use it; I'm so used to having root access. And we'd say, no, that's actually sealed off by default, because it's not a good security best practice. Now, over a period of time, when we reiterated that enough times, explained it enough times, the behavior changed: yeah, that makes sense now. So even just those kinds of behaviors, the more we can do, for example, in the shift left, which is one of the reasons, by the way, why we bought StackRox a year ago, right? For declarative security, container security. So threat detection, network segmentation, watching for intrusions and malicious behavior is something we can now essentially make native to development itself.

All right, escape key, let's talk futures a little bit. So I went downstairs to Ask the Experts, and there was this awesome demo, I don't know if you've seen it, like a design-thinking booth on how you build an application. I think they were using the WHO, one of their apps during COVID. And it shows the granularity of the stack and the development pipeline and all the steps that have to take place. And it strikes me of something we've talked about. So you've got this application development stack, if you will, and the database is there to support that. And then over here you've got this analytics stack, and it's separate.
And we always talk about injecting more AI into apps, more data into apps, but there are separate stacks. Do you see a day when those two stacks come together? And if not, how do we inject more data and AI into apps? What are your thoughts on that?

So that's another area we've talked about, Dave, in the past, right? We definitely agree with that. What final shape it takes, I think we've got some ideas around. What we've started doing is picking out specific areas where we can say, let's go see what kind of usage we get from customers around this. So for example, we have OpenShift Data Science, which is basically a way for us to address MLOps: how can we have a platform that gives you different models you can use, lets you test and train on data, and different frameworks you can then deploy in an environment of your choice, right? And we run that for you and assist you in taking the next steps you want with your machine learning algorithms. There's work we introduced at Summit around database as a service, essentially a cloud service that gives customers, as DBaaS, an easy way to access either MongoDB or CockroachDB in a cloud-native fashion. And all of these things we're experimenting with are about saying, look, how do we bring these worlds closer together, of database, of data, of analytics, with a core platform and a core stack? Because again, this will become part of one continuum that we've got to work with.

I like your continuum; I think that's really instructive. It's not a technical barrier, is what I'm hearing; it's maybe organizational mindset. I should be able to insert a column into my application development pipeline and insert the data, I mean Kafka, TensorFlow, in there. There's no technical reason I can't do that. It's just that we've created these sort of separate stovepipe organizations.

A hundred percent right, right?
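The point that nothing technical blocks this can be made concrete with a toy sketch (all stage names hypothetical): if a pipeline is just composed callables, a data or ML scoring step slots in as one more stage alongside ordinary app-development steps, standing in here for a real Kafka consumer or TensorFlow model call.

```python
from functools import reduce

def pipeline(*stages):
    """Compose stages left to right; each stage is just a callable on the record."""
    return lambda record: reduce(lambda acc, stage: stage(acc), stages, record)

# Ordinary app-development stages...
def validate(order):
    return {**order, "valid": order["qty"] > 0}

def price(order):
    return {**order, "total": order["qty"] * order["unit_price"]}

# ...and a data/ML stage inserted as just one more "column" in the same pipeline.
def score(order):
    return {**order, "fraud_risk": 0.9 if order["total"] > 10_000 else 0.1}

process = pipeline(validate, price, score)
```

The barrier, as the conversation notes, is that in practice these stages are owned by different teams rather than composed in one place.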
So they're different teams, right? You've got the platform team or the ops team, a separate dev team, a separate data team, a separate storage team, and each of them works slightly differently, independently. So the question then is, I mean, that's sort of how DevOps came along, and then you're like, oh, wait a minute, don't forget security, and you move to DevSecOps, right? So the more of that we can bring together, I think, the more convergence we'll see.

Yeah. When I think about the in-vehicle OS, I see that it's a great use case for real-time AI inferencing, streaming data. And I want to ask you about that real quickly, because just before the conference began we got an announcement about GM. It seems like this came together very quickly. Why is it so important for Red Hat? This is a whole new category of application that you're going to be working on.

Yeah, so we've been working with GM, not publicly, for a while now. And it was very clear that, look, GM believes this is the future, right? Electric vehicles, on to autonomous driving. And we're very keen to say we believe there are a lot of attributes in RHEL that we can bring to bear, in a different form factor, to assist with the different needs that exist in this industry. So one, it's interesting for us because we believe it's a use case we can add value to. But it's also the future of automotive, right? The opportunity to say, look, we can take open source technology and collaborate out in the community to fundamentally help transform that industry toward where it wants to go, you know, that's the passion that wakes us up every morning.

You're opening it up, then. Yeah. Ashesh Badani, thank you for coming on theCUBE. Really appreciate your time and your insights, and have a great rest of the event.

Will do. Thank you for having me.

Metacloud. It's a thing. Metacloud.
It's a thing, right? It's kind of there. We're going to see them merge over the next decade. All right, you're watching theCUBE's coverage of Red Hat Summit 2022 from Boston. Keep it right there; we'll be right back.