What Rafay is trying to do with all these capabilities is: can we hide the complexity? Not every team needs to deal with this level of complexity. In a typical organization, there will probably be a handful of people who understand Kubernetes at this level, right? And frankly, they don't need to go and make everyone a Kubernetes expert. Look at a classic application team: all they want is to run their containers, and it happens to be on Kubernetes as the orchestrator.

Today we have with us once again Mohan Atreya, SVP of products and solutions at Rafay Systems. Mohan, it's great to have you on the show.

Well, thank you, Swapnil. Great to talk to you again.

It's always a good idea to refresh viewers' memories: what is Rafay all about in today's modern cloud and Kubernetes-centric world?

Rafay is, in essence, a Kubernetes operations platform. The genesis of the platform was what we saw in the market about five years ago with the rise of Kubernetes, and then the use of Kubernetes across a company, where you have multiple teams trying to use Kubernetes in a shared manner. That resulted in a lot of complexity, a lot of workflows, a lot of interaction between these teams. The platform is aimed at making sure all of these work like clockwork and everything runs seamlessly in an organization, with guardrails and policies and controls in place.

Now let's talk about the new announcement your folks are making this week.

One of the other learnings we had, especially from some of our larger customers, is that it was not about one cluster. The average enterprise has tens, 50, or 100 clusters, maybe even more. And one of the challenges these organizations were running into was a complicated process to keep these clusters updated and current, right? Maybe a good way to think about this is to take a particular example.
Kubernetes is typically a multi-tenant environment, which means you'd have four or five application teams. Let's say there are five applications on a cluster, from different application teams, and let's hypothetically assume they're running on Amazon EKS. New versions keep coming and you have to stay current on the latest version. So just to move from one version of Kubernetes to another, the operations team or the platform team now has to work with these five application teams to make sure their applications won't fall apart just by upgrading the cluster. This turned out to be a complicated workflow for these organizations, and the burden just to upgrade a single Kubernetes cluster, at scale, turned out to be very high; they were spending an inordinate amount of time and money and people trying to coordinate this internally. So what everyone was hinting at is: we would like to come up with a plan and find a way to implement this plan in an organization at scale, right? Some people may translate this to tooling for automation, but essentially what we're announcing is a way by which an organization can create a pretty sophisticated, highly customizable plan. And Swapnil, if the system allows it, maybe I'll even show you a couple of slides and demos just to make the case. From there you develop a plan and then run it not just on one cluster but across 100 clusters, which means you now have tooling to keep your 100 clusters current and up to date. It could be 1,000 clusters; it doesn't matter. And interactions are possible between the various application teams and workflows. So that's, in a sense, what the market told us. We saw this in the trenches with our customers, who struggled through it, and that's what resulted in us saying, hey, we've got to solve this problem or the adoption of Kubernetes might stall, right?
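The fleet-wide upgrade planning described above can be sketched roughly as follows. This is an illustrative toy, not Rafay's actual implementation: the cluster names and the data shapes are assumptions, and the one rule it encodes is the real Kubernetes constraint that a control plane is upgraded one minor version at a time (1.24 to 1.26 means stepping through 1.25).

```python
# Hypothetical sketch of planning upgrades across a fleet of clusters.
# Cluster names/versions are illustrative; the one-minor-version-at-a-time
# rule reflects Kubernetes' documented version-skew constraint.

def parse_minor(version: str) -> int:
    """Extract the minor version from a string like '1.25'."""
    return int(version.split(".")[1])

def plan_fleet_upgrade(clusters: dict, target: str) -> list:
    """Return an ordered list of (cluster, next_version) upgrade steps.

    Control planes can't skip minor versions, so a cluster on 1.24
    targeting 1.26 needs two steps: 1.25, then 1.26.
    """
    target_minor = parse_minor(target)
    steps = []
    for name, current in sorted(clusters.items()):
        for minor in range(parse_minor(current) + 1, target_minor + 1):
            steps.append((name, f"1.{minor}"))
    return steps

fleet = {"prod-east": "1.24", "prod-west": "1.25", "staging": "1.26"}
print(plan_fleet_upgrade(fleet, "1.26"))
# [('prod-east', '1.25'), ('prod-east', '1.26'), ('prod-west', '1.26')]
```

In a real system each step would also trigger the per-team preflight checks and approvals discussed later in the conversation, rather than just emitting a list.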
At least for our customers. If you look at the whole adoption of Kubernetes, we started talking about production years ago; now folks are moving to production, and that's when they hit some of the roadblocks, some of the challenges in day-two, day-three, day-four, day-five operations. Talk a bit about this announcement: how does it reflect on the whole adoption of Kubernetes itself, and how is Rafay helping customers in their journey while Rafay itself is on its own journey, right, with new products and new services?

A lot of organizations, once they experience the platform, the words we see them use are "the easy button," right? If you crack open your iPhone, there are probably a gazillion components, and if you crack open iOS, there are a million protocols inside. When you call your mom using FaceTime, it's probably based on 20 protocols underneath, but those are invisible to you as a user. In a similar manner, what Rafay is trying to do with all these capabilities is: can we hide the complexity? Not every team needs to deal with this level of complexity. In a typical organization, there will probably be a handful of people who understand Kubernetes at this level, right? And frankly, they don't need to go and make everyone a Kubernetes expert. Look at a classic application team: all they want is to run their containers, and it happens to be on Kubernetes as the orchestrator. They actually don't care whether it's version 1.3 or 1.26 or something like that, right? But at the same time, from a security and compliance perspective, these platform teams need to make sure they're running on the latest supported versions. So this is an example of the problem in the market: you have a fantastic orchestration system like Kubernetes that only experts understand, and you can't make everyone an expert if you want broader adoption. So how do you make this easy to consume?
And that's what we are focused on, right? We give the superpowers to everyone in an organization rather than just the one person who is a Kubernetes expert. And fleet ops is an example of something we are bringing to the market where the expert can encapsulate all these best practices, create a plan, and run it repeatedly, again and again, at scale.

We hear it all the time: hey, things are complicated, the landscape is massive, the landscape is huge. And this complexity is not going to go away; what we have to do is help customers deal with it. So talk about how Rafay, and once again this solution is a good example, is trying to help lower the barrier to entry. As you said, otherwise the adoption of Kubernetes will slow down, and that's what we don't want. How are you helping folks so that they don't have to make a lot of compromises on Kubernetes flexibility, but things still get easier for them to consume?

Maybe let me take an example here to elaborate, right? It's sometimes better to talk through examples. We saw this a few months back with several of our customers who were on EKS and moving from a version that was end-of-life to something newer. There are actually two examples. One is, as you know, in Kubernetes there was something called Pod Security Policy. It was removed from a newer version of Kubernetes, right? Organizations were running those, and if you had blindly upgraded Kubernetes, your applications would basically have fallen apart at that point in time. You would effectively have a sev-one issue with your production application going down. And how would the application team even know? They won't even care, right? So that's an example where people had to spend an inordinate amount of time building tooling to go check for these.
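A preflight check of the kind described above, hunting for resources that a blind upgrade would break, can be sketched like this. PodSecurityPolicy really was removed in Kubernetes v1.25; the manifest dictionaries below are a simplified stand-in for what you would read back from a cluster, and the helper names are invented for illustration.

```python
# Illustrative preflight check for removed resource kinds.
# PodSecurityPolicy was removed in Kubernetes v1.25; any manifest still
# declaring one would break after an upgrade to that version.

REMOVED_IN = {"PodSecurityPolicy": "1.25"}  # kind -> first version without it

def preflight(manifests: list, target_version: str) -> list:
    """Return warnings for resources that no longer exist at target_version."""
    target_minor = int(target_version.split(".")[1])
    warnings = []
    for m in manifests:
        removed_at = REMOVED_IN.get(m.get("kind", ""))
        if removed_at and target_minor >= int(removed_at.split(".")[1]):
            name = m.get("metadata", {}).get("name", "?")
            warnings.append(
                f"{m['kind']}/{name} was removed in Kubernetes {removed_at}"
            )
    return warnings

resources = [
    {"kind": "Deployment", "metadata": {"name": "web"}},
    {"kind": "PodSecurityPolicy", "metadata": {"name": "restricted"}},
]
print(preflight(resources, "1.25"))
# ['PodSecurityPolicy/restricted was removed in Kubernetes 1.25']
```

The point of tooling like this is exactly what's described in the conversation: the application team doesn't need to know the API deprecation history, they just need a clear warning before the upgrade happens.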
Another great example, again with the cloud distributions of Kubernetes, was the storage driver. It moved from what is called in-tree, meaning inside the Kubernetes codebase, and was externalized as a CSI driver, which meant that if you had applications using storage, you had to go from assuming you had access to storage to first deploying a CSI driver and making sure you still had access to storage. That's not all. Another great example: the Docker shim was removed from Kubernetes, and it transitioned to containerd. So if you had applications that were pinned to Docker, I mean, you're toast. The applications would simply die and stop working when you upgraded Kubernetes. These three things, as examples, happened earlier this year, and people are still dealing with them, right? And they were doing this across 100 clusters. They had to deal with 20 or 30 teams, asking: do you use Docker? Can you please move to containerd? Do you use storage? Can I please move you to this? Imagine the operations team, a platform team, having to collaborate with all these application teams who frankly don't care about these lower-level components, right? So that's an example of how we got these teams to say: look, encapsulate these checks. Let the developers run them in a self-service manner. All they need to know is: hey, is my application going to die if I go from version X to version Y? They don't need to know anything more. And not only that, they want to know: hey, if it's broken, what do I have to change so it doesn't break? Because developers want crystal-clear guidance, right? Tell me what I have to do; don't ask me to do a PhD in this particular area. If I had to move from Docker to containerd, I'd still have to spend time figuring out what the implications are for my app. It's not a simple change, right?
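The "do you use Docker?" question above is one of the checks that can be automated instead of asked over email. A common way applications pin themselves to Docker is by hostPath-mounting the Docker socket, which stops working on a containerd-only node after the dockershim removal. Here is a minimal sketch of such a check; the pod dictionaries are a simplified stand-in for what the Kubernetes API returns, and the function name is invented for illustration.

```python
# Illustrative check for pods "pinned" to Docker via the Docker socket,
# which breaks on containerd-only nodes after the dockershim removal.

DOCKER_SOCKET = "/var/run/docker.sock"

def pods_pinned_to_docker(pods: list) -> list:
    """Return names of pods that hostPath-mount the Docker socket."""
    flagged = []
    for pod in pods:
        for vol in pod.get("volumes", []):
            if vol.get("hostPath", {}).get("path") == DOCKER_SOCKET:
                flagged.append(pod["name"])
                break  # one hit is enough to flag this pod
    return flagged

pods = [
    {"name": "ci-runner", "volumes": [{"hostPath": {"path": DOCKER_SOCKET}}]},
    {"name": "web", "volumes": []},
]
print(pods_pinned_to_docker(pods))
# ['ci-runner']
```

A real diagnostic would also look at image-build sidecars and node configuration, but even this one check replaces a round of "do you use Docker?" emails to 20 or 30 teams with a report.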
So this was essentially the collaboration between an operations team and an app developer, where they were trying to push some of these diagnostics, and the figuring out of answers, to the developers in a self-service manner, and then give them crystal-clear guidance on what to do next. And this is a problem we all run into, right? A classic developer team writes a Confluence page and says, you figure out everything from that. It never works, right? So you need some kind of automation like this, so you can do this at scale if you are the lone guy or girl supporting 100 application teams, right? We believe in self-service: empower the developer, tell them everything they need to do, and short-circuit the whole process so you can keep moving forward. The philosophy behind a lot of the features in our platform is driven by these things. Essentially it comes down to this: we can do automation, but anyone can do automation; there are a million ways to do automation, right? You want to do it in a self-service manner, with the right guardrails or controls around it so people don't shoot themselves in the foot. It's this trifecta that is unique in our platform: automation, with self-service, and the right controls and guardrails in place. That combination is important, in our opinion.

Now, can you also talk about the significance or importance of the Red Hat OpenShift certification?

OpenShift from Red Hat is a pretty widely adopted and used distribution and platform. Most large enterprises use OpenShift, especially in their on-prem data centers, and it's a flagship platform on which they deploy and operate their mission-critical applications.
Now, we've been working with the Red Hat team for a while, and one of the things we've done is make sure that an enterprise using OpenShift has the guarantee that they can leverage all the capabilities of the Rafay platform services on top of OpenShift with no issues. Essentially, we've been working behind the scenes with Red Hat to make sure we do the right things and comply with all the requirements the OpenShift team has, not just at a point in time, but on an ongoing basis. Which means organizations get peace of mind: they can leverage these two platforms together to accelerate their needs. As an example, one of the largest toy manufacturers in the world actually uses Rafay with OpenShift. They use OpenShift not only in their data centers, but also in their factories in China and other places in the world, in a global deployment. So they have many OpenShift clusters. One of the problems their developers were running into is: hey, how can I view this fleet? Is there a single pane of glass by which I can see everything across these OpenShift clusters? Even though these clusters might be running in China, no problem: Rafay provides secure authentication and zero-trust access so they can interact with them. The RBAC automation is done through Rafay, and these clusters are standardized using Rafay blueprints. So effectively, all the critical capabilities of the Rafay platform are applied across a fleet of OpenShift clusters. And not only that, this organization also has a cloud presence, so they also have to manage Amazon EKS and Azure AKS clusters. Effectively, we become the single pane of glass, the single control plane, helping them orchestrate and manage their automation and controls in a self-service manner, whether it's OpenShift or Azure or Amazon or Google or their VMware environments, right?
So that's what their needs were, because OpenShift was one critical pocket of infrastructure for them, but they also had others, and Rafay is an overlay on top, making sure everything is consistent. Now, we had to do everything needed on our side to make sure everything works perfectly on OpenShift. That's the work we had to do.

As you were saying, the Red Hat ecosystem is massive, and you folks are once again making things easier. So first of all, what does your availability in the Red Hat ecosystem mean for current and new customers? And at the same time, what does the process look like when folks like Rafay go into the Red Hat Marketplace? Because some things are very stringent there. Talk a bit about these two things.

Absolutely, absolutely. As I briefly mentioned before, there is a battery of tests and certification checks that we need to comply with to get certified. And this is not a point in time; it's ongoing testing, right? This also makes Rafay available in the Red Hat OpenShift Marketplace, which means that if a new organization is looking to use something like Rafay, they can discover Rafay in the OpenShift Marketplace. Not only that, with the click of a button they can actually deploy and evaluate Rafay on top of their OpenShift environment. It makes it easy for them to test it and evaluate it. And they get peace of mind that the vendor, in this case Rafay, has complied with the pretty stringent list of requirements the OpenShift team requires us to follow, not just at a point in time but on an ongoing basis. So in summary, it makes it easy for them to find Rafay, and easy for them to trust that we are doing the right things and are not going to compromise their Red Hat OpenShift environment.
And on an ongoing basis, they can engage knowing that these two vendors, Red Hat and Rafay, have a back-to-back support model, so they are not left holding the bag. We have to make sure our product works seamlessly on Red Hat, and that's an obligation we take on as part of this partnership and relationship. So it's peace of mind for the customer that everything will just work seamlessly.

Mohan, thank you so much for taking time out today to talk about this new feature, and of course for all the insights on the whole Kubernetes and Red Hat ecosystem. Thanks for those. And as always, I would love to have you back on the show.

Thank you. Thank you, Swapnil. Thank you for the opportunity. Appreciate it.