That's great. That's a much better conversation. All right, everybody, and welcome to another OpenShift Commons briefing. Happy Monday. We're here with Haseeb Budhani, who is the CEO of Rafay. They've had an interesting journey with OpenShift and Kubernetes, and they're going to share that with us, including some really cool stuff they've done to make Kubernetes rock on multiple platforms at the same time. So I'm going to let Haseeb introduce himself, and we're going to plow through a conversation about a Kubernetes operations platform for all of us to join in. If you have questions, ask them in the chat and we'll get to them after the presentation portion. All right, Haseeb, take it away. Yeah, Diane, nice to talk to you again. Thank you for having me on the show. As you said, Rafay is a startup in what we're beginning to call the Kubernetes operations space. The big trend taking place in the market that is adding fuel to Rafay is that many enterprises have been using OpenShift for years now, and as they look to public cloud, as they start becoming cloud first in many cases, one of the things they ask is: well, I have OpenShift, I love it, it's great, it works here in my data center and in Amazon. I could be doing ROSA, OpenShift as a service, or deploying it myself. But let's say you made an acquisition, and the company you acquired is using EKS in Amazon or AKS in Azure. Or who knows what else they're running; maybe they used Kubespray to build a cluster. Now I have multiple distributions running in multiple locations. What is my next step? And I think that is driving multiple companies to really think about what comes after the provisioning problems have been solved by the industry.
OpenShift has done an incredibly good job solving for cluster management, the cluster lifecycle as it were, for OpenShift clusters. But now that people have multiple distributions running in multiple regions, what's happening is every enterprise is bringing up a platform team. And that platform team, on top of OpenShift, is now writing a lot of software to maintain these environments, give developers access, and have the right level of automation, visibility, security, and governance that clearly is needed. And that, until very recently, was a gap. That's the gap, pun intended, that we saw in the market: once people make a decision to go with, let's say, OpenShift on-prem or in the cloud, and then maybe EKS in Amazon, what happens next? To that end, we're selling what we call a Kubernetes operations platform. The word platform is going to be used probably 500 times on this call today. It makes it easy for SREs and operations teams and DevOps organizations to get to the finish line faster. Fundamentally, at a very high level, that's our goal. We sell a platform that, in my mind, does three things. It helps with the lifecycle of the clusters themselves, so blueprinting; I'll talk about these things momentarily. It helps with the provisioning and lifecycle management of the managed services that exist out there; we have, in my opinion, the deepest integration with EKS in Amazon and then AKS in Azure, as two examples. But now that you have your clusters up and running, what is the point of Kubernetes but to deploy applications? So the platform also brings a number of capabilities around automation and visibility and security for development organizations to deploy their applications: multi-stage pipelines, easy integrations with GitHub, GitLab, et cetera, that I will also show today, that make it just super easy.
Wherever you are, you don't need VPNs, you don't need jump hosts; you can deploy applications on-prem or in the cloud. And then finally, this being a platform directed toward enterprises: without the right level of governance, visibility, et cetera being available to platform teams, it's really hard for them to do their jobs. Simple questions come up. How do you know your clusters all look the same? There's a new CVE for Prometheus; how do I know that all of my clusters are running the right version of Prometheus? These are simple questions, and they're being answered today, but to answer them, platform teams are building automation tooling that is now itself something they have to manage, over and over again. So these are the kinds of problems we solve. We sell a SaaS product. We call it a SaaS-first platform because a majority of our customers consume the platform as SaaS. But we have some larger organizations that, for whatever reason, are unable to consume our platform as a SaaS offering because of compliance reasons or regulatory requirements. So for those customers, Diane, we also provide our entire solution as a packaged offering. We run our application on Kubernetes in the cloud, so our ops team knows Kubernetes really, really well: not only do we sell software to help people run it, we run Kubernetes at pretty significant scale across multiple regions just so we can deliver our application to our customers. Our application is also a bunch of Helm charts, so we're truly eating our own dog food. It's a SaaS-first platform; you can consume it as a SaaS product, and most customers do. But if you have certain requirements, you can consume it as an air-gapped install, and truly air-gapped, I mean no connection to the internet, and we provide a level of enterprise support that our customers have clearly come to enjoy. So, Diane, that's the high-level story.
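The "right version of Prometheus everywhere" problem he describes is typically solved with a cluster blueprint: a declarative list of add-ons with pinned versions that every cluster assigned the blueprint must converge to. The schema below is a hypothetical sketch for illustration, not Rafay's actual format:

```yaml
# Hypothetical blueprint: every cluster assigned this blueprint is
# reconciled to run exactly these add-on versions.
kind: Blueprint
metadata:
  name: commons-demo
  version: "5"
spec:
  addons:
    - name: prometheus
      version: 2.31.1      # pinned; a CVE fix means bumping one line, everywhere
    - name: opa-gatekeeper
      version: 3.7.0
  driftDetection: enabled  # flag or re-apply anything changed out-of-band
```

With something like this, "are all my clusters running the right Prometheus?" becomes a diff between each cluster's state and the blueprint, rather than a fleet-wide manual audit.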
Right after this, I was going to jump into two more slides with a little bit more detail, but before I do, any questions for me, Diane? Well, it's interesting, the mix. We talk about hybrid clouds, and on-prem and AWS and at the edge and all that. How much of that are you really seeing in your customer base? Is it more a large number of folks who are deploying on AWS and Azure? Or is there really a mix with on-prem? It's an interesting conundrum, finding out what are truly hybrid deployments that need this. Is that what you're seeing, or is this mostly multiple clouds? Look, multiple clouds is not as common. I see lots and lots of on-prem; enterprises have been on-prem, and then typically the IT leadership will make a decision: look, we're going to go cloud first, and they pick a cloud as a primary. But everybody we talk to is thinking ahead. I get the following question a lot: hey, you said AWS, but do you support Azure? Yes, we do. You want to try both? Yeah, we'll get there soon. Or vice versa: we're starting with Azure, we'll get to AWS; or we're starting with AWS, we'll get to GCP. Learning each cloud is also not easy, but the pragmatic view is that large enterprises go acquire a new business, or a new team starts building an application and they go, well, we built this in GCP because we want to use Bigtable, or BigQuery or whatever, and they just end up there. It's pretty hard for enterprises to set rules that say these are the boundaries, you cannot go outside the boundaries. But at the same time, I don't see enterprises start multi-cloud. Nobody wants to take on that much pain up front. Come on, I've got to get something up and running first, yeah?
So it's on-prem and some cloud, and maybe at some point down the road it'll be a second cloud, but the idea of hybrid is very much there now; everybody's in a hybrid situation. In many cases, we've seen customers say: look, I'm going to work with you. I already have OpenShift; it's been working for four years, five years. Now I'm going to AWS or Azure, and for that new cloud, I don't want to rebuild my tooling all over again, because I already have it for OpenShift. So hey, why don't you be my EKS partner, or AKS partner, or whatnot? So people start there, but then once they experience the platform, it's just logical for them to ask: why do I have two tool sets? Yeah, it doesn't make any sense. Is there a way for me to combine them? And of course we have a very nice story with OpenShift on that front. Yeah, and I come from an IT audit background from ages ago, so I love the part of your story, which I know you will demo for us, around the governance across all of these multiple things. I came into platform-as-a-service in the early days with this headset of IT and audit, but also this concern about shadow IT. You know, you get a credit card and, oh, I can go deploy on, back in the day, Heroku or wherever it was. And this is almost like the uber single pane of glass. I don't want to say the word Band-Aid, but it's being able to bring all of those things in. Because before, it was just: I'm deploying an app here, here, and here. Now it's: I'm deploying an entire Kubernetes distribution on this cloud, an entirely different Kubernetes distribution on that cloud, and another one over there. So you've seen this kind of uber shadow IT, and shadow IT is probably a bad word these days, but there's just so much going on in different silos in these huge enterprises that you can't really limit them to one cloud. So I really love this solution.
So I'm going to let you drive on and tell us a little bit more. Yeah, I'm really excited about this. This is one of the new things coming out that I really think is going to move the Kubernetes community forward. You said something that I want to comment on, if I may. You mentioned shadow IT. So why do people do this? Why do people do things around IT, as it were, in the shadows, unknown to them? I think because they want to move fast, and sometimes we don't give them the tooling to let them move fast enough. There are development teams out there who never want to think about Kubernetes, and they say: hey, I don't care. My Jenkins, or Tekton in IBM's case, is going to build my containers. And after that, I don't really care. What is Kubernetes? I don't care; go figure it out. And on the other end of the spectrum, there are very sophisticated organizations that say: look, I know everything. I can write Terraform in my sleep. Just let me be, right? Let me do my job. And it's always going to be a challenge for central teams, platform teams or architecture groups, et cetera, to come up with one set of tools that works for everybody. So that, I've got to tell you, is my biggest lesson learned in this industry. There are all kinds of customers at different points of maturity in their journey, and we've got to be able to meet them wherever they are. If we come in to, let's say, a customer that's been doing this for three years, they have automation, and we say, well, you've got to throw everything away and start from scratch, yeah, that's not going to happen. But at the same time, there are super early customers who do want everything packaged and available to them.
So we as vendors have to be really careful and appreciative of what our customers have built already, and meet them wherever they are. That's a long story for another session, but it is my biggest lesson learned in this market. Yeah. And I think that's the best business model: meet them where they are, not trying to push them into any silo or anything specific. But yeah, drive on. Yeah, we'll keep talking, I'm sure. And to that end, you and I have spoken before about this multi-distro phenomenon. People pick what they pick. It is what it is, and it's okay. Distros, once you deploy them, all sort of look the same. In fact, when I get into the product, I'll connect to an EKS cluster and an OpenShift cluster, and as a developer, it's all the same to me and I don't really care. Whether it's OpenShift, EKS, or Kubernetes you built yourself, I don't care; I can deploy an app to it. That's how it should be, and that's how we see the world. The way we see the world is that this platform team I've mentioned, I don't know, 20 times already, needs a number of capabilities. My perspective is that to date, our industry has been sort of stuck in box number one on the left-hand side, multi-cluster management. All we talk about is the cluster. Can you bring up a cluster? Is it automated? Yeah, but what about all these other things that my internal teams need to actually do their jobs? Is there an automation pipeline to deploy infrastructure as well as applications? My clusters are now all over the place, so what is my methodology to interact with these clusters? Do I have auditing in place for them? Oh, PSP is going away, so I need OPA. So how does that work now across my clusters? And by the way, some are OpenShift and some are EKS. Some of these are production clusters, so I need to back them up.
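On the "PSP is going away, so I need OPA" point: the standard pattern is OPA Gatekeeper, where a ConstraintTemplate defines a policy in Rego and a Constraint applies it cluster-wide. This is a minimal, stock Gatekeeper example (plain upstream Gatekeeper, nothing Rafay-specific) requiring a `team` label on every namespace:

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels
        violation[{"msg": msg}] {
          provided := {l | input.review.object.metadata.labels[l]}
          required := {l | l := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("missing required labels: %v", [missing])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-team
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["team"]
```

The operational pain he's describing is not writing this once; it's keeping the same pair of objects identical across every OpenShift and EKS cluster in the fleet, which is exactly the kind of thing a central platform pushes out for you.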
I need visibility and monitoring across all of them. And so this is what we sell today. The blue boxes are what Rafay sells today, on top of the distributions that exist out there. It works across all the distros that you see on this list; we've tested them all, and we have customers who use them all. And there's more to do over time, right? We're not done yet. There are three or four other capabilities that over the next couple of years we can keep releasing to our customer base. But we see ourselves living here, above the distros, because we think this is where the most pain is. The first-generation problem set for this market, which was let's solve for distribution, let's make it super easy to bring up clusters, that's a mature market now. That's solved already. Now let's work on what is next, and that's where Rafay operates. And yeah, this is operations work, right? This is an operational issue. This is not a management issue; this is not a provisioning issue. Many of us use the phrase day-two operations; I wonder if customers understand what that means. I've had the conversation: what is day one again? I don't know. It really means: once my clusters are up, what happens next? And it includes everything, not just the clusters. It includes the applications, it includes access, and it includes the lifecycle management of everything else, et cetera. That's where Rafay fits today. We've done a pretty good job of that, as you've seen in the product and as many of your colleagues have already found trying our platform. Questions on this for me, Diane? Well, I would love to see some of the demo. The demo, when we talked previously, when you showed it to me, that's really what's impressive, I think. And I'm always curious.
I mean, some of the initial confusion that I had, and I think you remember this from the conversation we had, was whether you were taking a cluster from OpenShift on-prem and helping people move it over to EKS or wherever. That's not what was happening, and the demo really cleared that up for me. So maybe you can explain a little bit about what the control model is here. Yeah, absolutely. I'll use this slide to set it up and then I'll jump into the UI. I'll log in as an organizational admin and then as a user, so you can see what different people see in the platform. This is a core principle of the platform; this is our unique way to think about the problem. We run Rafay as a multi-tenant, multi-region SaaS product on the internet. This proxy layer: we run many, many proxies on the internet, and I'll talk about that also; I'll show it in action. Our customers' clusters, OpenShift, EKS, AKS, whatever, sit behind a firewall, or behind a security group in a public cloud. But we're trying to manage them, right? So how do we interact with them? In each of these clusters, we run an operator, installed via a very simple process, I'll show that also, that connects out to the controller. It finds one of three or four IPs on the internet. So our customers only need to open a single port, only outbound, nothing in. That's a critical requirement for every enterprise. They're not going to open up ports for some guy on the internet. And even for outbound, if you tell them, hey, you have to whitelist 30 different ports or 20 different host names, that's not going to happen. So we thought a lot about this, and we made it easy for our customers to interact with Rafay. It takes just minutes to get a customer set up with our platform.
But with this channel, now your end users on the internet, or your systems, whoever wants to interact with the cluster, they think they're talking to the cluster directly. Because really, we should not be exposing our Kubernetes API endpoints to the internet. Traditionally, people use jump hosts and VPNs and whatnot to connect to them. The challenge with that is: if you're running everything in a single data center or a single VPC, I guess it's okay. But if you're running multiple VPCs, and in many customer environments every dev team gets their own VPC, how many jump hosts are you going to have now? What is your overlay network just to solve that problem? We don't need this anymore, because unbeknownst to the user of the system, while they think they're talking to the cluster's API endpoint, they're actually talking to a proxy on the internet. I'll show that in the kubeconfig that I'll download to interact with the system. And that makes life very easy, because as far as you're concerned, you're just talking to the cluster. Now this outbound channel is stitched to the session coming the other way. And whether you're provisioning clusters, deploying applications, collecting health metrics, or running kubectl against the cluster, whatever it is, it happens in a zero-trust fashion. All access is authenticated, authorized, and audited, and then you interact with the cluster. This is a core principle of the platform. Every cluster in the world is cryptographically unique to our system. Multiple customers, each with who knows how many tens or hundreds of clusters, all interact with the same service. But now this is a globally available service, and you can manage anything from anywhere. This design choice is what allows Rafay to do what we do. Without this, we could not have solved the problems we've solved. And every company needs that one thing that is different.
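The single-outbound-port model he describes means the only network change on the cluster side is an egress allowance for the in-cluster agent. As a sketch of that posture in Kubernetes NetworkPolicy terms (the namespace name and controller CIDR here are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: agent-egress-only
  namespace: ops-agent            # hypothetical namespace for the connector
spec:
  podSelector: {}                 # applies to every pod in this namespace
  policyTypes: ["Egress"]         # no Ingress rules: nothing inbound at all
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/29  # hypothetical controller IPs (a handful)
      ports:
        - protocol: TCP
          port: 443               # one outbound port, dialed out from inside
```

In practice you would also allow DNS egress so the agent can resolve the controller's hostname; the point is simply that the connection is always dialed out from inside the cluster, so no inbound firewall hole or public API endpoint is needed.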
This is what's different about Rafay. The zero-trust concept we built into the platform gives security teams the warm fuzzies, because they know what zero-trust security is. The fact that they can tie this back to Okta or Ping or whatnot, and that even across clouds, whether it's OpenShift, EKS, or AKS, their enterprise identity is now the driver for access. Right, so you've already won over the security people. This is what makes my little auditor heart happy. It's that piece of the puzzle: the ability to backtrack and see how things are being used, doing the risk management around this, and keeping all the folks in the compliance offices happy. It's interesting that it's taken this long in the Kubernetes space for this to arise; to me this was always a conundrum waiting to be solved. So keep driving on. I know I keep interrupting, but this is great. Well, if it were just me talking, I hear myself all day long and it's not that great. So I'll close the preso and jump in; let's go here. This window is getting in the way at the bottom. All right, so I'll start as an enterprise administrator. I'm logged in as myself, Haseeb, and I want to make a point here. Typically an enterprise will have multiple teams; no enterprise is a flat organization. So you have multiple dev teams, multiple production environments, multiple different kinds of setups; this customer seems to be running on-prem as well as in Amazon. Somebody needs to be able to figure out what the hell is going on across all of those teams. So we have this concept of projects; OpenShift has this concept of projects too.
So we've extended the idea of projects, as one example of how we use identity to manage infrastructure, by saying that a project can be an arbitrary isolation boundary where a team can have more than one cluster, potentially across multiple sites. Because if you think about it, a team could have an OpenShift cluster on-prem and an EKS cluster in the cloud, and this is their domain now. I'm going to show that by logging in as another user who has two clusters, one in the data center and one in the cloud, in one environment. So in this customer, some teams have one cluster, some teams have two, these teams are sharing clusters, I don't know why there are four, but there are four clusters, and on and on. I'm going to use this commons-demo example to show what's happening. My commons-demo customer, or internal customer, is running two clusters. And at the tip of my fingers, across two environments, I have a whole lot of visibility immediately. Something happened last night: my buddy, who's been helping me set up this demo, has been logging in and doing a bunch of fun things lately, clearly late at night; that was probably well past midnight. And from a governance perspective, I can see that everybody in this organization is running the same blueprint, and I'll talk about that also. When I do a demo for a customer, I usually start here, because what I'm trying to tell them is: look, you don't have a single cluster, you have multiple environments. Do you have a way to actually visualize all of these things in a single pane of glass? And of course you can see a lot of data; each of these you can dig into deeper and deeper. So let's actually go to the OpenShift cluster. Within the OpenShift cluster, from a single pane of glass, I can see what's happening. I can see that it's running three masters and three workers.
I can see inside the cluster. I can see the pods running inside the cluster. All this stuff is available to me from a single pane of glass, and I know my application is called Workbooks. So immediately, for an infrastructure team as well as for a development organization, I have an organizational view so I can see all my teams. Each team has their own view, which I showed. Then within that, I can see a cluster, and I can see resources inside a cluster. If I don't know anything about Kubernetes and I'm a developer, and we have a developer view for this also, I can exec into the application. Maybe all I need is to look at my app, because my app is breaking in production, but I don't know anything about Kubernetes. So we can give you an exec into your production environment. I don't need to worry about jump hosts, I don't need to worry about a VPN, I don't need to worry about kubeconfig context management; it just works. Or of course, if I know something about Kubernetes, it will drop me into my namespace and I can do whatever I need to do. By the way, the best thing about this interaction, and I'll show it from the command line also, is that I actually now use our browser-based kubectl more, because it does auto-complete. I forget the commands; I don't know if I'm getting older or there's just too much going on, but there are way too many. Last night I was like: is it set-context, singular, but then get-contexts, plural? Somebody explain to me why one is plural. My brain is exploding. And by the way, once I did this, so Haseeb logged in last night, notice this: the agent service account. When I log in as a different user, you're going to see that a service account is automatically created on the fly.
So I'm going to focus a lot more on the governance pieces, the visibility of all this information. The fact that we can provision clusters, I'll show that too, but we can all do that, right? That's easy. So this is the organizational view. In the same project, called commons-demo, I have an EKS cluster and an OpenShift cluster. Now I'm going to log in as a different user. I opened an incognito browser and I'm going to log in as a user called openshift. This user is going to get authenticated using Okta; most enterprises use Okta. But once this user comes into the environment, he's got only one project. On this project dashboard, if you recall, in the previous screen, as an organizational admin I could switch between all my teams, because I'm an org admin, right? I can see everything. But when a team member logs in, they can only see their environment. So based on their identity, we map them to their environment, and in this environment they only have two clusters. They're the same clusters, but they can only see those two; there are another twenty-some clusters in the enterprise. Now, my business is these two environments and these two clusters, and there I can do whatever I want. By the way, before we end the call, I'll just click this button: we have a very nice, very elegant single-button upgrade option. I won't do it right now because I want to deploy an app first, but I'll do it before we hang up. So now I have clusters. I want to show the same thing I was doing before, on EKS. So I'm going to go to this EKS cluster and do the same thing I did before; the first time around it can take a few seconds. Well, much faster than that this time. It created a service account for this username, openshift, seconds ago. So the identity of this user, openshift@c2.io, did not exist on this EKS cluster until 15 seconds ago.
It was just created on demand. If you have many users in your company, in your business, and you are letting users share admin privileges, you're making a mistake. Something bad will happen, and then somebody is going to ask: well, who did what? Yeah, you're not going to know. You're not going to be able to find out; you're not going to be able to figure out who did what. I mean, it's going to be like a mining operation. Yep. And it takes forever, right? Match this log line to that log line, and then three weeks later, maybe. Here, no: you know everything. In fact, all of this work that I'm doing right now, I'll go back to the other user, I'll go to the dashboard: we see everything. So centrally, this user openshift@c2.io has been doing kubectl access. In fact, the kubectl audit logs are available for every API call made across the entire enterprise, across any cluster, anywhere. Even the Kubernetes API calls are now audited, and you can ship them into your SIEM of choice. Sorry, I forget which one is IBM's. I was going to say Splunk. QRadar. You can ship them into QRadar. There you go. Right. But I keep focusing on these things because to me, these are the important things. Look, all of us on the automation side, on the demo side, we figure out how to deploy apps and clusters and whatnot. But this is where the pain comes. Somebody's going to ask the questions: who did what? Do we have service account management? Do the service accounts get retired when I need them to? With Rafay, just by virtue of the fact that you're using Rafay's identity management, all that stuff is taken care of. You don't need to manage another tool like Dex and go to every cluster.
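Because every kubectl call traverses the proxy as an authenticated, named user, each call can be recorded centrally and shipped to a SIEM like QRadar or Splunk. A record for the exec he demonstrated might look roughly like this; the field names and values are illustrative, not the platform's actual schema:

```json
{
  "timestamp": "2021-11-08T04:12:31Z",
  "user": "openshift@c2.io",
  "project": "commons-demo",
  "cluster": "eks-us-west-2",
  "verb": "exec",
  "resource": "pods/workbooks-0",
  "namespace": "workbooks",
  "allowed": true
}
```

The key property is that the `user` field carries the enterprise SSO identity, not a shared admin credential, so "who did what" is answerable per call without cross-correlating logs from every cluster.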
It just happens centrally. Doing all this stuff centrally, across all of your environments, is the power, and not just for identity management and whatnot. Now I'll also talk about provisioning and everything, and you'll see that centrally you can do a whole lot more. Because we're looking at clusters: oh, by the way, I've got to call this out before I forget. Under my tools, I can download my kubeconfig as a user. This file has two clusters in it because this user only has access to two clusters. If the admin downloads this file, it's going to have 50 clusters in it. So now you can switch context based on your identity. But notice the server endpoint; remember the zero-trust concept slide we were talking about. You never hit the cluster directly. You hit a proxy on the internet; Rafay is your Kubernetes proxy, as it were. And this fundamentally is a key design choice. I mentioned this earlier, and I want to underscore it again. The team here has effectively implemented a proxy-based approach to managing Kubernetes environments, and that is a unique concept. It doesn't matter what happens in your environment; everything goes through a proxy endpoint, which is why we can audit everything, and you do everything at an identity-based level. So I'm going to pause you there for a question. With all this proxy stuff, is there any issue around data gravity or lag time? Have you seen any? Oh, let's try it. Okay. Let's set context. What am I running, openshift-demo? Yeah, that's fine, good enough. So it's going to take 15, 20 seconds to set up the connection, because my OpenShift cluster is running really hot. But then it's going to start. You can see that the first time around it takes a little bit of time, because it's creating an identity for my user on OpenShift.
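The downloaded kubeconfig makes the proxy design concrete: every context's `server` points at the proxy endpoint, never at a cluster's own API server. The hostnames and token placeholder below are made up for illustration:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: openshift-onprem
    cluster:
      # proxy endpoint on the internet, not the cluster's API server
      server: https://console.example-proxy.dev/k8s/openshift-onprem
  - name: eks-us-west-2
    cluster:
      server: https://console.example-proxy.dev/k8s/eks-us-west-2
contexts:
  - name: openshift-onprem
    context:
      cluster: openshift-onprem
      user: sso-user
  - name: eks-us-west-2
    context:
      cluster: eks-us-west-2
      user: sso-user
current-context: openshift-onprem
users:
  - name: sso-user
    user:
      token: <short-lived-sso-token>   # tied to the enterprise identity
```

Standard `kubectl config use-context` switching works as usual; the difference is that the proxy authenticates, authorizes, and audits each request before stitching it to the cluster's outbound channel.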
It's going to set up the accounts, it's going to do what it needs to do. And then when it comes back again, it will be fast, fast, fast. While this is happening, I'm going to go to the UI and show the same thing here, so you'll see about lag. Every single time, it's just going to work. In fact, what we have found is that, because of the methodology we built, a WebSocket-based implementation, it's actually faster than your traditional kubectl path. So here it's back again. So this user: kubectl get nodes, kubectl config. This works, right? From my command line, it just works, and it doesn't matter where this cluster is. A number of us here used to work at a company called Akamai, the CDN company, so the proxy stuff is in our DNA. These guys probably have proxies running in front of their routers at home, their refrigerators; they think proxies all day long. They would take it very seriously, very personally, if this were not fast. But this is a rock-solid system. We have customers running clusters, at this point, literally all over the place. And if I may call it out: it just works. Everything just works. Yeah, no latency issues, guaranteed. Hey, thank you for asking, because every single customer asks me that question: so this is a proxy, is it going to be slow? Nope. Now, by the way, I'm going to show these demos via the UI, for the folks watching now or who may watch later, but there's automation for everything. As an example, you can provision EKS clusters programmatically. We have a CLI called RCTL; there has to be a CTL for everything these days. So with RCTL you can use all of the platform's capabilities and tie them back into Tekton or something, so you can provision things programmatically.
We have a Terraform provider that customers can use; it's available, and you can just consume it from the Terraform registry on the HashiCorp site. We have an OpenAPI 3.0-compliant API. So you can use any of these methodologies; it depends on where you are and what automation you prefer. We're not going to take an opinion here; you decide. By the way, our docs site has phenomenal docs, and it's open. We have a GitOps way to do everything, but you decide what you want to do. So there's a GitOps way to deploy apps, and I'll do a demo of the application part of GitOps momentarily. There's an API, there's a CLI methodology, we support integrations with a bunch of CI systems out there, and of course there's Terraform. But I'll use the UI to do these demos; just showing a Terraform job is not that interesting to people watching the recording later. So how does this work? If you're working with OpenShift, here's what you're going to do. By the way, this BlueJeans panel is always in my way; I wish there was a way to hide these things in BlueJeans and Zoom, but that's fine. So, OpenShift. Importing an OpenShift cluster is super easy. You pick a blueprint, and I'll talk about what that is momentarily; in this case I'll pick the commons demo blueprint, version five. If your data center has something like a Zscaler or a Blue Coat proxy, we've thought about that, so you can configure a proxy. And all you've really got to do is download this bootstrap YAML, go to your cluster, and run this command. Rafay will now be part of your journey. You'll see all of these things turn green, and from that point forward you can deploy apps to this cluster, you can access it, and everything will just start working. It takes about a couple of minutes, but the onboarding of existing OpenShift clusters, or even self-provisioned clusters, is a couple of minutes of your time. So it's very, very easy to start working with Rafay.
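For readers following along, the import flow described above, download a bootstrap YAML and run one command against the cluster, generally boils down to `kubectl apply -f bootstrap.yaml`, where the manifest creates a namespace and an outbound-only connector agent. The sketch below is a guess at the general shape; the names and image are hypothetical, not what Rafay actually generates:

```yaml
# Illustrative only: a cluster-import bootstrap manifest typically creates a
# namespace plus a connector agent that dials out to the management plane.
apiVersion: v1
kind: Namespace
metadata:
  name: mgmt-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: connector-agent
  namespace: mgmt-system
spec:
  replicas: 1
  selector:
    matchLabels: { app: connector-agent }
  template:
    metadata:
      labels: { app: connector-agent }
    spec:
      containers:
        - name: agent
          image: example.io/connector-agent:latest  # placeholder image
          # outbound-only connection; no inbound ports opened on the cluster
```

The outbound-only direction is what makes this work behind a corporate proxy: the cluster reaches out, nothing reaches in.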
In fact, if you're going to provision a new cluster (I'm going to have to move this window again), you can do this in the cloud also. We support public clouds. In AWS we support two options: you can use Amazon's EKS, or you can use upstream Kubernetes. Some customers, and I don't entirely follow this logic, feel like, I don't want to use EKS, I want to use upstream Kubernetes. We support that. We support a very elegant upstream-based deployment where we can do in-place upgrades for your clusters, which means Rafay manages the masters for you. My recommendation to customers is to use EKS. It's rock solid, it works really well, and Amazon has done a really, really good job with the master management they're doing today. By the way, we do the same for Azure. So EKS or upstream, AKS or upstream, GKE or upstream. We don't want to be in the business of making that decision for our customers. We have opinions, but you decide. I'll share mine with you: I think you should use EKS, but if you don't want to use it, that's fine. So we've built a lot of automation for our customers to provision EKS clusters. It's super easy: pick credentials, like an IAM role, pick a region, pick a version, pick a blueprint, same thing again; you want multiple node groups, all of that. Anything you can do with EKS, or eksctl for that matter, you can do with Rafay. But the idea here is that your operations and SRE teams don't need to become experts at Kubernetes to manage Kubernetes. I think that's the goal we should all have. The point of Kubernetes is not Kubernetes in and of itself; the point is to deploy apps. And I think we're getting distracted by the sexiness of all this new stuff, right?
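The provisioning inputs just listed (credentials, region, version, blueprint, node groups) lend themselves to a declarative cluster spec. The fragment below is a hypothetical illustration of that shape; the kind and field names are assumptions, not Rafay's actual schema:

```yaml
# Hypothetical declarative EKS cluster spec, in the spirit of the demo:
# credentials map to an IAM role, and the blueprint ties back to governance.
kind: Cluster
metadata:
  name: commons-demo-eks
spec:
  type: eks
  cloudCredentials: aws-prod-role   # maps to an IAM role/policy
  region: us-west-2
  version: "1.21"
  blueprint: commons-demo           # version 5, per the demo
  nodeGroups:
    - name: default
      instanceType: m5.large
      minSize: 2
      maxSize: 5
```

The point of the declarative form is that an ops engineer only needs basic cloud fundamentals, not deep Kubernetes expertise, to fill it in.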
This should be one of those things that, as we used to say about children back in the day, is seen but not heard. Just deploy it, park it, somebody's going to manage it. It's easy; anybody in the IT organization should be able to do it. I shouldn't need to worry about building a whole skill set just to manage simple things, right? Our teams are too busy. DevOps teams have thousands of things going on, and now five, six, seven people have to be trained as subject matter experts on this. It keeps the business behind, right? So we made it super easy. As long as you understand the basic fundamentals of Amazon, or correspondingly Azure or GKE, you should be able to provision clusters and upgrade them and whatnot. So you can build clusters with Rafay. Not all of our customers use the provisioning capabilities in Rafay; many of them just build clusters themselves with Terraform. That's fine, no problem, because you can always import a cluster into Rafay. That's number one: you can build clusters, it's easy. Next, let's look at governance. So we talked about blueprints; here's how we built this capability. A year and a half ago I had never thought about this problem. The problem we see our customers having is that each of their clusters has a bunch of things running on it, and each of those components has its own lifecycle. In this case it's a simple example: this customer is running New Relic, they have Splunk, and then they have a specific Prometheus operator for OpenShift. Notice that Splunk has already rev'd three times since I set up this environment, New Relic has rev'd twice, and the Prometheus operator from OpenShift actually doesn't rev that often. So now, every time these components rev, I need to go to all of my clusters and update all of this stuff. How am I going to do that? What is the methodology for it? You need one.
So we built a methodology where you can manage the versions of all of these components and treat them as one thing, one glob, which we call the blueprint. So here's the commons demo blueprint; it's been rev'd five times already. It looks like it started life with New Relic, then somebody added Splunk, then somebody added a new version of Splunk, then another new version of Splunk, and on and on. This is how life is for enterprises: things change, and then I need to take this blueprint and apply it to one or n clusters. And you can have anybody in the company do this; no kubectl, no nothing. What's happening with our product now, Diane, is that enterprises have these architecture review boards. They come up with policy, right? They're meeting weekly anyway. So they're going to say, look, there's a new version of Splunk. They're going to create a ticket that says create a new version of the blueprint in Rafay, and they're going to assign a ticket to somebody to say, go apply this blueprint to n clusters. And this you can now do in minutes, not weeks. It used to take weeks; now you can do it in minutes. There's no testing to do. In fact, as you're deploying, the platform will also tell you all the components that are running on each of your clusters, and if any of these breaks, the platform will go run a runbook for you and tell you what's not working. So now, at your fingertips, across any environment, you have data: what didn't work, and where. Look, we didn't write the Splunk tooling, but we will get you a shorter MTTR, guaranteed, because we're going to go figure out what's broken and give that to you right away. You don't need to go to the cluster. We know exactly what to look for, because we know these tools now.
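The blueprint idea above, a versioned bundle of add-ons reconciled against each cluster, can be sketched as a simple diff between desired and installed versions. This is an illustrative sketch, not Rafay's implementation:

```python
# Sketch: a blueprint maps add-on names to desired versions; reconciliation
# computes which add-ons on a given cluster are missing or out of date.

def plan_upgrades(blueprint: dict, cluster_state: dict) -> dict:
    """Return {addon: desired_version} for add-ons that need install/upgrade."""
    return {
        addon: version
        for addon, version in blueprint.items()
        if cluster_state.get(addon) != version
    }

# The commons-demo style example: Splunk has rev'd, New Relic is current,
# and the Prometheus operator hasn't been installed yet.
blueprint_v5 = {"splunk": "3.0", "new-relic": "2.0", "prometheus-operator": "1.0"}
cluster = {"splunk": "1.0", "new-relic": "2.0"}

print(plan_upgrades(blueprint_v5, cluster))
# -> {'splunk': '3.0', 'prometheus-operator': '1.0'}
```

Applying a new blueprint version to n clusters is then just running this plan per cluster, which is why the ticket-driven workflow shrinks from weeks to minutes.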
You don't know the internals, but from an ops perspective we know what runbook to run to help our customer save 30, 40 minutes, right on target. So, if an enterprise had their own custom blueprints, how easy is it for them? You're creating some of these as add-ons, but what if someone wants to create a new blueprint? Not only can you do that, you can pull components from a repo, which could be a Helm repo. We support native integrations with Git, GitLab, and Helm repos, so you can say, I'm going to pull a new add-on from a Helm repo: the version name is going to be one, it lives in this repo, and here's the chart name. And that's it; I can now start pulling things from an external repo. And by the way, here's a side benefit of a centralized system: I can have my clusters anywhere, but I can centrally define registries, repositories, log management systems, monitoring systems, and so on. As an example, say you're using ECR as your registry and your containers are being pulled on-prem. Please remember that the pull secrets for ECR expire every 12 hours if you're outside of Amazon. We've already taken care of that, so you can have ECR as a central registry, if that's what you want to do, across OpenShift as well as EKS clusters. If you end up using JFrog Artifactory, same thing in reverse. So now you can have a central definition of registries and repositories and whatnot, and Rafay will push the relevant secrets into all the clusters. You don't need to go cluster by cluster and manage all this stuff. Anything that you have to do today cluster by cluster, we take care of for you now. That, by the way, is a very simple litmus test for what we add to our product.
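The ECR detail above is worth pinning down: authorization tokens from `aws ecr get-login-password` are valid for 12 hours, so pull secrets on clusters outside AWS have to be refreshed on a schedule. Here's a minimal sketch of the refresh decision only; a real refresher would also call the AWS API and patch the Kubernetes secret:

```python
# Sketch of the ECR pull-secret problem: decide when a 12-hour token is close
# enough to expiry that it must be refreshed.
from datetime import datetime, timedelta

ECR_TOKEN_TTL = timedelta(hours=12)

def needs_refresh(issued_at: datetime, now: datetime,
                  safety_margin: timedelta = timedelta(hours=1)) -> bool:
    """True when the token is within `safety_margin` of its 12-hour expiry."""
    return now >= issued_at + ECR_TOKEN_TTL - safety_margin

issued = datetime(2021, 6, 1, 0, 0)
print(needs_refresh(issued, datetime(2021, 6, 1, 10, 0)))   # 10h in  -> False
print(needs_refresh(issued, datetime(2021, 6, 1, 11, 30)))  # 11.5h in -> True
```

Run centrally, one loop like this can keep pull secrets fresh on every cluster, which is exactly the kind of per-cluster chore the speaker says the platform absorbs.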
If you've got to go cluster by cluster to do something, no, that's not good. That's our job; we do it for you. It feels like the early days of platform as a service; this is almost like platform as a service for ops, right? This is automating KubeOps. Hey, let's coin the phrase: Kubernetes operations, KubeOps. There you go. I think we've hit on something here. Yeah, it's really kind of exciting. For me it's the natural progression of where the Kubernetes and OpenShift communities go when we collaborate. Everybody creates their own variation of this at every enterprise, and it's just nice to see someone unifying it all in a single pane of glass like this. I'm sure there are things we would all like to extend, but this is pretty exciting stuff for me. I know we get excited about strange things, but this is one of them. We all speak from prior pain, right, when we come on these calls? So that's why you're liking what you're liking. By the way, even blueprints you can manage programmatically; some of our prospects ask for this. So you can declaratively say, my blueprint should be X. You can update blueprints on the fly without any downtime on clusters, and nine times out of ten our customers are tying this to some pipeline to apply these things to production clusters. Of course, I'm showing the UI, but there's an API for everything. Speaking of automation and pipelines, the other thing we see more and more customers use is this: we support the ability for customers to deploy applications to more than one cluster. So we support multi-cluster deployment. Here's a YAML file; it's just nginx.
So from a governance perspective, when I deploy an application, let's say a production application... and by the way, what our customers do is create a project called production and only give access to the apps team, not the dev team; that's how they do isolation boundary management in our system. I can set a policy that says no drift allowed, which means that if you deploy applications to production via Rafay, and somebody somehow gets kubectl access directly, they cannot change the app in production. Rafay runs a drift loop. It keeps checking across all your clusters all the time. It knows the final state of your app, and it's going to check all the clusters. If it sees drift, you can block it, so it reverts the change, or it can just let you know; or, if it's a dev cluster, just let them do whatever they want. So you can do that very easily. This little box here is a tiny little box in the UI, but there's a lot of work behind it; the drift detection logic behind the scenes was a lot of hard work. But we heard the message loud and clear in the early days when we started selling the product: if you're going to be a governance platform, these are the things that matter to me as a customer, so you've got to figure this out. Once we built this, there was a side effect that's applicable to the edge. We bring this concept of labels to clusters, not just nodes. Kubernetes has node labels, right? We extend the concept to clusters. So now you can say, take all of my clusters that are OpenShift clusters. Right now in this environment there is, of course, one cluster of type OpenShift. But let's say in the next year a hundred more clusters show up that are OpenShift clusters.
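The drift loop described above can be sketched as follows. This is an illustrative reconstruction of the policy behavior (block, notify, or allow), not the actual implementation:

```python
# Sketch of a drift loop for one cluster: compare the desired app spec to the
# live spec, then revert, report, or ignore depending on the project's policy.

def reconcile(desired: dict, live: dict, policy: str) -> tuple:
    """Return (new live state, events).

    policy: 'block' reverts drift, 'notify' reports it, 'allow' ignores it.
    """
    events = []
    # Keys whose live value differs from desired, plus keys missing live.
    drifted = {k: v for k, v in live.items() if desired.get(k) != v}
    drifted.update({k: None for k in desired if k not in live})
    if not drifted:
        return live, events
    if policy == "block":
        events.append(f"drift reverted: {sorted(drifted)}")
        return dict(desired), events
    if policy == "notify":
        events.append(f"drift detected: {sorted(drifted)}")
    return live, events

desired = {"replicas": 3, "image": "nginx:1.21"}
tampered = {"replicas": 5, "image": "nginx:1.21"}  # someone used kubectl directly

state, events = reconcile(desired, tampered, policy="block")
print(state)   # -> {'replicas': 3, 'image': 'nginx:1.21'}  (reverted)
```

In production the policy would be "block"; on a dev cluster, "allow".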
Rafay will understand that, and the application in question will automatically get deployed to the hundred new clusters. You don't need to go run a pipeline again. Why does this matter? Edge. At the edge, let's say you decide this app should run in all of my stores in North America. Create a label named north-america and map the application to it, and as your retail stores pop up, the app will just show up there. So from a supply chain perspective, you don't have to go through this again and again and again. Sounds simple, but it's actually a lot of work for ops teams, just from a reconciliation perspective: this is the inventory, these are the five apps that should be running on this cluster, did that happen? We guarantee it's going to happen; you don't need to worry about it, because centrally, from the dashboards I showed before, you can get an inventory of which labels map to which clusters and be guaranteed that it all works. Small things become big problems when you have many clusters. If you have one cluster, I wonder how important the label is. But when you've got 10 of these, 20 of these, 50 of these, this is a problem, and we have enough customers at that level of scale, or well beyond it, with hundreds of clusters now. So it's a small little thing, but now you have this application deployed. And by the way, as an aside, we support this concept called templates, so you can create a workload and say, in dev use this registry, in pre-production use that registry. You can create templating in the application, which is a unique concept. That's a whole different session we should do another day, but I do want to get to this GitOps piece, because we've got nine minutes on the clock here. So we support essentially a multi-stage pipeline concept in the product that you can tie back to a trigger. Just confirming that you can still see my screen; I pressed a button in BlueJeans.
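The cluster-label placement just described can be sketched as a label-selector match over a fleet; any cluster, current or future, that matches an app's selector becomes a deployment target automatically. An illustrative sketch, not Rafay's code:

```python
# Sketch: clusters carry labels (like Kubernetes node labels, lifted to the
# cluster level); an app is mapped to a selector, and placement is the match.

def matching_clusters(clusters: dict, selector: dict) -> list:
    """clusters: {name: labels}; selector: labels the app requires."""
    return sorted(
        name for name, labels in clusters.items()
        if all(labels.get(k) == v for k, v in selector.items())
    )

fleet = {
    "store-ny":  {"region": "north-america", "type": "edge"},
    "store-sf":  {"region": "north-america", "type": "edge"},
    "store-ldn": {"region": "emea", "type": "edge"},
}
app_selector = {"region": "north-america"}
print(matching_clusters(fleet, app_selector))  # -> ['store-ny', 'store-sf']

# A new store comes online; no pipeline re-run needed, it simply matches.
fleet["store-tx"] = {"region": "north-america", "type": "edge"}
print(matching_clusters(fleet, app_selector))  # -> ['store-ny', 'store-sf', 'store-tx']
```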
You can have it hooked up to GitHub or GitLab, and as and when things change in a repo, Rafay will get a webhook and deploy applications for you. What we're seeing is that customers use our platform for cluster management and operations, and now they're starting to use Rafay's built-in pipeline capability to deploy applications, because they can build these pipelines and then share them across the organization. So I'll show an example of this, if I can find it. Oh, it's going to ask me for two-factor authentication. Oh my God. Damn, that's secure. Talk about making my life harder. It's awesome to have the security, but in a demo... Yeah, I know, but it works; it shows that it's live. So I'm going to change the replica count from three to four. If I go back to my portal... there, it totally got picked up. It saw the replicas-four commit and deployed the app, with four replicas, already in my cluster. I go back to my terminal here, kubectl get my workloads, and it's working. My service account is now going to be created, and it's going to show me that there are four pods running. So now I have a cluster deployed, I was able to tie my application via GitOps to one or more clusters, I can make changes, and webhooks were triggered that told Rafay to deploy my application. So from an automation perspective, I've got the cluster, I've got the app, and there: one, two, three, four, and one of them happened 32 seconds ago. If I do this again, it's going to be faster, because remember, every time I do this, my service account is created, and when I'm not using it, it goes away, because I don't need a dangling service account. So Rafay is taking care of dangling service accounts also. There, it happened now. It gets faster and faster every time; this time, 52 seconds. So the fourth pod was added, and it just worked. So, me sitting here: look, I used to be pretty technical a long time ago.
I'm not that technical anymore, but I'm able to keep up with this, do it at scale, and do it across multiple applications now. What the team has done here is make it super easy to do all this stuff, right? And that's the point: we want modernization to happen at a fast pace, and Rafay can do that. Everything, of course, is audited, and the last run happened in 28 seconds. So yes, we provide the automation, we provide the visibility, everything is secure because it's identity based (you saw the service account creation happen on the fly), and then there's governance for everything. Nothing is lost in the process. And of course, if you have applications that are using secret stores, we support HashiCorp Vault integration, et cetera. There's a lot in this platform that we're not going to get to on this call, but if somebody wants to be up and running with OpenShift plus a cloud-managed Kubernetes, you can do this today. Well, the whole thing to me is really impressive, and it's timely, because when we talk to our customers, they have multiple deployments, different clouds, all sorts of configurations. And I love the part where it's a single bootstrap YAML that installs it into an OpenShift cluster; that simplicity is what's really nice about this. And, not having been hands-on in ops in a long time, it makes ops almost look easy. But we know it's not, and the complexity that's hidden under these single-pane-of-glass views is pretty impressive. So I'm really looking forward to seeing what comes next, more OpenShift integration, and more demos to come.
So maybe you could close by talking a little bit about where you think you're going with this and how you see yourselves fitting into the greater Kubernetes ecosystem; that might be a good way to close out the hour. Yeah, four minutes to go; I'm keeping an eye on it. So, one thing I didn't mention: there's a lot more to this platform than we're going to get to today. It does backup and restore, so you don't need to go buy another product for that; you can just use Rafay to take backups of your clusters. As part of the pipelines that I showed you, you can insert a stage called infra provisioning, so you can actually bring your OpenShift Terraform jobs into Rafay and run them from Rafay. You can have GitHub triggers kick off a Terraform plan, then apply the Terraform job, and then on top of that deploy applications. So with one click, you can build an entire sandbox for your developer. We support that out of the box, which is really, really cool, because look, all of our customers are Terraform fans, and we have Terraform support in the product. We have security policies built in: we support PSPs as well as OPA in the platform. PSPs are, you know, going away in 1.25, so all of us are moving to OPA, and we support OPA in the product already. So if customers need OPA support, they can create central policies for that. Recipes, governance examples, right? These are all built into the product, by the way, and we provide examples of these. I show these things because of the same mantra I mentioned before: anything that you've got to go cluster by cluster to do, we want to do for you. So security, we're taking care of. There's a lot more to do here.
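The pipeline-with-infra-provisioning idea described above might be expressed declaratively along these lines. The kind, stage types, and fields here are hypothetical illustrations of the concept, not Rafay's actual pipeline schema:

```yaml
# Illustrative multi-stage pipeline: a Git trigger runs a Terraform
# plan/apply, then deploys the app on top, yielding a one-click sandbox.
kind: Pipeline
metadata:
  name: dev-sandbox
spec:
  trigger:
    type: webhook                     # e.g. a GitHub push
  stages:
    - name: provision-infra
      type: infra-provisioning
      terraform:
        repo: git@github.com:example/openshift-infra.git  # placeholder repo
        action: plan-and-apply
    - name: deploy-app
      type: workload-deploy
      workload: nginx-demo
```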
I think over the next stretch of time we've got to dig deeper into things like runtime security, maybe cost management, and a number of other things that our customers are asking for. There's a lot to do here. Look, our customer is usually the SRE slash ops team, and we are basically partnering with them to deliver an internal service to their DevOps teams. That's our job, right? The consumers of our technology are DevOps teams who, if we didn't exist, would have to solve these problems anyway. This is the thing that I think anybody watching should take away from this conversation: everything you've seen on my screen, and what I showed is only a subset of the problem... maybe you're not solving one of these problems yet, but you don't have a choice; you have to. These are not optional things. All of them have to be done for a production environment. It's a lot of work. Find partners who can get you there faster, because your customers, the developers on the other side, dev to the left, SRE to the right, and security sort of all around, they all have lots of needs, right? And your backlog is getting bigger and bigger and bigger. We can help reduce that backlog. Yeah, I think that's a great note to end on. We definitely are going to have you back as we get more of our joint customers using this, and have them share their journeys with this approach. It's going to be interesting to see; we've talked a little bit about that offline too. So I'm looking forward to doing that with you, and to seeing how this influences things. I think this is going to be a huge influence on the Kubernetes communities in terms of what people's expectations are for single-pane-of-glass governance and operations.
This whole bailiwick that you're showing off is something whose time has come. We've seen customers and SREs and DevOps teams try to build things like this for themselves, so it's wonderful to have a partner like Rafay doing it, one that we can bring into those conversations. So once again, Haseeb, as always, we filled up all of our time. If people have questions and need to get hold of you, the best way to do that, Haseeb, would be? My email is on the screen here, haseeb at rafay.co; hit me up on LinkedIn. Our website is rafay.co. Please reach out, and even if it's not about buying the product, if you have questions, et cetera, it's always fun to communicate with smart people who are doing fun things in the community. This is how we learn. We've taken the path we've taken because people told us, right? This is what I need; if you go build this, it's going to solve a problem. Okay, you listen and you build that. So we are here because people gave us feedback, and we'd love feedback. All right, well, thank you very much, Haseeb, for joining us today. And thanks to Chris Short, our producer, and all of you who have been watching along with us. I'll post the slides from this and some additional resources on the OpenShift Commons blog shortly, and this video should be available almost instantaneously on YouTube in the live stream playlist. So wherever you are, in Twitch or YouTube land or here in BlueJeans, welcome, and keep coming back; we're really thrilled to have you. And if you have a topic you want to hear about, let us know and we will make it happen. All right, thanks, Haseeb. See you at the next one, Haseeb. Looking forward to seeing you. It's all good. Thank you guys. Take care. All right. Bye-bye.