Let's get into the big topic of open source, something that we always have in mind. This is so awesome. Open culture has actually been a really significant part of how the Kubernetes ecosystem boomed. Good afternoon or good evening wherever you hail from. Welcome to another episode of The Level Up Hour, where we talk about containers, Kubernetes, and today virtualization. Joining Jafar and myself today is Peter Lauterbach, who is the product manager for OpenShift Virtualization. Good morning, Peter. Good morning. Actually, I've got several responsibilities here at Red Hat. I work with a couple of teams. One is the traditional virtualization platform, Red Hat Virtualization, RHV; OpenShift on RHV, which is a deployment option for OpenShift; and OpenShift Virtualization, which is the new, cool stuff that's actually based on some very stable, old stuff. And that's actually what we're talking about today. Oh, and it's based on KubeVirt. And let's just jump right on in there. What is OpenShift Virtualization? OpenShift Virtualization is actually very simple. It's the idea of running a virtual machine on a container platform, specifically Kubernetes. So if you think of KVM, VMs, and pods, that's it. So we'll use some language interchangeably. The upstream project, as you know, Red Hat is open source, everything we do is publicly available. The upstream project for OpenShift Virtualization, the feature, is KubeVirt. And KubeVirt's been around, actually, we're coming up on, I think, six years now. It's been around for a while. We went generally available in OpenShift 4.5. But the idea is running a virtual machine, a KVM virtual machine that you would run on other platforms like RHV or OpenStack or RHEL, running in OpenShift. And everybody goes, okay, well, I wasn't going to do cloud native with VMs, so what's the point? The point is 90% of the infrastructure running on the planet today, except a little piece on bare metal, is running on virtual machines. And that includes databases, vast amounts of data, yottabytes of data, business logic that's baked into both Java and Windows VMs, and then web front ends, whether it's IIS or Apache, NGINX, whatever. And so your classic three-tier application, now I've got to go modernize this thing. How does that work? Right? So I want to bring that along. And instead of throwing it away and rewriting it, I can actually leverage parts of that. I can use the data that I have today, I can use the logic in my middleware that I have today, and build around that and enhance that. All right, right on. And I know that we're going to be getting into much more depth with Peter, but we also wanted to kick it off with Jafar driving a demo of this technology. So Jafar, can you tell us a little bit about what we're going to see today? Yeah, sure. So again, thanks, Peter, for joining us here. Let's kick off the demonstration and then we'll go into more detail on what we're going to be speaking about. And Jafar, this is a live system, live demo, right? Oh, yeah, we always do live demos. Oh, what could possibly go wrong? Exactly. We missed Randy and his famous quote, but I hope he's recovering well, because he's actually going through some surgery, I think. But again, what can go wrong with live demos? So what we're going to have a look at today is what we refer to as composite apps. So basically, an application that has multiple components, and some of them are going to be running in containers.
And other components, traditionally, for instance, backends, et cetera, are going to be running inside a virtual machine. And just to showcase, basically, some of the nice capabilities that we have with OpenShift, I wanted to have a Windows virtual machine running inside OpenShift, hosting a SQL Server database, and have a .NET Core application running outside of that virtual machine, directly on OpenShift in a container, and talking to the SQL Server database over there. So nothing really fancy, but just a very straightforward guestbook app that I have built for this demo today. So as you can see here, you guys can go ahead and log in if you want. I'm going to share the link with you, Scott, so you can give that to our viewers and they can create an entry there, and then we'll check that it's live within the database. So basically, I'm going to create a new entry here. I'm going to say hello, Jafar. And you see that I now have a new row that's been added in there. Guys, please feel free to do so. So yeah, this is basically the containerized application that you see. And if we switch back to OpenShift, as you can see here, the interesting thing is that I am within a single project that's called virtual machines here. And I can see already that I have two types of resources. One of them is a traditional container that runs the .NET application, as you can see here. It's managed by a DeploymentConfig. But I also have, within the same landscape, something that is a first-class citizen of the OpenShift platform, and actually it's a virtual machine. You can see here the kind is VirtualMachine. And if I go ahead and have a look at more details, I can even have a console that allows me to directly access the virtual machine straight from OpenShift. So that's really amazing, because now I'm managing all of my resources exactly the same way. The networking is managed through Kubernetes, through OpenShift in this case, the storage as well, the routing to our services. So it all becomes natively managed by the platform. And then I have less friction compared to a deployment where my containerized application would be on OpenShift, and then I would have to go through some networking configuration to go out of the platform, and then through firewalls, and then through the VM networks, et cetera. And of course, I think we're going to speak about that, but there are also different types of organization where people are responsible for different areas or domains. So now we're trying to take some friction out and make the developers' lives easier. So did we have any new entries in there? Okay, cool. So I see that, yeah, some people have added some stuff in there. So now what I'm going to do, let's check the VM. I see that the VM has an IP address that is on the same networking layer as the pods. So that's really interesting, because the VM itself is within the OpenShift landscape. But what's allowing me to connect from the container to the SQL Server that is inside the VM is this notion of a Service. So I can see that I have a Service that I have exposed, it's called sqlserver, that points to my virtual machine and that targets port 1433, which is the SQL Server port within the virtual machine. So basically the container and the services within the VM communicate exactly as they would with a traditional Kubernetes Service. So that's totally, I would say, transparent for the developers. But now what if I wanted to access those services that run within the VM from outside of the platform, from outside OpenShift?
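To make the two resources Jafar just walked through a bit more concrete, here is a rough sketch of what a VirtualMachine object and the in-cluster Service in front of it can look like. The names, labels, sizing, and disk bus are hypothetical, not taken from the demo; only the SQL Server port 1433 comes from the discussion.

```yaml
# Minimal sketch of an OpenShift Virtualization (KubeVirt) VM; all names/labels are hypothetical.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: win-sqlserver
spec:
  running: true                      # keep the VM powered on; restart it if it stops
  template:
    metadata:
      labels:
        app: win-sqlserver           # label carried by the VM's pod, so a Service can select it
    spec:
      domain:
        cpu:
          cores: 4
        resources:
          requests:
            memory: 8Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: sata            # Windows-friendly bus; virtio requires guest drivers
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: win-sqlserver-disk
---
# The in-cluster Service the guestbook container talks to, exactly like any other Kubernetes Service.
apiVersion: v1
kind: Service
metadata:
  name: sqlserver
spec:
  selector:
    app: win-sqlserver               # matches the label on the VM's pod above
  ports:
    - name: mssql
      port: 1433                     # SQL Server port mentioned in the demo
      targetPort: 1433
```

With something like this in place, the application just resolves the Service name; whether the pod behind it holds a container or a virtual machine is invisible to the consumer.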
So in our case, this virtual machine and this OpenShift platform are running in the cloud. And what I want to do is actually connect to my remote desktop service from my computer. And that's basically what I'm going to do here. So if I switch back to the remote desktop session that I have, you can see that I am accessing a full Windows instance directly from there. And I'm going to refresh the data here. And you see indeed that the data that you guys have put in there shows up within the SQL browser that I have. So how am I able to connect from the outside to the remote desktop service that is running on the Windows VM? It's very easy with OpenShift and with the virtualization capabilities. We also have the ability to expose services to the outside. And as you know, the RDP port is traditionally 3389. And now I have exposed what we call a node port that allows me to access those services from the outside. So now if I connect using this node port, 31592, I can access the remote desktop service that is within the VM. And if I wanted to access the SQL Server database from the outside as well, I could just do the same thing, expose a node port that would allow me to connect directly to the service within the VM, and then I can consume it from the outside world. So yeah, that's part of it. If I may, that's actually an important point, because it's not just, hey, I can connect to my database from the outside world. Any service inside a virtual machine that you would normally use, like SSH or remote console access, is still available. As long as you expose them through these Kubernetes constructs, your virtual machine that's running inside of OpenShift operates exactly the same as it would on your existing platform. I would also like to point out that typically we wouldn't expect someone from the outside world to be connecting to our database, right? Having it co-located within the OpenShift cluster is probably the preferred architecture, to make sure that things like data transmission rates are good and that you're not exposing data to the internet at large, right? Yeah, yeah, sure. I mean, this can also be exposed just within the intranet; it doesn't have to be the outside world. But basically what it means is there's enough flexibility for you to manage networking both within the OpenShift landscape and also from the outside. So it's not, I would say, restricted to only what's running within OpenShift. So basically it gives you all the flexibility, but everything is managed through the Kubernetes networking layers. So for things like SSH connections and other access that's typically exposed outside of your local network, that's how we do it. Got it. Yeah, and the other important thing is that the virtual machine, you know, the KVM VM that's running on OpenShift, is a good citizen of the Kubernetes platform, right? So we'll get into more detail later, right? But the storage you get from PVs and PVCs, the networking you get through either the cluster SDN or, if you're using one, a secondary network: all of that Kubernetes-ness that you can do today, the virtual machine can take advantage of. Yeah, exactly. And if we look at the virtual machine details, all of the information I get from the traditional OpenShift UI, I can also see for the VM. I can see the disks, for instance. And in this case, the disks are managed by persistent volume claims, which are the native way Kubernetes manages storage.
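A similarly hedged sketch of the other two pieces described here: a NodePort Service exposing RDP, and a persistent volume claim of the kind backing the VM's disk. Port 3389 and node port 31592 are from the demo; the names, size, access mode, and storage class are hypothetical.

```yaml
# NodePort Service exposing RDP on the Windows VM to clients outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: win-sqlserver-rdp
spec:
  type: NodePort
  selector:
    app: win-sqlserver               # same hypothetical label as in the earlier sketch
  ports:
    - name: rdp
      port: 3389                     # standard RDP port
      targetPort: 3389
      nodePort: 31592                # the node port shown in the demo
---
# A PVC backing the VM's disk, dynamically provisioned from a storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: win-sqlserver-disk
spec:
  accessModes:
    - ReadWriteMany                  # shared access is what allows the VM to live migrate
  volumeMode: Block
  storageClassName: ocs-storagecluster-ceph-rbd   # hypothetical dynamic storage class
  resources:
    requests:
      storage: 60Gi
```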
So basically, I can very easily create storage for my VMs. I have a storage class here that provides dynamic storage for my VMs. So everything is managed just as you would with any, I would say, traditional Kubernetes resources like pods, containers, et cetera. All right, so that's it. I wanted to give you a very brief tour of this notion of a composite app running on the platform. And as you can see here, the value is that I do have a fully working virtual machine within OpenShift, and I can start using those, I would say, legacy services right away when I'm starting my modernization. But as soon as I'm far enough along in the development of my microservices, for instance, if I wanted to slice the application into more specialized components, I can start taking some parts out of the virtual machine and moving them to containers as well. So do you guys want me to show that now, or do we want to come back to that later? Peter, how is that a good... Let's talk a little bit about what the bigger picture is and then come back and demo it, right? Because you're teasing the idea of, hey, using a Service in OpenShift to abstract things away; I don't care whether the database is in a virtual machine or a container, that's the whole point. We're hiding that from you, because that may change. So if we could bring up a graphic I'm sharing here that talks about modernizing applications, please. So the bigger... We've just been talking about a single virtual machine; the bigger picture is that most applications are bigger. A three-tier application is probably a dozen VMs at the least. But let's go with the simple case: hey, it's a database, middleware, and a web front end. So on this particular chart, time flows down the page, so I've got four different stages. Your application is running on your legacy virtualization platform, whatever that happens to be today. We actually have migration tooling that will bring in those virtual machines and migrate them, just as we do to migrate things to RHV and OpenStack today. That same, similar technology is used to bring the virtual machine wholly into OpenShift. So I bring in my database, I bring in my web front end, and I bring in my middleware. And that can actually be done live, right? So a warm migration, where the application continues to serve users on the old platform, and then I pull the data along in the background and it starts converging. And when I want to take my maintenance window, I can actually shut down the service and the application on the original source. As long as I've got my network set up correctly and my load balancers correct, I pull over the last bit of data, boot that virtual machine, and suddenly I'm now serving that same service on another platform, and for whatever that window is, nobody knows the difference. The users don't care where it's running. So what's the point of this? I mean, at the first stage, yeah, I've got virtual machines running on a Kubernetes platform. Awesome. We're done, right? No, that's not the point. Virtual machines on Kubernetes are interesting, but now let's do some more cool stuff. So the easiest thing is probably to replace the web front end with something a little bit more modern, right? So we can either stay with Apache in containers, or NGINX is kind of the modern way here, right? So let's replace that part. And now I can actually modernize different pieces of that.
And whatever your favorite paradigm is, whether you're using Windows or Linux, to turn this thing into microservices, right? And so now I'm at stage two, and now I can actually take advantage of, and this is where I'm really leaning into OpenShift, right, all of the things that I would use as a developer: developer pipelines, service mesh for security and observability, and all that cool stuff. I can actually use that whether it's a container or a VM. I really don't care, right? It shouldn't matter to me. And then I can go, okay, now this middleware, you know, it's a big knot of Java in a couple-of-gigabytes WAR file, I've got to go make that into a modern thing, right? And I can start carving off pieces of that. I can run it in Quarkus, which is our very cool container-based Java technology. And I can actually stop here at stage three, right? I've actually worked for a database company for a couple of years, right? If you know DBAs, they're very risk-averse, you know, and it's like, look, you can have my database and my VM when you pry it from my cold dead fingers, right? You can continue to run this database in a VM forever, right? And then the rest of the application and the dev team can go ahead and modernize around that. But at some point, you may say, hey, look, both Microsoft and a lot of other companies have put a lot of money into making SQL Server run on a container platform. What does that look like? Right? And once I'm comfortable with that, I can actually go full containers all along and basically migrate that data into my container-based database. And basically the VMs now go away. So, one of the questions that we had planned to ask was how is this different from other virtualization solutions? But I think you've already answered that: it does do virtual machines, but the goal is not to be a virtualization solution, but instead to provide flexibility and options to folks that are looking to change the architecture of their applications. Exactly. Exactly. And it's actually two populations, if you will, right? If you're doing your own software development, and I was surprised, like some of the banks we talked to, they have tens of thousands of software developers that are scrambling to get to the cloud and containers already, right? And let me use the example of SAP, right? Which is a very strong partner of Red Hat. They've got a very strong VM-based application, you know, where they said, hey, look, we're going to turn this into cloud native and we're going to do a whole bunch of work over here. But that's not a binary decision, right? It's not something I do overnight. And they said, hey, if there's a way we could leverage some of this database and virtual machine technology that's been stable and solid for years on the Red Hat KVM platform, can I use that in the cloud and in a container platform, and then build my cloud-native application around that? So there's a project called Project Gardener that we're working on together with them. And there are other customers we can talk about later that are doing similar things. So the long story short is yes, you could stop at virtualization and not do anything else. But the reality is not only are users themselves building container-based applications, their vendors are as well, right?
So even though you get, you know, an OVA appliance from your vendor today, at some point they're going to start shipping you a container-based application. Well, and I think we've seen a shift in the development community at large, from needing to know all the things to really driving more into API-based programming, right? So they can be a specialist in their thing, say the web front end. They don't have to be a specialist in data storage, backend operations, et cetera, et cetera. They can just use the API that's provided and specialize in their skill set. Exactly. Yeah, the database, hey, that's somebody else's problem. And as long as my contract and interface with that database remains the same, I don't care how it's implemented behind the scenes. And this idea of a strong abstraction is very important, because Kubernetes gives you that. And I want to, we kind of skipped over this part, or I think it's worth talking about this part: okay, I've got a new container platform. I can run it on prem. So what we're talking about today is virtual machines on a bare metal cluster, right? Which is kind of an interesting concept. But we also have designs to actually run these on bare metal instances in public clouds, right? So you can actually have an OpenShift cluster in AWS, you know, that's running in VMs on AWS under the hood, and then start provisioning and adding in bare metal instances from AWS and run VMs on those. But the other part is, like, okay, you're giving me a new platform, well, Kubernetes isn't that new, but I'm on a different platform running this technology. But I've got lots of old network, you know, I have a lot of legacy network here. I have a lot of traditional storage, right? Maybe I've got big NetApp arrays or Hitachi or HP, like there's all this old infrastructure that's still running those VMs. The beautiful part about Kubernetes is its two strong abstractions for storage and networking, which are the Container Storage Interface, CSI, and the Container Network Interface, CNI, and the VMs we run do use those interfaces, right? So I get a PV and I don't care if it's coming from a cloud-native storage, say OpenShift Data Foundation, or if it's coming from a NetApp that's in another rack. That CSI interface and abstraction protects me and allows me to provision storage anywhere in my data center. Yeah, great. So thanks a lot, Peter. And just to maybe elaborate a little bit on what you mentioned regarding, I would say, taking those composite apps and shifting them to on-premises or to cloud providers. One of the real values I see here is that if I wanted to take the whole application and, for instance, go multi-cloud and deploy the containers and the VMs, with the traditional approach I would have to cope with the very specific way that each of those cloud providers handles routing, networking, storage, et cetera, and how I need to configure those types of elements. Whereas here, if I have OpenShift as the common ground on the multiple targets that I have, I can just move the data from one OpenShift to the other, meaning copying, for instance, the content of the persistent volumes. And then I have my VM ready to be kicked off on any of those clusters. So that takes away a lot of the complexity of having to strip out all of the layers around the virtual machines to make them work together.
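On the CNI side of that abstraction, attaching VMs to an existing data-center network is typically done through a secondary network defined as a NetworkAttachmentDefinition. This is a minimal sketch under assumed names; the bridge name and network name are hypothetical, not taken from the conversation.

```yaml
# Hypothetical secondary network (Multus plus the standard CNI bridge plugin)
# that a VM could attach to in addition to the default cluster SDN.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: datacenter-vlan
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "datacenter-vlan",
      "type": "bridge",
      "bridge": "br1"
    }
```

A VM template would then reference it as an additional interface, for example a network entry using `multus: {networkName: datacenter-vlan}` alongside the default pod network, so the guest can sit on the legacy VLAN while still being managed by Kubernetes.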
One thing I'd like to actually ask the audience here, I see there's a bunch of stuff in chat. So it would be interesting to know two things. One is, what is your predominant environment for software development? Is it a Linux-based architecture or Windows? Those tend to be the two camps that people fall into. And more importantly, are there infrastructure components that you're concerned about? Like, hey, how do I use this in OpenShift, and how do I use this in Kubernetes? So whether it's a NetApp or a Hewlett Packard 3PAR, if you can just type those in the chat, we can address those directly, because I think we've been talking in generalizations, but, hey, this is what my environment looks like, what would that be? I think that would be interesting to hear. Yeah, sure. But maybe before we get to that, I've seen a very interesting comment from somebody called Andrew Sullivan. I don't know who that is. He says, VMs running in pods, with the scared smiley there. Would you like to elaborate a little bit on that, Peter? I mean, do we really run... Can you repeat that? My thing actually said I needed to reload, and I'm not going to reload. Yeah, yeah. So I said there was an interesting comment from somebody we don't know at all named Andrew Sullivan. And he says, what, are you running VMs in pods? And yeah, I mean, can you elaborate a little bit on that? Sure, sure. So right, pods. You know, containers are interesting, but you can't orchestrate containers without having sort of a higher abstraction, right? So a pod is really the thing that Kubernetes orchestrates. It does scheduling, it does provisioning, and more importantly, any sort of resource allocation, right? And also the scalability of it, right? Which is, hey, I'm seeing a lot of use of this database or this web front end, I'm actually just going to go create new instances of it, right? And under the hood, let's see, I think I've got a slide that shows this. If you could bring up the new graphic that talks about containerizing KVM. So what we've got here is three different Red Hat platforms. On the very left is Red Hat Virtualization, or RHV, that we all know and love, based on oVirt. And over on the right is OpenStack, right? Which is a very popular platform for telco workloads and also service providers, right? So I can build an entire multi-tenant private cloud in a very componentized way. The important part is the piece in the middle that's actually running the virtual machine. It's based on QEMU, which is the emulation layer, and KVM, which has been in the Linux kernel for, what, over a decade now, right? And this set of bits and binaries is exactly the same on every Red Hat platform. And RHEL too, by the way, let's not leave out our favorite. I was just going to note, being the RHEL person, that it wasn't listed, but it's only that set of three technologies there, right? It doesn't have any of the management. Well, exactly, right? RHEL is the basis for everything we do, including RHCOS, right? Red Hat CoreOS. And we do have folks that are running, say, smaller edge situations, or, hey, I only need a couple of VMs. Running on RHEL is totally appropriate, right? But you're now responsible for the management piece, right? Provisioning and running and managing. And if the VM falls down, you have to pick it up, right?
Like, that's now your job. And if you want to do that with something like Ansible, great, go ahead. But we have virtualization products and layers for a reason, right? So RHV does that for a regular virtual infrastructure. But these bits are the same on every platform. So there are two benefits here, right? One is we're not building a new product for every platform. It's literally the same bits; whatever the RHEL build pipeline spits out, these three platforms pick up, right? That's very important. So when you get a CVE, you know, a vulnerability, a security vulnerability, and it's fixed in RHEL or RHEL CoreOS, that shows up in the next build of OpenShift or OpenStack or RHV. That's super important, especially with what's been going on in the past couple of months, right? And the quality engineering: now I've got these virtualization bits being stressed in different ways on the different platforms that we offer. So anything that I fix on the OpenStack platform, the other platforms benefit from as well, right? Whether it's a performance optimization or some sort of thing that it trips over. So we kind of skipped over the management part, right? Because the management layers are really the different piece, right? So in OpenShift, you've got the kubelet that's running on the node, but then there are CRDs, custom resource definitions, for the virtual machines that Jafar was showing earlier in the UI, and that actually controls what a VM looks like, right? So here's my VM, it's got this kind of CPU and memory. If you look at the YAML for a VM, a lot of the things you'll see on RHV and OpenStack look very similar, right? It's got this number of disks and these kinds of behaviors. And then you set the Kubernetes-ness of it, like, if this thing falls down, do I want it to run always? The other important thing that we've got here, you'll see this red curved line from RHV into OpenShift. So for people that are familiar with RHV and its UI, it's got a pleasant UI in front of it, and with that you can actually manage, start, stop, do your basic operations and administration from RHV into an OpenShift cluster. So I would consider that a transitional technology. But once you get into, hey, I'm managing an OpenShift cluster, that's Kubernetes, right? And once you get more comfortable with that, you'll probably rely on that less and less and just do things directly in Kubernetes. Yeah, so I think to sum it up, the interesting bit here is that we didn't come up with some kind of Frankenstein thing to create these new entities in Kubernetes. It's basically using a very rock-solid technology that has been around, that works the same with RHV or with OpenStack, to provide the virtualization. It's just that the ability to manage those VMs, the networking, the storage, is now delegated to Kubernetes instead of another type of component. Which is why, I mean, we're able to run all kinds of VMs, and we also have certifications that apply, that are valid for this type of virtualization within OpenShift itself. Yeah. So we do have a question from the audience. Devaton Dickey asks, is it feasible to run a Windows VM running legacy .NET workloads? Are there any implications for performance?
Or is it more logical to modify the app to run on .NET Core in a container? Okay, there's a lot to unpack there. Exactly. I'll start with the easy one. Red Hat and our partners are about giving people options, and understanding the difference between those options is important. The best example is SQL Server, right? So you can run it on bare metal RHEL, you can run it on Windows on bare metal. Then, once you put it in a VM, I can run it in a Windows VM, I can run it in a Linux VM. I can run it on a container-based platform, right? So we're not dictating that you have to do it this way or that way. And I would not start from, hey, we're here and we want to push this wet noodle and get there. The right way to think about it is, what do we want the end state to be, right? And if we actually want to get to SQL Server in containers, maybe going the route of converting to .NET Core makes sense, right? And it also comes down to the talent and skill of the developers and the administrators, right? And that's actually another important point. I do this a lot, I digress here, right? A lot of the skills that IT teams have today are, you know, virtual infrastructure administration, day-to-day operations, things like Ansible, stuff like that. And then this new technology pops up, and Kubernetes came like a freight train, basically, right? I've been in the industry a long time, and I've never seen anything change the industry as fast and be as impactful as it has. So what I'm hearing when we talk to IT directors and CIOs, they say, look, we're actually now in a weird spot where all the young new folks coming out have this new set of skills, but I've got a whole team of very mature, seasoned people that have a lot of knowledge. How do I upskill them and get these folks working together? So it actually becomes a retention and a how-do-I-attract-new-talent issue, right? So back to the original question about performance. Importantly here, as I said, it's the same set of bits. So for a properly constructed virtual machine that's configured the same way and has the same hardware, you will have performance parity. And interestingly enough, we benchmark internally, both compute- and memory-intensive benchmarks, but we benchmark database workloads as well, right? And what that means is, I run a benchmark on RHV and get a specific performance number, and then I compare that to running it in VMs on OpenShift. And they have to be within like 5%, or I make it a release criterion. Like I say, okay, dev team, it's slower on the new stuff, you've got to go fix that now. And generally our CI/CD pipelines will pick that up, so it's almost never still an issue by the time I see it. All right. So we do have some great questions in the chat, specifically there's one about HA, but I would like to hold off for a minute on that one and maybe pass the mic back over to Jafar to finish out our demo, and then we can do some more Q&A from the audience. Yeah, sure. So just to illustrate what Peter has been mentioning regarding this notion of transforming your applications. So it's all about giving you different possibilities and not necessarily dictating a specific approach here. So as we saw, we have the application running, connecting to the SQL Server database running inside the VM.
But I have also deployed a SQL Server running inside a container, using the Microsoft container image, and it's all running now within OpenShift. So what if I wanted my application to consume that database instead of the other one? It's very simple. The old Service that we had that points to the VM was called sqlserver, but I have a new one that points to the container, that you can see here, called mssql. So what we're going to do is just go back to the application. I'm going to go ahead and change the SQL Server parameter here. And as I save that, OpenShift will notice that there's a change that has been made to my configuration, and it's going to trigger a deployment of a new container to basically connect to the new SQL Server that runs within the container, and then it's going to shut off the other one. And if everything goes fine, when I refresh my guestbook here, I should have an empty database, because now we are pointing to the new one. And there we go. Now I am directly connected to the SQL Server running within the container and not on the VM anymore. So as you see, there's very little to change. The networking is managed exactly the same way. I am now talking to a container rather than to a virtual machine. So that gives you this flexibility to be able to slice your applications into more specialized microservices, maybe, and start taking those services out of the VMs and putting them into containers as it fits. But again, in my opinion, you should never do that just for the sake of doing it, for the technology part of it. You should always do it if there's value in that transformation and it's part of your modernization journey. You should assess your application, you should see which transformation provides more value, which transformation fits within your transformation journey, et cetera, and then decide the best path that you should follow for those changes. And I'm sure Andrew Sullivan and myself, who come from a little bit more of an operations background, twitched a lot when we saw the completely empty database come up. Of course, one could migrate your data from one to the other and make the switchover. Yeah, sure. So we didn't want to make this show about databases, but we do provide projects that allow you to basically even live-sync the databases. So as I'm writing to the old VM, I can hook up something like Debezium, which will allow me to sync the data between the two databases, and once I decide that it's done, I can then cut off the connection. But yeah, the goal here was not to give you architectural lessons about how you manage that, but just to show the transition from the VM to a container. Excellent. And it was super fast to recognize that and make the change. Yeah. Speaking of changes, that kind of thing, we do have a question on HA as well in the chat. So, Peter, let me find it in the chat. Stephen Reeves asks, have we talked about HA VMs? If a Kube node crashes, does the VM auto-migrate to a new node, and is there downtime? The answer is yes. And yes. So let's be a little more precise here. Like I said, if you look at the other platforms like RHV and OpenStack, there are really two types of HA. One is infrastructure high availability, which is, hey, a physical host goes down or it's misbehaving, right? It's really equivalent to a health check: the pod's unhealthy, I'm going to kill it, right? You have to fence the host, make sure the VMs there are powered down and not running anywhere else.
So there's a whole bunch of storage stuff going on. That concept does exist in Kubernetes, but Kubernetes is different, right? So normally, the time for, hey, I figured out that a node is unhealthy, is somewhere around five minutes on average, right? Which is not good if you're serving an application to customers, right? The cool thing is, like I said, we've got a lot of smart people at Red Hat, and we also work with the contributors upstream, and there are newer technologies that are advancing, right? So there's a machine health check that's available today, where when you deploy an IPI cluster, an installer-provisioned infrastructure cluster, it actually turns that time down to about 60 seconds, right? So the machine health check, which is an operator in OpenShift, will say, hey, I've noticed this node, this physical host, is not responding within 60 seconds. It does all the right things, where it says, hey, I'm going to make sure none of the VMs are running, and then I'm going to start those VMs on other nodes, right? There's newer technology under development that will make that a much tighter timeframe, right? So that's the idea of, hey, I'm just going to put the thing in a VM and not worry about it; I'm going to let the infrastructure worry about it. And that's one of the solutions. The other one is application HA, right? So just as Kubernetes has an idea of where parts of it have quorum and what's running and what's not, there's the idea of using something like Pacemaker, or, excuse me, the Red Hat Enterprise Linux high availability add-on, which is based on Pacemaker, where there are things inside these VMs that talk to each other and check for membership and liveness. You can actually run that stuff today on OpenShift in VMs, right? So if you've got an HA Pacemaker cluster somewhere else and you want to migrate it to OpenShift, you can do that today. There are other types of application functionality that are slightly more complex. So things that we know work today would be SQL Server Always On availability groups, right, which use file share witnesses and stuff like that. That will work today easily. Things that are a little more complicated, like Windows Server failover clusters, right? We've got some work to do there. And, you know, sort of the, I don't want to say the granddaddy, but sort of the Oracle RAC type of cluster, there's probably some more work to do there to see if you could run that on OpenShift. So think of it as incremental. The answer is we have a solution today, and it's going to get better over time. And so correct me if I'm wrong, Peter, but this is a very similar management methodology to what we use in OpenStack or would use in the cloud, right? Which is, if a VM crashes, you need to restart it somewhere else and then get all the services and everything else going on it again. So there's no duplicate VM running somewhere with all the memory state that we just push over to; that's not built into this infrastructure stack. No, it's not. And we kind of skipped over something important. I get asked this question pretty regularly: hey, can I use live migration to move a VM from one host to another, right? Because Kubernetes doesn't have that concept, right? Kubernetes has the concept of, if something's running on a host and it's not behaving the way I want, I don't move that thing.
I just kill it and I start it somewhere else, right? Which for a virtual machine is not good, right? So we have this concept that's been there since the beginning, live migration, migrating all the memory state and the compute state from one host to another, that's already built into QEMU and KVM. And that's on the platform. So if you're doing an OpenShift upgrade and you have to restart your nodes, as you drain a node, you'll see pods go away, but the virtual machines will automatically live migrate to other hosts during the cluster upgrade, and then they'll migrate back. So that's an important point: if you want to take a node down for maintenance for whatever reason, or a VM, that'll happen automatically. If a VM crashes by itself, though, we will actually restart it on another node, if you have the policy set that way, and then inside the VM it'll go through a crash recovery process. Yeah. So application-layer redundancy and HA, that's a completely different pile of questions. Right. But if you have that technology inside of a VM, where VMs communicate with each other, it most likely will work, as long as it doesn't rely on any special storage stuff. I would encourage you to reach out to us and we can talk about it if you have questions. There's one other thing. We kind of skipped over the database; you said you didn't want this to be a database show. There was actually an OpenShift Commons session last week where one of our largest customers, Sahibinden, went and presented their environment. So I put an OpenShift Commons link in there, if you can share that. And let's actually go back to the presentation here. So, all right, you guys have been talking about individual VMs, and you're kind of talking about a three-tier app that's really got three things. What does this look like at scale? Right. So Sahibinden is a web property. It's kind of a cross between an eBay and a Craigslist. It's very user intensive, lots of users on it. It's the fourth-busiest site in Turkey, right? So after the usual, hey, Google, Facebook, YouTube, it's Sahibinden. And they were on a traditional architecture on, let's say, another virtualization platform that was rapidly going out of style. But the good news is they had a lot of automation around it, right? They could build things easily, and the team was very small but talented. And what they did is they deployed an OpenShift bare metal cluster in their new data center. And they used a different technique. They didn't actually bring those old crusty VMs over. What they did is they pointed their build pipeline from one platform to the other and deployed everything brand spanking new. They're a Linux shop, right? So JBoss, Java, Linux, MySQL, Kafka, and Mongo, deployed as new applications and VMs over there, and then they used database and application replication to pull the data over onto the new platform, right? And that's yet another option you can use. You're not forced to just bring the whole crusty VM over if you don't want to. And now that they have this, and you see the little green boxes here, that's actually an OpenShift Data Foundation storage cluster, right? So there's high-speed local storage on a good chunk of these nodes that is serving storage to the virtual machines themselves. So they've done essentially a lift and shift. It started with probably about 1,400 VMs.
They've actually built out a second data center and used that same technology. And last time I looked, they're probably on the order of 3,000, they're north of 3,000 VMs in their infrastructure. The other cool thing here is, and again, all the Kubernetes-ness of load balancers and non-disruptive changes to things, that's all in play here. That little green box over on the very right, that was their legacy storage vendor, where they said, hey, we want to make sure, like, yeah, sure, we trust this ODF stuff, but we want to have a belt-and-suspenders approach, and we're going to use this other thing just in case, right? But they've not put any VMs or any data on that old traditional array, because they've been so successful with this particular bare metal deployment that when they built out the second data center, they just don't have that legacy storage. Very nice. How are we doing on time? We've got about five minutes left. Yeah. But if we're at a natural breaking point, we can certainly... Well, let me see if there are any questions that we missed, or comments. And if you can do me a favor, I dropped the Commons link to the database stuff, and you can actually see, by the way, I stole this picture from those guys. You'll notice that PMs tend to do a lot of cutting and pasting and acquisition of other stuff that shows our point. You can share that, and people can go watch the video and the presentation from our customer directly. One other customer I do want to talk about is even larger than this one. It's a large global telco. Sahibinden is about 50, 60 nodes in each site; this other customer is over 100 nodes in a single OpenShift bare metal cluster. And about 18 months ago, they said, hey, we're going to build this new web application, it's going to scale up, and it's all going to be containers. And it turned out that for two important infrastructure pieces, the load balancer and the database, there just weren't container versions of those. And at that point, it was like, okay, let's wait till those show up. And what we said was, no, there's the opportunity to run that technology, that load balancer, in a VM on that same cluster, and the database too. It's Oracle's carrier-grade database, which is based on MySQL. They were running that in a virtual machine on this platform. So again, it's this composite... Previously we were using the term hybrid application, but that's overloaded. I like the composite term much better. So with the database in a VM, the load balancer in a VM, and everything else in containers, they were actually able to accelerate and prove their architecture in the 18 months before the container version showed up. And then I checked in recently with that team, and they're like, oh, we're not using your stuff anymore. And I was like, that's awesome. Because the whole point isn't to run VMs in Kubernetes long term, though you can do that if you insist. It's a transitional technology to get you from a state where I've got a VM that has all this important data and logic inside of it, and I'm going to replace that with something more cloud-native and container-based, and then I'm just kind of gonna... I'm going to take this database and take it to the happy farm upstate, and that's where it's going to go. So I was actually happy they stopped using my stuff, because they were able to get to all containers. So you heard it here first, folks. Peter's goal is to work himself out of a job.
That's true for just about any product manager that knows what they're doing. And by the way, that's an important point. I tend to come on these things and I'm sort of the pretty face that represents the product. I am firmly backed by a very strong team, people who work in the upstream community and who work at Red Hat: developers, QEs, documentation people. These folks are the real heroes on my team. I'm just the guy that catches the arrows when people ask uncomfortable questions. But I'm happy to be part of this team. Great. Well, thank you, Peter, for joining us today. It was enlightening and educational. Jafar, do you have any final thoughts before we sign off? No, again, thank you, Peter, for joining us. I think it was a very nice session and we covered a lot of ground. And thanks to everyone who's been watching the show. And of course, as always, like, subscribe, and share The Level Up Hour. All right, everyone. Bye-bye.