It's going to be really, really bad. How many people have ever run a production cluster of Kubernetes? Oh, yeah. OK. OK, so Derek, you're ruining it. OK. You ready? You mic'd up? I think so. OK, so we had a talk scheduled that was features, features, functions, kind of the same one we always do. We decided to blow that up and do something different. So what we're here to do today is, well, Chris stole all of my architect thunder. I had all these great slides, and I was like, Chris did them all. I'm going to do something different. So what we're going to do is a demo, live on stage, on terrible conference Wi-Fi, although you have a hard line. OK. Hope it works. We hope it works. We're going to do a live demo of OpenShift 4, and we're actually going to go into why we think this is so important. So Chris talked; we're going to show. So Mike, you want to kick us off? Thanks. Next slide there.

So everybody look to your left and then look to your right. I seriously want you to remember when you first saw OpenShift 4, because what Derek is going to show you today is going to be incredible. We're calling this the mother of all demos. You're going to want to build a personal memory around this. So definitely bring it back to yourself. As I said, we're going to show it to you. We're going to get through these slides as quickly as we can. But what built it? Why did we make the decisions we made? If you hit the clicker here: was it the over 1,000 customers that we now have on the Kubernetes platform that we know as OpenShift? The next slide, or the next? Or was it the 32,000 support issues that we took from the customer base? It was because of the 32,000 support issues. Next one: could it have been the over 17,000 commits we gave back to the Kubernetes open source community? Most of those came from the 32,000 support issues. Next slide. Maybe it was the over 500 professional services engagements we do every year to bring our customers to production on Kubernetes. That's the other 15,000 issues. Hit it again. It could have been the 122 releases of Kubernetes that we have given to the market, or the $250 million of Clayton's bonus we just spent on buying CoreOS. I like to say a quarter of a billion dollars; it sounds more exciting. Or the 4,000 pods per day that get deployed on OpenShift.com, on our Starter, Pro, and Dedicated hosted services.

So again, we're going to go through these slides fast. Honestly, if you're here and you don't believe in Kubernetes, I don't know why you're here, other than to just be a hater or something like that. Kubernetes was a big multiplier for application operations. We talk about serverless. We talk about DevOps. We talk about all of these things. The end goal is to change how we do software, to make it easier to be faster, to be more responsive. As Chris pointed out, it's to make things automated. So Kubernetes has evolved, and OpenShift has evolved with it. It's been about building applications and running applications. It's not about computers. Computers are just a thing; they're like the electricity that runs through the wires. It's about software. Software is what we care about and what we're trying to help build, because software runs everything. I'm pretty sure that none of us would have made it here without software. We probably wouldn't have been able to get in the door. The doors are probably tied up to some crappy industrial IoT thing that would have just failed and locked us out of the conference center.
So if Kubernetes was a multiplier for applications, our goal with OpenShift 4 is to be a multiplier for Kubernetes. And so, continuing, we were involved very early in the community. It was about stabilizing Kubernetes, making it enterprise, which various people will tell you is the worst thing in the world to do to something, and which really means it actually works and it's probably secure. Although, last week's CVE... even then, this is the reality of the world we live in, right? There's so much software. Maybe this is a little too down, but I'm going to say we may never be safe again unless we internalize that we have to get a lot better at patching and rolling out software updates. And that's part of that CoreOS DNA. So a lot of our mindset is: how do we do Kubernetes in a way that we can build the future on top of it? Which means, when CVEs come out, you need those fixes before somebody else gets to them.

As you know, we brought CoreOS in, and we released Container Linux at Summit as Red Hat CoreOS. It really is a foundational component of this new stack, in that it aligns us to an immutable infrastructure that is extremely lightweight. It means no more snowflakes. It means that we can leverage a more impactful orchestration for you that we were sort of blind to before. And we're going to see that today in the demo. And so Chris really hit on automated operations, but I don't know that I would have picked such a dystopian picture to put up there. Refineries are often the bad things we talk about. But we talk about operators, automated operations. Anybody in this room who believes that we won't need to operate software, please raise your hands. Yeah, I didn't think so. So going forward, software has to be run and managed. What's going to do it? Well, we could do it by hand, which is what we've been doing pretty much since the beginning, or we can get better at automating it, which we've also been doing since the beginning. So this is just a natural evolution of where we're going. CoreOS gave it a really good name. Thank you to Reza and Brandon and Alex. The name "operator" is about people and computers working together. It's not just one or the other.

So that was all the slides. So now, drum roll please, for the first time ever... Yeah, yeah, yeah, yeah. We're going to do a live demo on stage of OpenShift 4. And all of the things we talked about over the last year at Summit and in the open source communities, we're going to show you how we're taking these pieces and putting them together. And it's probably going to break along the way, but that's why we do live demos, because it's exciting. Well, thanks for that vote of confidence. You're going to do great. So far, so good. Yeah, so I think I'm probably the man here, and the platform is going to be the dog. Hopefully it doesn't bite me. Woof, yeah.

So the demo we're going to do today is in three parts. The first part is going to talk about your day one experience. Since the merger of CoreOS and Red Hat, we've reevaluated how we want the initial experience of deploying the platform to be. And along with that, we've gone and built a new installer. So this is going to be a bit like a cooking show: I'm going to try to put a cluster in the oven, and hopefully a few minutes later one will come out. The new installer is designed to help users across a wide range of skill levels, from the novice user all the way up to the advanced user.
And what I'm going to walk through here is the very novice user, and then we'll talk about the advanced user afterwards. So we have a new tool, openshift-install. And it's got a few commands. You can create a cluster, you can destroy clusters. You can find information, dependency graphs, about how this installer works. Dependency graphs, really? That's because some of our friends at CoreOS love graphs. So let's explore this create cluster command. What can I do? The most naive user is going to come here and say, well, what I really want is a cluster. So let's go and do that now. And it's going to ask me some information. This is a guided, wizard-like experience. So it's going to say, OK, you want to go and provision a cluster on some infrastructure; tell me which SSH key to use. And you can arrow up and down through the choices. And then you need to... This is the demo, guys. It's the CLI interface. We know everybody in this room loves a CLI interface. Yeah, it's amazing.

So the next question, you see the top-level question it's going to ask me: what's my domain? And now I have to type a few things, so hopefully this goes well. And I will name my cluster commonsday. And now it gets to an interesting question. It's asking, what platform do I want to install on? Right now, the experience we're going to walk through is using Red Hat CoreOS on AWS, and we'll grow the installer to support additional platforms over time. So make sure I'm on the right thing. And because I chose AWS, it's now going to ask me, well, where do you want to install? And so... Don't pick London. We'll probably lose a transatlantic line if you do that. So this is actually installing in our developer accounts, and hopefully I don't hit quota issues. So US East 2 is going to be awesome, and this is going to go great. And like that, that's all it did, right? So now the cluster, in a few minutes, is going to start provisioning. Basically, it's going to pick some appropriate defaults and start provisioning infrastructure and create the cluster on my behalf.

A little color: you're going to see it say it's using Terraform to create this cluster. That message is going to go away. It's not that we don't care about Terraform; it's just a detail, right? We're trying to focus this experience, what we're showing, on ensuring that you succeed in installing a cluster every time. And some of the choices that you may be used to, historically, are maybe things that you shouldn't do right up front; you should do them later. So we'll get to that as we go. Right, so I didn't provision any computers here to use the novice installer flow. It's doing it all for me in the background.

So while that's cooking, let's start looking at the more advanced path. The new install tool works a bit like make. It has a series of targets that get executed in a particular graph flow, and depending on where the operator wants to come in and tweak their install, they can do just that. So rather than going direct to create a cluster, let's look at the other things that we can create. That wizard you saw, where it asked me a few basic questions to figure out where I wanted to install the cluster and how, that's creating what we call our install config. So let's step through that and take a look at what that is. OK, so we're going to go through the same flow again and give it the key. Uh-oh, typo. It's important to get that base domain right. Is it going to fluster you if I sit here and point out all your typos? Yes.
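For reference, the guided flow described above looks roughly like the terminal sketch below. The prompt wording and the values are illustrative (the base domain and key path here are made up), not a verbatim capture of the demo:

    $ openshift-install create cluster
    ? SSH Public Key   ~/.ssh/id_rsa.pub          # pick from a list of keys
    ? Base Domain      devcluster.example.com     # your DNS domain
    ? Cluster Name     commonsday
    ? Platform         aws                        # only AWS in this walkthrough
    ? Region           us-east-2
    INFO Creating cluster...
    INFO Waiting up to 30m0s for the cluster to initialize...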
OK, so rather than what you saw previously, where I answered my questions in the wizard and out came a cluster being cooked, this didn't go and cook a cluster yet. What it instead did was go and create an install config file. And I'm going to abridge what I show here so you don't see some data, in case this is going out on the internet. It's a secret. Yes. So this install config file is just a YAML file, right? The information that was collected in that wizard you'll see here, as well as some initial defaults that were chosen. Behind the scenes, because I chose to install in AWS in a particular region, it's going to create me three masters, and by default the installer is going to provision those masters across the set of availability zones exposed in that region. And then the default cluster size for your worker node pool is three replicas. Now, I could come in as a more advanced operator and tweak this install config. I could say I want workers on GPU node types, or some other set of machine pools instead of just master and worker. I could set some default sizes for them, so I don't just have three workers; I could have 10, 20, 30. And then, depending on the platform I'm installing on, this platform section lets me choose more particular metadata about the host environment I'm installing to, for example the instance types that might be used. So once I tweak this install config and get it how I want it, the next step in the install flow is to create some Kubernetes manifests.

I would also say at this point, I bet there are a lot of people who look at this and say, wow, no, there are like 7,000 other things that I also configure. Part of our goal is that many of those 7,000 other things actually aren't things you should have to deal with. Sometimes they are; sometimes they're things we want to tweak. We work with a lot of sophisticated customers who need tuning parameter X and tuning parameter Y and need to drill down into these things. One of our key principles and philosophies with OpenShift 4 is: don't give you that choice until you actually need it. We want people to be able to script installations and do the crazy complex stuff if they need to. But up front, we want you to succeed first and tweak later. Yep, and in part two we'll talk through some of that day two operational stuff.

So, a lot of people might be familiar with how Tectonic and CoreOS's distribution worked in the past, and they basically self-hosted a Kubernetes control plane. In OpenShift 4 we're doing the same basic thing, but we self-host a little differently. The way the installer works is it takes that install config and then creates some Kubernetes manifests that say how the cluster should be applied. So it read that install config YAML file, and now it's created some manifests that are literally just regular old Kubernetes manifests in directories. I have manifests for some default namespaces that'll appear in the cluster, default manifests for some config maps, and various information like that. And I could come in after the fact and tweak some of these things if I wanted to change how the default Kube manifests work for that corresponding install config. Now, the final piece of how the install works, once you have that install config translated into Kubernetes artifacts, which are just deployments and secrets and namespaces, et cetera, is to create a set of ignition configurations. So, if you are familiar with Ignition, can you raise your hand in the room?
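Before we get to Ignition, for reference, a rough sketch of the shape of that install-config.yaml. The exact field names vary by installer version, and the domain, instance type, and key here are placeholders, not the demo's actual values:

    apiVersion: v1
    baseDomain: devcluster.example.com      # example value
    metadata:
      name: commonsday
    controlPlane:
      name: master
      replicas: 3                           # three masters, spread across the region's AZs
    compute:
    - name: worker
      replicas: 3                           # default worker pool size
      platform:
        aws:
          type: m4.large                    # instance type, illustrative
    platform:
      aws:
        region: us-east-2
    pullSecret: '...'
    sshKey: ssh-rsa AAAA...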
OK, so not everyone is familiar with Ignition. If you were a CoreOS Container Linux user, Ignition is an agent that runs in the operating system very early in boot, and it can read from a remote location to figure out how to configure that immutable host before it's fully booted up. I like this. I would say Ignition is cloud-init that doesn't suck, but since we're being recorded, I won't say what I would actually say about that. And part of Ignition is that we want to connect machines to a central brain wherever they are. It's not really about the cloud; it's about the software that runs on that cloud, right? You still have to run an operating system somewhere, whether it's you running it for somebody else or somebody else running that operating system for you, and Ignition is that bridge between a machine and a cluster.

So remember, the goal here at the end of the day is to get a cluster that's running an immutable operating system. For Red Hat CoreOS, you need to be able to tell it how to configure that operating system on first boot, and it collects that configuration from Ignition. So basically, in that previous cooking show, where I created a cluster, the wizard went right through the defaults and created some initial ignition configurations that tell the machines how to bootstrap the cluster. So we'll do that here and we'll take a look. This ignition configuration step has gone and read the Kubernetes manifests that were created previously. And actually, I missed something in my last step: that manifest flow, besides creating the Kubernetes artifacts, also created some default TLS pairs for how certain components in the system are allowed to interact securely by default. That security that we talked about. Thinking about this, a big point of feedback, probably of those 32,000 support issues, I think something like 31,000 of them were "my certificates expired." So one of the things we're trying to do from the very beginning with OpenShift 4, not just at the cluster level, not just at the application level, not just at your ingress level, is to start building out that mindset where you actually have a cluster that can rotate certificates for you, because every time a certificate doesn't get rotated, a puppy loses its wings. So we want to get to that glorious future where no one ever gets a certificate-expired warning at 9 a.m. on a Friday because they set their alarm for daylight savings time versus regular time.

With that in mind, the installer here, at the end of the day, to bootstrap the install process, is basically producing these three ignition files that tell the various nodes in the cluster how to boot and how to configure themselves automatically. Because we run a self-hosted control plane, you kind of have a chicken-and-egg scenario, right? We need a Kubernetes cluster in order to run a Kubernetes cluster. We overcome that chicken-and-egg scenario by having our bootstrapping machine, and that's what this bootstrap ignition file is. And just so you can see, it's not too complicated. Another thing is, the point of all of this is that once you have this running, you don't have to go back and reinstall it. A lot of what we're trying to do is build in that idea that if you can change it live, and this was I think something that Tectonic did really well, you can recover from issues, you can fix problems, you have feedback. You don't have to go reinstall those clusters.
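To give a feel for what those ignition files contain, here is a heavily trimmed, illustrative sketch of the shape of an Ignition config. The real bootstrap file is much larger; the unit name, file path, and contents are placeholders, and the field names follow Ignition's 2.x spec:

    {
      "ignition": { "version": "2.2.0" },
      "systemd": {
        "units": [
          { "name": "kubelet.service", "enabled": true }
        ]
      },
      "storage": {
        "files": [
          {
            "filesystem": "root",
            "path": "/etc/kubernetes/kubeconfig",
            "mode": 420,
            "contents": { "source": "data:,..." }
          }
        ]
      }
    }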
And while there are a lot of people out there who would love to have immutable clusters, I think, based on the evidence, a lot of us are going to have clusters that we're going to want to run for five or 10 or 15 years. And so some of this is building in, early, the foundation that if you can't manage it on the cluster, if you can't change it after the fact, it's not worth doing. Yep. So just to demystify Ignition: it's just serving a JSON payload that's read by the machine on early boot. And if you look through any ignition file, it's basically saying: these are the default units I want to apply, and these are the files, where I want them located, and on what file system. So this install process that we walked through was able to create enough of a baseline initial ignition configuration to boot up a cluster. And so time-to-JSON for this demo was about, what was that, 10 minutes. So we promise, from here on out we're going to be a lot less heavy on the YAML and the JSON, even though I'm sure there are people in the room who really love that stuff.

OK, so previously a cluster was cooking. We'll see where this is going... it's still cooking. Ignore the times; it really doesn't take this long, even though it says it's going to wait 30 minutes. The way the installer works is, by default, it'll provision a bootstrap machine and a set of masters. So I asked for three masters; it's going to go and provision me three masters. That bootstrap machine boots, reads that concrete ignition configuration that we created earlier, and then knows how to serve up ignition configurations to the other nodes in the cluster, right? So that bootstrapping machine will serve the ignition config for the masters as well as every worker that comes up. From that bootstrapping machine, we know how to go and create an etcd cluster and a temporary control plane, to then know how to install the real control plane on the masters. And then we get to a point where we just install some operators in the cluster, the cluster becomes self-aware and self-managing, and we can tear down this bootstrap machine. Does that mean that the entire cluster is now made out of operators? Yes. That leads me into part two.

So the installer, once you run it, you never use it again to do an upgrade. Basically, the entire cluster now is self-aware, self-managing, and entirely automated. So if you're an existing v3 OpenShift user and you were using OpenShift Ansible to both provision and upgrade a cluster, the v4 experience now is basically: you use the installer to install a cluster, and then you depend on Kubernetes-native applications, like operators, to automatically upgrade and manage that cluster. So just in summary there: you've given us a single command-line install experience for Kubernetes. Yes. You've given me a template that I can get pretty sophisticated with myself if I want to enhance it. And the thing comes up by itself. I don't have to do anything. It'll be done. And you've separated the install tool set from the upgrade tool set. Now it's install, and then upgrade. Everyone should be super thrilled about that. Oh my goodness. Quick, keep filling time until the cluster comes out. So anyway, the cluster we're baking will take about, I don't know, 10 to 13 minutes in that AWS dev account. So hopefully later in the cooking show we'll see that cluster. So let's get to part two, which is: I have a cluster. How do I operate this thing? We talked earlier about operators. Operators are really just a fancy term for automating things.
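Before leaving the installer behind, the day one flow we just walked through boils down to roughly this sequence of staged targets. This is a sketch; the target names are the ones described above, and the comments summarize what each step produced in the walkthrough:

    $ openshift-install create install-config     # writes install-config.yaml, nothing provisioned yet
    $ openshift-install create manifests          # renders the Kubernetes manifests and default TLS material
    $ openshift-install create ignition-configs   # emits bootstrap, master, and worker ignition files
    $ openshift-install create cluster            # provisions the bootstrap machine and masters, then the rest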
Kubernetes does a great job managing and deploying your own applications. But there's really no reason why Kubernetes can't do a great job managing and automating the deployment of itself. Where Kubernetes was in v1 versus where Kubernetes is now in v1.13, the world has evolved, and Kubernetes can now support the right primitives for us to actually use it to drive itself. And I want to take a second there: Kubernetes 1.0 was a very simple thing. We'll talk in some of the keynotes this week about how Kubernetes has changed over the last four and a half years. It took a long time to get to the point where we believe that all the pieces are in place in the community and the ecosystem to where we can do these kinds of demos. And that work is work that the Red Hatters and Googlers and CoreOS people and people from VMware and Amazon and 1,000 other companies have contributed to. So this is a Commons meeting; I wanted to pause on that statement that Derek made. It took us a while to get here because big, important things take a while to happen. I know that some of you have gone through those 32,000 support issues, and I'm sorry. Part of our apology to you is to come up here today and say that that was actually worth something that's going to make the next five years easier.

So in the interest of time, Clayton's going to talk a little less and let me get to my demo. So this is the admin console. I'm logged in as a temporary admin user, and this is the console you'll see by default after the install finishes. The console is deployed, you can log in with the credentials it gives you, and bam, you're ready to go. The critical operator I want to talk about today, the one that drives how OpenShift's distribution of Kubernetes can automate itself, is what we call the cluster version operator. That is the top-level operator that knows how to manage everything else. And so if we click through, and I'll do a little less command-lining and a little more visual here, I can look at my cluster. And this is reading what we have, which is a cluster version resource, and it's telling me what the current version of my cluster is. This is literally running master in production, in a demo. So my current version is what was deployed. And, hopefully, like I said earlier, my previous creation of a cluster just finished. And from that, I can see an update source, and it's telling me, hey, potentially updates are available; tell me which one you want to apply. And I could roll out a new upgrade of my entire cluster just by applying a new version to this cluster version operator, which will then know how to coordinate the upgrade of everything else. The cluster version operator is itself just a Kubernetes deployment. And we're going to get a little geeky here. So Derek, I didn't have to log into the Red Hat Network and figure out if there was a new update? No, you didn't have to do that. Come on. This idea of delivering updates to the cluster, this is the big trick. It's kind of the magic trick: the cluster manages itself and knows how to pull updates, and it knows how to safely apply them. That's just reconciliation. So if I went in and deleted half the things on Derek's cluster, that operator is going to be like, I can handle this. I got it. And it'll bring you back to where you were. Kubernetes is about reconciliation. You say what you want, and we make it happen. That infuses everything that we're doing in OpenShift 4.
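For anyone who prefers the command line to the console, a sketch of what inspecting that cluster version resource might look like. The column layout, API group, and version strings here are approximate and illustrative, not exact output from the demo cluster:

    $ oc get clusterversion
    NAME      VERSION      AVAILABLE   PROGRESSING   SINCE   STATUS
    version   4.0.0-0.x    True        False         5m      Cluster version is 4.0.0-0.x

    $ oc get clusterversion version -o yaml
    apiVersion: config.openshift.io/v1
    kind: ClusterVersion
    spec:
      desiredUpdate:          # setting this is how you ask the CVO to roll the whole cluster forward
        version: 4.0.0-0.y
    status:
      availableUpdates:
      - version: 4.0.0-0.y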
So I messed up my environment right before demoing, and so it's telling me I messed something up. The top-level operator is trying to apply something that isn't applying. Don't worry, this doesn't always happen. But like I said, I was running master right beforehand. But when I look at it... It stops safely. That's the important part. Yes. And it's telling you something about it. So if I describe the cluster version resource, it's telling me how it's trying to converge to my desired state: I want to run this level of OpenShift, and it's making a series of coordinated changes to the cluster, trying to converge to that desired state. At the end of the day, it's managing a set of what we call second-level operators, which are basically managing the Kube API server, the controller manager, the OpenShift API server, et cetera, et cetera. And if any one of those operators is having trouble converging to its desired state, that gets bubbled up very clearly to the admin, saying, hey, there's a problem. Right now I have a transient problem on one of my masters, so please ignore that for the moment.

The way the cluster version operator works is it takes a release payload. A release payload in OpenShift v4 is just a container image that knows how to start the cluster version operator and then has a set of manifests that know how to install every other operator. So we'll take a look at that. And you can see there's a payload that says, this is the image I'm running that's defining what this cluster is. And we have a new command, oc adm release info, which I can feed that payload and find out how that cluster was installed. And hopefully... come on, conference wireless. So you can see here, this is telling me, OK, you're running a 4.0 payload of a particular version. And that release payload says, hey, these are all the operators and images I'm going to install and their corresponding container hashes. All the cluster version operator is doing is applying the corresponding operators for those components into the cluster. I can get even more detailed than that and say, hey, what commits am I running? And my zoom is really big, but you can see my operators come from various components within our Git repos, and you can see the individual versions they're running. Excuse me.

So like I said earlier, the top-level operator we run is the cluster version operator. It's just a deployment that's running on the Kubernetes control plane itself. And all it's doing is acting like a replica set, deployment, or stateful set controller: it's constantly trying to converge to a desired state. So it's not doing anything too crazy. We can actually go look at it and oc rsh into that container. And when I say we use Kubernetes to manage Kubernetes, this is the set of YAML artifacts describing the resources we deploy that say how you make this version of Kubernetes. So for every operator we ship, which includes all the regular suspects you'd expect, an operator that can manage the network, an operator that can manage DNS, an operator that can manage certs, the API server, the scheduler, yada, yada, yada. Basically, this release payload that's inside that image is telling the cluster version operator what set of artifacts to go and apply to the cluster to converge to where you want to get to, to do an upgrade. Real quick, time check, Diane. I know we're running a little bit late. How much flexibility do we have?
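A sketch of what interrogating a release payload looks like. The registry path and output layout here are approximate (the repository name and component list are illustrative), but the command and its --commits flag are as described above:

    $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.0.0-0.x
    Name:   4.0.0-0.x
    Images:
      cluster-version-operator          sha256:...
      cluster-kube-apiserver-operator   sha256:...
      cluster-network-operator          sha256:...
      ...

    $ oc adm release info --commits quay.io/openshift-release-dev/ocp-release:4.0.0-0.x
    # lists, per operator image, the Git repository and commit it was built from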
And does everybody in the audience want to keep seeing this exciting demo? We're going. We're going. Just speed it up. Can everybody live without coffee for, like, an hour? Can we bring coffee around for everybody? OK, I'll try to go quick. So the point being, we have a top-level operator that just applies Kube artifacts. So in the same way that you roll out new versions of your applications on the platform, that's literally how we roll out new versions of Kubernetes. And Kubernetes is really good at that type of thing. So if I go and look at the logs for the cluster version operator, it's not like you only do an upgrade when you ask; we're always doing an upgrade. We're always trying to ensure that if the cluster has drifted out of your desired state, it gets put back into line. So just like the man getting bit by that dog, the cluster version operator is stopping me from going and changing one of these things. I can't go set a flag on the Kube API server; it will come back and say, hey, don't do that, I'm going to set it back the way it should be, unless I go through a formal interface for it. So that's the cluster version operator in a nutshell.

And we've gone and spent a lot of time building a set of second-level operators that manage the core control plane. Everything. Everything on the platform is literally an operator. From the CNI plug-in, to the storage plug-ins, to the Kubernetes API server, literally everything. And for the last year, an entire year, 30 scrum teams have been porting the Kubernetes framework into operators and baking in their knowledge of how those framework components should behave. And I'm not going to lie, some of these are really stupid simple, because in production environments and distributed systems, stupid simple works. And so some of these are doing the same things you might do in a control loop as an operator: you run a control loop and you say, here's what I want to have happen, and I'm just going to do that every 30 minutes until the end of time, with a cron job if you like to live dangerously. And so this mindset, again, it's all about making sure that we know what's happening, we react to it, and we test the living daylights out of it from the first second it's installed on the cluster.

So when OpenShift v3 first released, right when Kubernetes 1.0 came out, the state of Kubernetes at that time and the state of where OpenShift v3 needed to be required us to package and deploy Kubernetes slightly differently. So you had one uber start-master command, or one uber start-controllers command, that combined your Kube processes with your OpenShift processes. But that let us go and do things like RBAC and some other things. Kubernetes has evolved. We spent a lot of time at Red Hat changing the upstream so that we can get clear layers between our software and the upstream. So what you'll see in v4 here is that we actually deploy the Kube API server like any other pod in the cluster. We deploy the scheduler like any other pod in the cluster, and same with the controller manager. So the base control plane itself for Kubernetes is just running as pods, and you can see them here. And then there's an operator that's actually managing that, to ensure it's where I want it to be. What's cool about these operators, for existing Tectonic users, is that we've re-looked at how that worked and made it so we don't need something like bootkube recovery in a disaster scenario.
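A quick sketch of how you might see those second-level operators and the control plane pods for yourself. The resource and namespace names are roughly what ships in OpenShift 4.x and are assumptions here, not output captured from the demo:

    $ oc get clusteroperators                          # each second-level operator and whether it has converged
    $ oc get pods -n openshift-kube-apiserver          # the Kube API server running as ordinary pods
    $ oc get pods -n openshift-kube-scheduler          # the scheduler, same deal
    $ oc get pods -n openshift-kube-controller-manager # and the controller manager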
Basically, this thing is always going to run fine, disaster or not, which was really cool innovation from the team. Then the traditional OpenShift API server stuff that gives you all those nice developer pieces is just running as a separate operator, as a daemon set on every one of my masters. At the end of the day, an operator is just managing a set of custom resource definitions that describe how the admin wants their cluster to be configured. We have a lot of custom resources, and so for a lot of our operators, you'll see we have a DNS resource, an ingress resource. The admin interface by which you manipulate OpenShift 4 is just tweaking these custom resource definitions and then depending on those operators to converge the cluster to your desired state. Yeah, we've made configuring the cluster just another Kubernetes API operation. You can say what you want the state to be. You can run kubectl apply. You can put it into Ansible. You can put it into Helm. Doesn't matter. Configuration is managed as a declarative API, just like everything else.

OK. So who wants to see what Red Hat CoreOS looks like? No one. No one is excited to see what Red Hat CoreOS looks like. That's the coffee. They're running out of coffee. So right now I'm running a six-node cluster: three masters, three workers. And what I'm going to do here is going to pain some of the Red Hat CoreOS developers: I'm going to SSH into a master. I promise, in the future they say they're going to taint the node when I do that, but right now it's not there. And so you can see right now that the OS being reported is Red Hat CoreOS 4. So if I SSH into one of my masters, let's take a look at what is actually on this host. So you can see I'm in, and I'm running Red Hat CoreOS 4. Awesome. So to clarify any confusion, Red Hat CoreOS is a RHEL kernel with RHEL content. So if I go and look at what I'm running, I'm running a RHEL kernel. Hopefully I can say it again: I'm running a RHEL kernel. So what we've done is taken the RHEL Atomic work, taken the Ignition work from Container Linux, and got Ignition running on a RHEL kernel. So on first boot, in order to configure this machine, Ignition reached out to its home server and said, how should I configure myself? And this demo is slow because Derek's a slow typer. I'm very sorry. Nobody changed him, OK. You misspelled successfully. Well, I can't type. Either way, I promise, if I ran this journal command, you would see that on first boot Ignition launched before the whole OS was booted and said, hey, tell me the files I need to lay down on this immutable host to configure it. And it would have been good.

Now, just like RHEL Atomic or Container Linux, I am running an immutable OS. So as a non-root user, I'm going to go and try to touch a file called "bad", and this should tell me I can't do that. But I do have a writable layer, and I will go and try to touch "good". But I have no permission. That's also good. Security folks: SELinux is on and enforcing by default on Red Hat CoreOS. That's Dan Walsh in the back clapping. We're just pandering to the crowd at this point. That took a lot of work from the team to get working well with Ignition, so congratulations. Unlike RHEL Atomic and unlike Container Linux, we are not running the kubelet in a container. We're running it as just a native process on the host. And the kubelet, I mean, the reason is that we believe that the kubelet and the container runtime belong with the kernel. We've talked about this before.
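A sketch of poking at a Red Hat CoreOS host the way it's described above. The user name, paths, and exact commands are assumptions for illustration (the journal filter in particular is approximate), but they show the same checks: RHEL kernel, SELinux enforcing, immutable OS with a writable layer:

    $ ssh core@<master-ip>            # 'core' is assumed as the default RHCOS user here
    $ cat /etc/os-release             # reports Red Hat CoreOS
    $ uname -r                        # a RHEL kernel
    $ getenforce
    Enforcing                         # SELinux on and enforcing by default
    $ rpm-ostree status               # the image-based, immutable OS tree
    $ touch /usr/bad                  # fails: /usr is read-only
    $ touch /var/home/core/good       # succeeds: /var is the writable layer
    $ journalctl -b | grep -i ignition   # first-boot Ignition activity laying files down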
Those are things that have a machine config. And because this is really hard to type accurately... So we're running pretty long on time, Derek. All right. Anyway, point being, there's a set of files that I can configure, essentially, on the cluster that says how you get there. Now, that's the operators. If Clayton didn't talk so much, this would have gone great. Sorry. So that was the set of second-level operators.

Also in v4, we've been working on making sure that not only can we explore an operator pattern, but the applications you run on the cluster can also be delivered as a set of operators. So you'll see there's an Operators tab here. This is our OLM, the operator lifecycle manager component, which allows you, as an ISV or as an administrator of the cluster itself, to offer cluster services that other users can subscribe to and deploy in their projects. To do things like: if I want to get a managed etcd in my cluster, I can go and interface with the operator lifecycle management component, deploy etcd, and now I can interface with etcd using verbs and nouns like etcd cluster, etcd backup, etcd restore. Because we're short on time, I'm going to cut that. Awesome. It works. You don't have to worry about it.

Let's go to the final part of the demo, which is machine management. So in v4, we've adopted an upstream project that's been sponsored out of SIG Cluster Lifecycle called the cluster API component. And that has an interesting set of primitives for dealing with machines. So in the administration section here, you'll see some new interfaces. You'll see that there's a machine interface. What this machine interface is, is actually a Kubernetes resource called machine, and it is the base atom for describing a Kubernetes machine. By default, the installer used the machine CRD definitions, and this other component that knows how to instantiate those machines in your target platform, to stripe out my cluster. By default, because I was deploying in AWS in a particular region, you'll notice that everything is spread across the right number of availability zones. This machine interface, as I said, a machine is the base primitive. But just like pods, you have a machine set, and so I'm able to say, OK, in US East 2A, 2B, and 2C, I want a particular number of machines. And if I wanted to grow those machines as a day two operation, I could just say, all right, well, I want to spend a lot of money and get 1,000 machines. Click Save, please, right now. Save. No, I'm going to do this on demand, so it'll hopefully be cooler. I don't think we have quota for 1,000. As I said earlier, a machine resource is just a YAML definition, as shown below. This is being derived out of the upstream; we're adopting this portion of the project, and we're really excited to work with the upstream community to see this through to fruition.

So what I want to do is, I don't want to spend a lot of money unless I have to run something that needs it. Like I said earlier, everything in OpenShift, and I mean literally everything, is managed by an operator, including an autoscaler. So I'm going to turn on autoscaling for this cluster. There's a set of options I can configure for the autoscaler, but I'll just go with the defaults. We don't have a nice, pretty UI for this yet, sorry. And so by deploying a cluster autoscaler CRD, I've now enabled the cluster autoscaler component. And what I'm going to do, and excuse me while I do this quickly, is tell those machine sets that they're allowed to scale.
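A trimmed, illustrative machine set for one of those zones might look something like this. The API group and field names follow the OpenShift flavor of the cluster API as it later shipped and may differ slightly from the demo-era version; the names, instance type, and replica count are placeholders:

    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    metadata:
      name: commonsday-worker-us-east-2a
      namespace: openshift-machine-api
    spec:
      replicas: 1
      template:
        spec:
          providerSpec:                  # platform-specific section read by the AWS actuator
            value:
              instanceType: m4.large
              placement:
                region: us-east-2
                availabilityZone: us-east-2a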
So by default, they're sized at one each. And what I'm going to do is deploy a machine autoscaler resource that's going to say, for each of those three machine sets, which are targeted at particular zones, that they're allowed to scale between one and 12 compute nodes on demand. And so as workloads come onto the cluster, Kubernetes itself is literally going to see, hey, I don't have enough available compute to satisfy the workload, and it's going to dynamically size my machine sets up and down on demand. So let's do that now. I have a job I'm going to deploy, to deploy a work queue. And as we said, we don't really think that managing machines is interesting. We think putting machines to work is interesting, and this reinforces that. We have an API for machines. That's the last thing you should ever have to think about machines for. They're up to date. They're secure. They're running workloads. That's the goal. And programming the infrastructure to do that for you is, again, like that dog: the man doesn't have to go and say, I want 10, or I want 15, or I want 20. The machine set says, I know how to talk to the cloud API, I know how to ask for more machines, make it happen.

OK. So I went and deployed a work queue job that's going to run, I think, 100 parallel pods that make absurdly high requests for the actual work they're doing and require me to spend money to spin up new machines. So there's not enough capacity on this cluster to satisfy this compute need. But because I went and told the cluster autoscaler and the machine set API that we're allowed to dynamically size these things, in the background, if everything went well, instead of having six machines, I now have about 30, right? Or it'll slowly increase. And this is just creating compute on demand. Yeah, and this demo is on AWS, but we don't really think AWS is special. Oh, I'm sorry. We don't think AWS is any more special than the other clouds, or OpenStack, or on-premise clouds, or, honestly, bare metal. So Chris talked about bare metal, but we don't think this experience is unique to clouds. We think this is just something that should work everywhere, and that's part of what OpenShift is about. The way this machine API works is that the primitive for defining a machine is agnostic of the platform it's running on. And then there's a very specific provider section that says, OK, if I'm running on AWS, or if I was running on Azure, or on bare metal, you can give an environment-specific configuration. And there's a very tiny component called an actuator that knows how to create and deprovision machines on demand. So all we've done right here is done that with AWS. These machines, when they come up, it takes about three minutes for the machine to appear and then be ready to schedule workloads. So I know that we're tough on time. Way over time. We need a time autoscaler, we're so far over. So you can see that the nodes have booted. They're going to start to go ready. And the workload will be satisfied, and the pods will finish running, and the cluster will size down, and I'll stop spending absurd amounts of money. And in the end, that is managing compute on OpenShift 4. And that will be the configuration of the tool. So there's a ton more in this demo, and you're not going to be able to see it here, but if you find Derek, make him do this demo for you. But then we were like, well, this is really easy for us to go do as a development team. So what we said was, well, why don't we just make it available for everybody?
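For reference, the two autoscaling pieces from that last part are themselves just custom resources. A trimmed, illustrative sketch, where the API groups and field names are approximately what ships in OpenShift 4.x and the names and bounds are placeholders matching the demo's one-to-12 range:

    apiVersion: autoscaling.openshift.io/v1
    kind: ClusterAutoscaler
    metadata:
      name: default
    spec: {}                                     # defaults are fine for the demo
    ---
    apiVersion: autoscaling.openshift.io/v1beta1
    kind: MachineAutoscaler
    metadata:
      name: worker-us-east-2a                    # one per machine set / zone
      namespace: openshift-machine-api
    spec:
      minReplicas: 1
      maxReplicas: 12
      scaleTargetRef:
        apiVersion: machine.openshift.io/v1beta1
        kind: MachineSet
        name: commonsday-worker-us-east-2a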
So you had a heart attack. Yeah. And this man, I want to say thank you to this man for not having a heart attack when we sprang this on him. But you can go to try.openshift.com. You can do everything that we just did in this demo. Get your pull secrets, get access to OpenShift 4. We'll walk you through the process. Derek's demo is up in a Google Group forum. And this is just the beginning. We said we want to be that 10x multiplier. We want your feedback: Commons, contributors, and companies. We want to make sure that the direction we're going makes Kubernetes, and building and running applications at scale, easier for you. So please give it a try and let us know how we're doing. Thank you. Thanks. Thank you.