over to Jessica Forester. If you don't know Jessica, she was actually a key player in our keynote demo just a couple of years ago. She's one of our core contributors to Kubernetes as a whole, and she leads a team of people within Red Hat working on OpenShift to make Kubernetes and OpenShift vastly easier to use. So Jessica, over to you.

Hi, thanks, Bear. Good morning, good afternoon, good evening, everyone. Let me get started sharing here. All right, so I am Jessica Forester. I'm an architect on OpenShift, and similar to Clayton, I've been working at Red Hat on OpenShift since before it was Kubernetes-based. And I'm gonna talk about how Kubernetes doesn't have to be hard: what's Kubernetes the easy way? Kubernetes doesn't have to make you feel this way. I hear a lot: oh, it's so hard, it's complex, there's too much YAML, right? But instead of talking to you in slides, I'm gonna show it to you. So we are going live with demo time for the rest of my talk.

Now, to get started with Kubernetes, you have to have a cluster, right? So you've got to install that cluster. I didn't wanna switch back and forth between the CLI and the browser too much, so I'm just gonna show you real quickly: to get that first cluster installed, it's six pieces of information to our CLI installer. Now, I didn't wanna walk through that one, because I think what's even cooler is Advanced Cluster Management. This puts a GUI in front of managing all of your OpenShift clusters. So if I hop over to our clusters page, you can see I've already got a couple of clusters that I've created. We are going to create one right now: give this thing a name, pick Amazon for this, then the version of OpenShift that we want, in this case 4.5.2. And this provider connection right here, this is just a secret I've already set up inside of Advanced Cluster Management that's letting me talk to the cloud provider of my choice.
In this case, these are my Amazon keys that I've already set up, along with the base domain of my cluster. And now I get the flexibility to pick things like: you know what, actually I want this over in us-east-2. And for availability, I'm gonna load-balance this across some availability zones, and I can do the same for my workers. Now, if I don't wanna come in and use the UI every time, if I wanna automate this, you can flip on this YAML view right here, and the data that we built up going through this form, you can take that and put it into your GitOps workflows, or whatever automated workflow you might have. And I'm going to go ahead and create this cluster.

Now, what this is doing in the background is actually going off and using that same CLI installer that I gave you the teaser of initially; it's running that inside a pod, and in about 30-ish minutes this cluster will get created. I don't wanna sit here for 30 minutes letting you watch a cluster install, that would be kind of boring, right? But just to prove it to you real quick: there we go, it's running the install right now, it's going through, it's creating everything over in Amazon. So yes, this really did go and do something in the background; it's not smoke and mirrors, this cluster is creating. So I'm gonna go back over now.

So you've got clusters installed, that's great, but installing them is only part of the battle, right? You've gotta maintain those clusters. You've gotta update the Kubernetes version in there, right? You've gotta update the kubelet and the API server and all those things. So we make it super easy to upgrade these clusters. Right from here, I can see I've got a bunch of different versions that I could update to. I'm gonna go ahead and pick this latest version and upgrade to 4.5.8.
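For reference, those six pieces of information the CLI installer asks for end up in an install-config.yaml that looks roughly like this. This is a minimal sketch; every value below is a hypothetical placeholder.

```yaml
# install-config.yaml: a minimal sketch of what the CLI installer
# builds from its prompts; all values are placeholders.
apiVersion: v1
baseDomain: example.com           # base DNS domain for the cluster
metadata:
  name: demo-cluster              # cluster name
platform:
  aws:
    region: us-east-2             # cloud platform and region
pullSecret: '<your-pull-secret>'  # pull secret for Red Hat registries
sshKey: 'ssh-ed25519 AAAA...'     # SSH public key for node access
```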
I'm gonna jump over and launch into this cluster. And this cluster is updating now. So what we're gonna do, we're gonna go look at what's actually happening behind the scenes. In an OpenShift cluster, there are a bunch of things that we like to call operators, and they are little bits of logic that are maintaining this cluster over time. This is how we're managing your API server, your container networking, your ingress to the cluster. Each one of these operators has a very specific job that it's maintaining, and as the upgrade starts to roll out, each one of them will begin to update its particular part of the cluster. You can see it already says we're working towards updating to 4.5.8 at this time.

These operators aren't just going to handle updates, though. You can see our first one is starting to fire off here. They also manage these applications during their normal life cycle. So in the case of etcd here, if we go and take a look at etcd, you'll see what's actually happening with etcd live. We can see there are no unhealthy members within the etcd cluster, and we know three members of the cluster are available. So we can see exactly what's going on in each component.

Now, if as an administrator I need to update some configuration for this cluster, that's also managed by these operators. Everything is done through this global configuration, whether it's your API server, your ingress, your DNS: API-defined configuration managed by these operators, validated, and then rolled out to the cluster. And to show you an example of what one of those configurations looks like: you care about what's going on in your cluster, and if something does go wrong, you wanna know about it. So we've made it super easy to create what are called receivers, the thing that's gonna get alerts if something goes wrong on your cluster.
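To give you an idea, those global configuration objects are just cluster-scoped Kubernetes resources. A sketch of the ingress one, assuming a placeholder domain:

```yaml
# Sketch of the cluster-wide ingress configuration object that the
# ingress operator watches; the domain value is a placeholder.
apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster    # global config objects are singletons named "cluster"
spec:
  domain: apps.demo-cluster.example.com
```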
And whether that's your PagerDuty, your webhook, your email, your Slack, those are common ways that you're gonna wanna get information about what's happening. And with that same cluster install that I did just a second ago, this is the experience you get out of the box: easy creation of monitoring. So this monitoring stack, I wanna show you a little bit more about it. It's built on Prometheus to get those metrics, and you can dig deep into the details if you want to; let's say I wanted to know about file system usage. You can build complex queries right here. But if you're not a Prometheus guru and you don't know PromQL by heart, it's also already got dashboards to help you understand what's going on in your cluster. Whether that's knowing that, yes, etcd has all three of its members and what's happening with its disk sync. Or, you know what, I wanna know what's going on with the API server; I can quickly go take a look at that and see how much CPU it's using, how much network it's using, what the traffic looks like.

And these same metrics and dashboards that are providing this experience of understanding the cluster, they're also watching all of the workloads on the cluster to see how much is being used. This enables some really cool stuff with autoscaling. With Kubernetes, you may already be familiar with the idea of pod autoscaling: I have a workload, I have an application, I need to scale that up automatically based on demand. But what happens when your cluster runs out of nodes? You can't schedule any pods. So with OpenShift, we have machine APIs, and these APIs reach out to the cloud and automatically create a machine that can then become a Kubernetes node. When you combine those APIs with autoscalers, you get a really, really cool effect.
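Under the hood, a receiver created through that form lands in the Alertmanager configuration. A hedged sketch of what a Slack receiver might look like in alertmanager.yaml; the webhook URL and channel here are hypothetical placeholders:

```yaml
# Sketch of an alertmanager.yaml fragment with a Slack receiver;
# the api_url and channel values are placeholders.
route:
  receiver: Default
  routes:
  - match:
      severity: critical
    receiver: team-slack          # route critical alerts to Slack
receivers:
- name: Default
- name: team-slack
  slack_configs:
  - api_url: https://hooks.slack.com/services/T000/B000/XXXX
    channel: '#cluster-alerts'
```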
So here I have the three machine sets that were created, and machine sets are similar to other Kubernetes ideas: like a deployment scales up pods, a machine set scales up machines. To go with those, I can create these autoscalers, and defining them is really simple. There's not much in here, right? It's what's my min, what's my max, and which machine set am I actually targeting. And you get one of these for each machine set that you want to scale. So I've set those up already, and then I've set up a cluster autoscaler, which I'll show you real quick just so you can get an idea what it looks like. Again, there's not much here necessarily. I've said, you know what, I don't want you to automatically scale this cluster out beyond 12 nodes, because we don't want to spend a fortune, right? And then I told it, yeah, when I don't need it anymore, you can scale me down automatically. The default is that if it's not needed anymore, it'll scale those machines back down after about 10 minutes and then rebalance the workloads.

So I'm gonna go back and look at those machine sets again. You can see right now they're all scaled to one machine each. And over here, I've got a simple workload, but I've claimed that it requires an entire CPU for every pod. And I am going to now bump that all the way up to 15 pods, so I'm now asking for 15 CPUs in this cluster. We're scaling, we're scaling, and we're stuck. So now what? What's happening right now? If we go take a look at these pods, we'll see some of them are now pending, and these pending pods are stuck scheduling. Why? Because we ran out of node space. We can now see, yep, we only have three worker nodes at the moment and they all have insufficient CPU. And then we have our three control plane nodes, which my workload isn't allowed to run on.
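The two objects described above can be sketched like this: one MachineAutoscaler per machine set, plus a single ClusterAutoscaler capping the cluster at 12 nodes. The names and replica counts are placeholders for this demo setup.

```yaml
# One MachineAutoscaler per machine set you want to scale;
# the machine set name below is a placeholder.
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-2a
  namespace: openshift-machine-api
spec:
  minReplicas: 1                  # what's my min
  maxReplicas: 6                  # what's my max
  scaleTargetRef:                 # which machine set am I targeting
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: demo-cluster-worker-us-east-2a
---
# A single ClusterAutoscaler for the whole cluster.
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    maxNodesTotal: 12             # don't spend a fortune
  scaleDown:
    enabled: true                 # scale back down when no longer needed
```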
We can see now, though, if we come back to the machine sets, in just that little bit of time it's already said: oh, you know what, I'm out of space, I gotta start scaling up. So these machine sets have bumped up to two, bumped up to three. And if we come back over to our machines, you see it's provisioned new machines out in AWS in those zones. It's spun up an RHCOS machine and booted it up; it's gonna pivot it onto the latest version of CoreOS that goes along with this version of OpenShift, and then it'll call out, join itself to the cluster, and get provisioned as a node.

So that was kind of a whirlwind of the admin experience and how we can make that super easy, but this is DevNation, right? So where's the developer stuff? Well, that's next. I'm gonna now hop into the developer persona. So I am in the developer perspective inside OpenShift, and right here I just have an empty project, and I've got a Git repo. I wanna get started, and I wanna get started quickly. Now, if you've got Helm charts or some YAML that you happen to already have, or there are operators installed on this cluster, you can quickly get started with all of those things. But I've just got a Git repo, so I'm gonna get started there. What it's doing, it's saying: you know what, I recognize this thing, this is a Node.js repo, and I'm gonna go ahead and recommend this Node.js builder for you. And when we create that, it's actually going off and fetching the builder down. It's gonna pull in my source code; the build has already launched, and you can see it's on the latest commit that's currently out there. It's pulling it in, it's gonna build it, it will load the dependencies, the Node.js stuff here. Don't worry, this only takes like two more seconds. And it's pushing it, and we'll be done right now. So if we come back over to our view, you can see it's already starting up.
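Behind that import-from-Git flow, the console generates build and deployment objects for you. A hedged sketch of the kind of BuildConfig it creates, assuming a placeholder repo URL and image names:

```yaml
# Sketch of a generated BuildConfig for a Node.js repo;
# the repo URL and image names are hypothetical placeholders.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: nodejs-demo
spec:
  source:
    git:
      uri: https://github.com/example/nodejs-demo.git
  strategy:
    sourceStrategy:               # source-to-image build
      from:
        kind: ImageStreamTag
        name: nodejs:latest       # the recommended Node.js builder
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: nodejs-demo:latest    # pushed to the internal registry
```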
And so just by putting in that one Git repo, I have builds, I have deployments, and I've got routes into my application set up. Now I'm going to switch over to another project to show you one more thing. What I've set up in here is the exact same repo; the only thing that is different is I already went over into GitHub and put in a webhook. So it's exactly the same app, I've just also configured the webhook. And I am going to click right here on Edit Source Code. This is gonna load up my CodeReady Workspaces environment and open up Che. What I'm gonna do right now is find the H1 and say hello to DevNation. Gonna save that. I've got all my font sizes zoomed in here, so it's a little hard to see. We're gonna push it up. All right, so that pushed that change up to my Git repo, and GitHub is gonna fire off that webhook into my cluster, into my Node.js demo app. And you can see it already happened: that new build is running. We see we got our DevNation commit there that I just made. So same thing: it's gotta pull the code in, do its Node.js thing, grab its dependencies, and push that into the internal registry in the OpenShift cluster. It's in. It's launching. Starting up. And it's live: hello DevNation.

All right, well, that was a whirlwind. I hope you saw how easy Kubernetes can actually be through all of that. Think about everything we just did in the last 25 minutes: we installed, we updated, we showed how to configure, we looked at how to monitor, we automatically scaled the machines in the cluster, we deployed your application, and we edited it live in the browser in CodeReady Workspaces. Thank you, Bear and Becky.
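On the cluster side, that webhook corresponds to a GitHub trigger on the BuildConfig, sketched here with a placeholder secret. GitHub POSTs to the trigger's webhook URL on every push, which kicks off the rebuild just demonstrated.

```yaml
# Sketch: a GitHub webhook trigger in a BuildConfig spec;
# the secret value is a hypothetical placeholder.
spec:
  triggers:
  - type: GitHub
    github:
      secret: my-webhook-secret   # shared secret embedded in the webhook URL
```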