Hey, can you see and hear me? Absolutely, I can see you. Cool. I'm wearing a CoreOS shirt, so if it's off screen at any point, just don't forget it. Okay, cool. Let me share my screen and see if I can hop into this real quick. Let's see if you guys can see my entire screen now. There you go. Yes, we can. Okay, cool. So the first thing I'm going to do is actually run a script that will do a cluster install to DigitalOcean, and then I'm going to do a presentation. The goal here is to take up some of the dead time waiting for the install to complete with a presentation that explains what the script is doing. So let me go ahead and kick that off, just see that it actually starts running without issues, and then I'll kick over to the presentation and start. Okay, looks like it's started doing its job, so I'm going to go over to the presentation. Let me know when you guys can see the presentation and then I'll start off. I can see it. Perfect, okay. So this is a presentation about OKD on Fedora CoreOS on DigitalOcean. My name's Dusty Mabe. I'm a software engineer at Red Hat. I also have Neal Gompa, who's going to talk about his interest in DigitalOcean as well and help me fill some of the dead air in the room while the install is taking forever. So Neal, would you like to introduce yourself too? Yep. So my name is Neal Gompa. I'm a DevOps engineer at Datto and I'm a member of the OKD working group. Professionally I'm sort of interested in running OKD on OpenStack, but personally I don't have money for that and I don't have money for the cloud. So OKD on DigitalOcean is a rather affordable way to try it out and use it in a semi-production-ish kind of way. So that's what I'm here for: to kind of pretend to be the idiot to help explain everything that's going on and, as Dusty said, fill in the dead air during the very slow and oddly not very fulfilling screencast of the installation process. And don't let Neal fool you. He can pretend to be the idiot, but he's definitely not. Okay, cool. I'll hop in and do a short presentation. So OKD running on DigitalOcean is a blog post series that I kicked off a little while ago, and I'm slowly releasing new content for it. The first blog post I did was about preparation: things you need to consider before you're able to set up and run this script that I've created, and there's a link to it here. I'm hoping I can share these slides somewhere, but if you don't want to type this full link out, just go to dustymabe.com and it's in one of the last two posts that exist. What I'm going to do now is give a short overview of that particular blog post. More or less, what you do for that one is grab the OKD software, which includes the openshift-install, oc, and kubectl binaries, just in case we need those for different things, and it's nice to have them versioned together. The openshift-install binary we definitely need, because we need it to generate the information that we'll use to kick off the cluster. You need to grab the doctl binary, which is DigitalOcean's command line client. And then you also need to set up API keys for access. One of them is the DigitalOcean access token; that's what we use to talk to the API to bring up new droplets. And we also set up two environment variables that are the AWS access key and secret.
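Concretely, that credential setup boils down to something like the snippet below. The DigitalOcean token variable is the standard one doctl reads, and the AWS-style pair is what S3-compatible tooling expects, but treat the whole thing as an illustrative sketch with placeholder values:

    # Illustrative credential setup; values are placeholders.
    # doctl reads this token to talk to the DigitalOcean API:
    export DIGITALOCEAN_ACCESS_TOKEN='<your DigitalOcean API token>'
    # Spaces (object storage) keys, exposed via the AWS-style variables
    # that S3-compatible tools expect:
    export AWS_ACCESS_KEY_ID='<your Spaces access key>'
    export AWS_SECRET_ACCESS_KEY='<your Spaces secret key>'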
Those AWS variables seem a little weird, but they're used for access to DigitalOcean's object storage, Spaces. Spaces reuses the S3 APIs, so we actually use a different tool with a different set of credentials, and I'll explain why we do that in just a minute. Okay, the next thing you want to do is choose a domain to use with OKD. Basically, OKD needs entries in DNS in order to operate properly. Oh, actually I messed up this slide, it should have more information on there, but more or less: OKD needs entries in DNS. What you'll want to do is go to your registrar and point the entries for that particular domain over to DigitalOcean's domain servers, so that DigitalOcean can essentially manage that subdomain for you. Okay, the next thing is actually doing the deployment. This is the second blog post in the series, and it talks about the automation script that I created and what each piece of it does. So I created an automation repo that has the digitalocean-okd-install script in it and a few other things. One of them is a resources directory that contains a few files; I'll go over those in a little more detail in a second. Then there's a config file which has user customizations in it. And then there's the script itself, which I'll go over a little bit more in just a second. But first, let me dig into a few of those things. We have the resources directory, which has a few files in it. One of them is an install config for OpenShift, and this install config is just about as bare bones as you can get. It also contains a few things that get substituted into it, so it's actually a template; I just called it install-config.yaml.in. A few different tokens get substituted out: the base domain, the cluster name, the number of OKD workers, and the number of OKD control plane nodes. So that's just a basic install config that we feed in and substitute a few values into. Then we also have a few FCCT config files that are used to either make some configuration change to the nodes or work around bugs. For example, if I look at the bootstrap node here, there was a bug in Fedora CoreOS. It's already been fixed at this point, but I'm leaving this in here for illustration. There was a bug where the hostname on DigitalOcean wasn't getting set correctly: it was getting set to localhost and not the actual name of the node that you gave it in DigitalOcean. So this was an example of running hostnamectl via a sethostname.service that you could use to work around that particular bug. It's an illustration of something you could do to work around an issue in Fedora CoreOS if one happened to exist at the point you were running this. There are similar files for worker and control plane as well, so if you had a customization that you needed to make specifically for a particular type of node, you would do it there. These files themselves incorporate the output of the OpenShift installer: they merge in the config that's in the generated files directory, like the worker.ign ignition file. So that pulls in what the installer spit out and merges it with the other information that's in here; it's kind of an additive thing.
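To give a flavor of that kind of workaround, a sethostname-style unit in an FCC file would look roughly like the sketch below. This is a hedged reconstruction, not the exact file from the repo: the unit name matches what was said, but the ordering, the DigitalOcean metadata endpoint usage, and the exact command are my assumptions, and the underlying bug is long fixed:

    # Rough sketch of an FCC (Fedora CoreOS Config) hostname workaround.
    # The real resources/ file differs in detail.
    variant: fcos
    version: 1.0.0
    systemd:
      units:
        - name: sethostname.service
          enabled: true
          contents: |
            [Unit]
            Description=Set hostname from DigitalOcean metadata (workaround)
            After=network-online.target
            Wants=network-online.target
            [Service]
            Type=oneshot
            # Assumed approach: pull the droplet name from the metadata service
            ExecStart=/usr/bin/bash -c 'hostnamectl set-hostname "$(curl -s http://169.254.169.254/metadata/v1/hostname)"'
            [Install]
            WantedBy=multi-user.target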
Real quick, I'll go over the config file. If we look at this file, it's basically a bunch of key value pairs that allows somebody who is not me to come in and edit it to be what they want it to be. For example, when you choose your hostname, or sorry, your domain name, you can go in here and set it; mine was okdtest.dustymabe.com. If you had your own domain name or subdomain that you wanted to use, you'd update these two variables. If you wanted to change the number of workers or control plane nodes that get started initially by the script, you'd change these two variables. I think the minimum number of control plane nodes recommended is three, so I wouldn't go below that; I know Christian showed an example of just doing an all-in-one earlier, maybe. You can change the region that you want to use, so if you don't want NYC3 and want something closer to your locale, you can do that. The one thing you will need to update is your key pair: whatever SSH key you want to use when you create an instance in DigitalOcean. I believe it's required that you actually specify an SSH key, so you have to update this information right here. There's the size of the droplets that you want to use; you can run doctl compute size list to get more options there. And the Fedora CoreOS image URL: this is basically a path to the DigitalOcean artifact, and it will be used to create a new image in DigitalOcean for you, so you don't have to do that yourself. It does that via the custom images workflow. It also derives the name of the image from the URL and names it that, and if you run this script multiple times, it won't attempt to recreate one. If you go look at some of the first output from the script, it says an image with this name already exists, skipping image creation, so that saved us a little bit of time. And then there are a few other things you can change, like what you want your registry volume size to be, stuff like that. So to be clear, with the FCOS image URL for your digitalocean-okd-install: if you don't have the image already made, it's going to automatically convert it for you and make it DO-ready? Yeah, it will basically call into DigitalOcean's API and create a custom image for you using the URL that you provide. Fedora CoreOS produces DigitalOcean images, so the normal workflow for finding where that is would be to go to our download site and grab it from there: just right-click, copy that link, and provide it to the script. Oh, so we already have a DO version; it's just that you have to get it uploaded into DO. Yeah, the only thing that's missing right now is that with the images DigitalOcean provides, they don't do DHCP by default, I don't think, so there's some networking work that needs to be done on the Fedora CoreOS side to support their static networking config. But with custom images, they do DHCP by default, which is something we already support. So Fedora CoreOS DigitalOcean images work with the custom image workflow, just not as a provided image from DigitalOcean right now. That's why this script will automatically create the custom image for you if it doesn't exist.
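Under the hood, that image step amounts to a doctl call roughly like this. The image name and URL here are illustrative, not the script's exact values:

    # Roughly what the script's image step does (illustrative values).
    # Skip creation if an image with this name already exists:
    IMAGE_NAME='fedora-coreos-32.20200907.3.0-digitalocean'
    if ! doctl compute image list-user --format Name --no-header | grep -qx "${IMAGE_NAME}"; then
        # Create a custom image directly from the published FCOS artifact URL:
        doctl compute image create "${IMAGE_NAME}" \
            --region nyc3 \
            --image-url 'https://example.com/path/to/fedora-coreos-digitalocean.qcow2.gz'
    fi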
Okay, and let me hop back over to the presentation. So the digitalocean-okd-install script itself does quite a few different things, and at a high level, this is what it does. The first thing is to create a Spaces bucket, which is DigitalOcean's S3-compatible object storage, to hold the bootstrap ignition config. The reason we do that is, first of all, we already have access to DigitalOcean, so assuming you can use another DigitalOcean service seems safe. And you don't want to just put that particular ignition config anywhere, because it has secrets and things like that in it. The other reason is that you can't actually provide that ignition config directly to the instance on boot, because it's so large: it's above 64 kilobytes, which means DigitalOcean's user data service will basically say, sorry, your user data is too large; when you try to do the API call to create the instance with that user data, it won't allow you. So we needed to put a reference to this ignition config somewhere, and the safest place was to put it in S3 and then use a pre-signed URL to grab it on boot. That pre-signed URL is only valid for five minutes, so it's a way to lock things down as much as possible so nobody else can grab your configs and somehow take over your cluster. The second thing the script does is create a custom image in DigitalOcean for the linked Fedora CoreOS image; we talked about that briefly a minute ago. Third, it creates a VPC for private network traffic, so anything within the VPC is fine and you don't have to worry about it affecting other nodes that exist in your account. It creates a load balancer to balance the traffic. It creates a firewall to block unwanted traffic. It generates manifests and ignition configs via the openshift-install binary. It uploads the bootstrap config to Spaces to be retrieved by the bootstrap instance, creates the bootstrap, control plane, and worker droplets, creates a DigitalOcean domain and the required DNS records, provisions the DigitalOcean block storage CSI driver to the cluster, and then, once everything is up, removes the bootstrap droplet and the Spaces bucket, since those are no longer needed. One thing I will mention: at least for right now, I took a shortcut with the automation script. I only create one load balancer. Normally you would create a load balancer for the control plane nodes and then a separate one for the worker nodes where the routers are running. But to simplify things, I created a single load balancer that has the control plane nodes in it, and I modified the ingress routers to run on the control plane nodes instead. That's not best practice; it just happened to simplify the script quite a bit, and I might change it in the future. Okay, and there are more posts to come in this series. I want to start talking about configuration, for example setting up your certificates so you don't get self-signed certificate warnings every time, in case you wanted to share this cluster with somebody else, and also setting up identity access. For my personal blog, I use GitLab as the identity provider to log in, but you can use many different ones. And this is my shameless plug for Fedora CoreOS, but I know we're talking about OKD today.
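Coming back to that bootstrap-config point for a second: staging the ignition file and minting the five-minute pre-signed URL can be sketched with the aws CLI pointed at the Spaces endpoint, something like this. The bucket name is made up for illustration:

    # Illustrative staging of the bootstrap ignition config in Spaces.
    # Upload the generated bootstrap ignition to the (hypothetical) bucket:
    aws --endpoint-url https://nyc3.digitaloceanspaces.com \
        s3 cp bootstrap.ign s3://okd-bootstrap-bucket/bootstrap.ign
    # Pre-sign a URL that expires in five minutes (300 seconds); this is
    # what gets referenced from the bootstrap droplet's small user data:
    aws --endpoint-url https://nyc3.digitaloceanspaces.com \
        s3 presign s3://okd-bootstrap-bucket/bootstrap.ign --expires-in 300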
So let me hop back over to the install and see where we are. I'll talk briefly about that, and then I'll see if anybody has questions or if Neal wants to jump in and tell me what I should be doing differently. Okay, so where we are in the install process right now: first off, there was a check to see if the image we were going to upload already exists, and it appears that it does. It basically looks for an image with this particular name and assumes that if an image with that name exists, then it's what we want to use. It's theoretically possible somebody could specify an image with that name that wasn't that content, but we're going to make that assumption here and we should probably be okay with that. Okay, we created a VPC and a load balancer, and we waited for the load balancer to come up, because the load balancer IP itself is something we use in later commands; we wanted to wait for it so that we could grab that IP. We created a firewall. We generated the manifests for the install with openshift-install. We created the droplets, and then there's an informational command that runs and prints out all the droplet names and IPs. That's useful if you want to hop in and debug something on boot, maybe. It created the domain and the DNS records, and then it ran openshift-install to say, hey, let's wait for the bootstrap to come up. So the bootstrap came up; it took 11 minutes to come up and finish, and then the script removed the bootstrap resources. The bootstrap node is now gone, and that S3 bucket is now gone, so nobody is able to pick up that config and use it; it's gone out of S3. And now what it's doing, or what it was doing, is waiting for the workers to come up and make certificate signing requests. Those have all been approved now, and it's moving the routers to the control plane nodes. I mentioned this earlier: we only have one load balancer, so we needed to move the routers over to the control plane nodes. Before it can move the routers there, the routers need to actually be up and created, so it's waiting on the cluster to create those so that it can then move them. Right now in the cluster you can see we have the three control plane nodes, zero, one, and two, plus the workers, and we have quite a few pods that are up and running. You can use Ctrl-Z in here to see which ones are in a not-running state at this point, in case there's anything you need to investigate as the cluster is coming up. The cluster operators themselves are in the process of coming up, but what we'll do is hop back over here and just wait for the install to finish. One other thing I'll do real quick is browse around the account and look at things. So these are all of the droplets that we brought up today. This is the domain that was created just now, with all of the different DNS records that are needed for the cluster. And let's see, networking: this is the load balancer that was created. The way these things are set up, both the firewall and the load balancer are based on a tag that's given to nodes. For the control plane nodes, I gave them a tag of okd-test-control, and all nodes that match that tag will be a part of this load balancer. That means that if I add another control plane node in the future, as long as I create it with the appropriate tags, it'll automatically be added to this load balancer, which is kind of nice.
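To make the tag mechanics concrete, droplet creation in the script is along these lines. The names, size, and file paths here are illustrative rather than the script's exact values:

    # Illustrative tagged droplet creation (the real script differs in detail):
    doctl compute droplet create okd-test-control-0 \
        --region nyc3 \
        --size s-8vcpu-16gb \
        --image "${IMAGE_ID}" \
        --ssh-keys "${KEYPAIR_FINGERPRINT}" \
        --vpc-uuid "${VPC_UUID}" \
        --tag-names okd-test,okd-test-control \
        --user-data-file generated-files/control-plane-processed.ign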
And the same goes for the firewall. The firewall itself matches on a tag of okd-test, and so it matches all of the nodes in the cluster right now, which is kind of nice. Okay, so while we wait for this, do we have any questions? Or Neal, do you have anything to add right now? Neal is frantically multitasking with two different calls at the same time, so I think this is the first time I've ever seen Neal quiet. He was hand waving and everything on video a minute ago. Anyway, I've not seen any questions right now, which is great, actually. Are you back? I am kind of back now, yes. So, I mean, you did a good job, Dusty. You actually explained basically everything. The one thing I was wanting to ask was: how flexible is this deployment strategy? Is it really easy to turn knobs to make it bigger or smaller? What kind of knobs do you have? Yeah, so what I have right now is this config file, which essentially has a bunch of key value pairs in it. I tried to make it somewhat flexible without making it too complicated. The easiest way to be flexible is to adjust the number of workers and the number of control plane nodes. So if you just wanted a bare minimum cluster with just control plane nodes in it, basically you would set the number of workers to zero, and it'll bring up only control plane nodes; those nodes would then have both the master and worker roles. But in this case, I decided to bring up three control plane nodes and two worker nodes. You can easily change that just by changing these variables right here. A different region is something you can do. One thing that's kind of inflexible right now is that you can't use a different droplet size for your workers versus your control plane nodes; it just uses a uniform instance type for all of them. The other thing I touched on is the load balancer, the not-ideal configuration that I have; yeah, that one's not great and might be something I need to switch up in the future. But I don't think it's super flexible, and I don't know how flexible you need something like this to be. The other thing is it's written in bash, and I don't want to go too far with making things super flexible there. It's not ugly bash, but it's still bash. Did I lose everybody? Nope, we didn't lose anybody. We're all just... So how much does this whole thing cost? How much is this going to cost you right now with essentially six nodes? Yeah, so the cost is not quite what I would like it to be. I've got some things you can do to bring the instance size down that I haven't quite published; I was going to make that a follow-up blog post. But let's see, what do we have? We've got the 16 gigabyte size instance. For this one, I'm using the 16 gigabyte instance because that's what the OpenShift documentation recommends. For my personal cluster, I'm using the eight gigabyte instances, which are half the price. But this particular cluster, with five nodes at 80 bucks a piece, yeah, it's going to cost you some money every month, and it's probably not going to be something you're just doing for fun. At least it's better than if you were doing this on AWS. Yeah, yeah.
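If you want to comparison-shop droplet sizes and prices yourself, doctl can list them along with monthly pricing; the format fields here are the standard doctl column names:

    # List available droplet sizes with their monthly prices:
    doctl compute size list --format Slug,Memory,VCPUs,PriceMonthly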
And like I said, I've got some things that I've done with CVO overrides, which basically cut out some of the fancier features that I don't really need if I'm just running my blog and a couple of other things. I mean, honestly, if I'm just running my blog and a couple of other small things, I don't really need OKD, but I use it as a tool to learn, right? I want to learn this stuff and be up to date on the state of the art in the Kubernetes landscape, and I think this is a really good way to do it. The nodes I'm using for my personal cluster are the eight gigabyte nodes, which are half of that, and I don't run any worker nodes; I just run three control plane nodes, so, yeah. Oh, at least it's moderate; you could kind of eat that cost. Yeah, it's better. But yeah, let's see. It looks like... Should we be worried that "waiting for the ingress controller" has been going for so long? Let's see, let's see. All right, so, what is the magic button? There's a button. What is this k9s thing that you just started randomly using? It's basically a text user interface for Kubernetes, and it allows you to browse around your cluster without needing a web browser or running a whole bunch of commands, right in your terminal. I find it a lot easier to use, especially when you're just learning Kubernetes or OpenShift. Let's see, ingress. So the ingress is not up yet. Let's see if we have... Don't be sad. No, no, we shouldn't be worried. Although, you know, I actually don't know how long this took earlier when I brought up a cluster. So it just takes a while. Yeah, I mean, it's just part of the cluster, right? And it takes a while for an OpenShift install to complete, unfortunately. Let's see, what do I want to do right now? Let's go look at the pods and see where we are. It says all of them are in a good state right now, and let's see which ones were brought up recently. So yeah, I think it's just progressing through bringing up the cluster. And Aaron is asking: could the cluster just be built as three nodes with schedulable masters? Yes. If you just want three control plane nodes, basically all you do is come into your config file and set the number of workers to zero. In the documentation, which is not really documentation, it's just a comment, I mention the minimum number of workers is zero. And according to the documentation, the minimum number of control plane nodes is three, although I think Christian showed earlier that you can run an all-in-one node; that's not something this script necessarily handles. Basically, if you have no workers, then the control plane nodes will be marked as schedulable. I think that's just something the OpenShift installer does by default. I know Christian mentioned this earlier, and I was going to ask him about it when he did. If I look in the resources directory at the install config, I substitute in whatever values are set in that config file, and if the value right here for the number of workers is zero, the OpenShift installer will automatically say, okay, I have zero workers configured, so I'm going to mark the control plane nodes as schedulable.
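As a sketch, the relevant part of that templated install-config looks something like this. The token names are my approximation of the template's placeholders, and platform none is the usual UPI-style setting:

    # Sketch of resources/install-config.yaml.in (token names approximate):
    apiVersion: v1
    baseDomain: $BASE_DOMAIN
    metadata:
      name: $CLUSTER_NAME
    compute:
      - name: worker
        replicas: $NUM_OKD_WORKERS        # 0 makes the control plane schedulable
    controlPlane:
      name: master
      replicas: $NUM_OKD_CONTROL_PLANE    # 3 is the documented minimum
    platform:
      none: {}
    pullSecret: '...'
    sshKey: '...'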
And in this output, when we're looking at the nodes, these three nodes up here would have two roles: they'd be master and worker. So yeah, if you want to change it to just run three nodes, all you have to do is update the config file to have zero worker nodes, and that should do it. That's pretty much all there is to it. That's actually how I started with this script: I didn't start with worker nodes, I just started with control plane nodes. And that's part of the reason why there's still just one load balancer; it's much easier to have one with how I set everything up with tags and whatnot. Okay, so the ingress controller was created, so we can go look at that now. It exists, and we actually updated it so that it would run on the control plane nodes themselves. Let me get out of that, and we'll go over here. All right, so the ingress controller was created, and then we updated it to move the routers over to the control plane nodes. If we go look at that particular code, it basically waits until the ingress controller exists and then patches it. Control plane nodes by default carry a NoSchedule taint, so you don't typically schedule workloads on them; we had to modify the ingress controller to tolerate that, and also to make it match on the control plane nodes so that it would get scheduled there. So we patched that. The other thing we did: we want the router to run on every control plane node, because the way the load balancer is set up, it routes traffic to all of them. So we set the number of replicas of the router to be the number of control plane nodes that we have. If I go look, we have three pods in the openshift-ingress namespace that are the routers, running on control plane zero, one, and two; that's all of them. And let's see where we are now. Now it's basically waiting for the cluster to come up and finish the install, and it shouldn't be that much longer; it's just been a lot of time in this step.
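The patch itself is roughly this kind of oc invocation against the default IngressController; treat the exact field values as a sketch of the approach rather than the script's literal patch:

    # Illustrative patch: run the routers on the control plane nodes
    # (tolerating the master NoSchedule taint) and scale them to match
    # the number of control plane nodes (3 here).
    oc patch ingresscontroller/default \
        -n openshift-ingress-operator \
        --type merge \
        --patch '{"spec": {
            "replicas": 3,
            "nodePlacement": {
                "nodeSelector": {"matchLabels": {"node-role.kubernetes.io/master": ""}},
                "tolerations": [{"key": "node-role.kubernetes.io/master",
                                 "operator": "Exists",
                                 "effect": "NoSchedule"}]}}}'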
But while we're waiting for that to come up, I have to ask: did you bring k9s into Fedora? Did I? Yes. I haven't yet. I have been meaning to run go2rpm on it to see how much work would be there. Yeah, I've been wanting to do it for a while. Last time I was in that setup I was doing some Rust packaging, and I was planning on running it against k9s, and then I think my box got shut down. It's kind of funny. I did a lot of research recently into why my box shut down, and in the logs it says the power button key was pressed. I was searching Google, like, why would my box shut down and tell me the power button key was pressed when I know I didn't press it? And then somebody on Stack Exchange was like, are you sure you didn't press the button? And it turns out my wife had let my kid, my one-and-a-half-year-old, come up here and get around my computer and stuff, and she was like, oh yeah, he pressed the button. Well then, there you go; that explains it. And there I was, all suspicious. I was like, this box is almost 10 years old anyway; oh my gosh, I need to get something new. I mean, if you had just let it go, you would have had a great excuse to get a new computer. I know, yeah. A 10-year-old computer is not a great thing to be working on in 2020. I've been meaning to get a new desktop, but it's something I've put off for a little bit. But honestly, the performance of this thing is pretty good, and Linux does a good job of harnessing those resources anyway. That's true. So do we have any questions in general about how all of this is set up? Does anybody want me to poke around in here and show what resources are created or whatnot? I'm wondering if you haven't finished deploying yet. Oh, not quite yet, no. That's what I thought. So no, there aren't any big questions. And I think someone put the link to your k9s blog post in the chat, so a lot of people are very interested in that. It'll be cool if you can get more folks using it. Yep, so that's in the chat, and if you don't like typing links, you can just go to dustymabe.com and it's the third one down. I like the new logo on your blog. Thanks, yeah. I had the default one for a long time and I really wanted to change that little Octocat to something different. So I finally broke out GIMP and started trying to merge the Project Atomic and CoreOS logos together, and this is what I ended up with. Well played. The logo is super neat. I saw it the other day and I thought, this is super cool. So maybe while we're waiting, talk a little bit about Fedora CoreOS and where that's going, what that community is. I know you did a shameless plug for it earlier, but maybe we've got a few minutes here while we wait for this to initialize. Yeah, as much shameless plugging as possible. I mean, we're going to be waiting for the thing to come up anyway. Yeah, okay, give me one second. I'll see if I can find a good presentation that I might be able to sponge off of here. And while we're doing that, I just wanted to say that we on the OKD side of the house are incredibly grateful to the Fedora and Fedora CoreOS community, as well as the Atomic folks that preceded it, for all the efforts that went into making Fedora CoreOS what it is today. We really have been doing some tight coordination between our releases and their releases, and we look forward to continuing to collaborate with the Fedora community. I think this is an exciting new chapter for Cloud and Fedora and OpenShift. So thanks for everything you all are doing. Yeah, absolutely. So I'm going to reuse a presentation that I gave maybe a week ago or so. I've been making the rounds with this presentation, so if you've seen it before, I apologize, but I'll briefly go into what Fedora CoreOS is, and I might blow past a few of these slides, but that's okay. So Fedora CoreOS itself is an emerging Fedora edition, and it came from two different communities that we've put together: CoreOS Inc.'s Container Linux community, and Project Atomic's Atomic Host, where Project Atomic was primarily backed by a lot of different people in the Red Hat ecosystem. Fedora CoreOS incorporates the Container Linux philosophy, provisioning stack, and cloud native expertise, and from Atomic Host it incorporates the Fedora foundation, the update stack, and the SELinux-enhanced security. Some of the features of Fedora CoreOS: automatic updates. This was core to Container Linux's value and something we decided to also pick up with Fedora CoreOS. With automatic updates on by default, we need them to be reliable for people.
So in order to not break people's systems, we try to catch issues in several ways. One is having extensive tests in our automated CI pipelines; another is having several update streams that preview what's coming to stable, so users running the various streams help find issues before they land in stable. We also have managed upgrade rollouts over several days, which lets us find issues with an update and stop the rollout. So if the first 10% of users that got an upgrade had issues with it, we'll stop the rollout so the other 90% basically never get affected. And when things go wrong, people can always roll back to the previous version; hopefully that still works for them. For OKD itself, the updates are managed by the cluster, so some of this doesn't apply in quite the same way; it does, but differently. As far as the different update streams go, OKD doesn't necessarily take advantage of those, although in the testing infrastructure we do, so we try to catch things in OKD before they hit users as well. In Fedora CoreOS, the update streams that are offered are next, testing, and stable. Next is for experimental features and Fedora major rebases; our next stream should be moving over to Fedora 33 sometime soonish. Testing is a preview of what's coming to stable, usually a point-in-time snapshot, and it will end up in the stable stream in a few weeks' time, assuming people don't find issues. Stable is the most reliable stream that we offer: the promotion of the testing stream after some bake time. The goals of having these multiple streams are to publish new releases into the update streams every two weeks and to find issues in the next and testing streams before they hit stable, so that our users happily leave automatic updates on and don't disable them. Fedora CoreOS also has automated provisioning. It uses Ignition, which is what Container Linux also used to automate provisioning. The idea is that any logic for the machine's lifetime is encoded in the config. In the case of OKD, all of the information for bootstrap is in the config; the node comes up, joins the cluster, and then the cluster itself manages it from there, so it's kind of a hybrid approach. And for these nodes, whether you're starting in the cloud or on bare metal, you use the same starting point: an Ignition config either way, which is kind of nice. There are some more details in here about Ignition that I'll skip over. Being cloud native and container focused, meaning software runs in containers, helps our reliable update strategy: the host is basically just the host software plus a container runtime, done well, and applications running in containers are easier to upgrade reliably. In general, Fedora CoreOS is ready for clustered deployment: you can spin up 100 nodes, have them join a cluster, and then spin down the nodes when they're no longer needed. If we have enough time, I'll actually demonstrate adding a new worker node to this cluster in just a minute. Fedora CoreOS itself is offered for a lot of different cloud and virtualization platforms: we have Alibaba Cloud, AWS, Azure, DigitalOcean, Exoscale, GCP, OpenStack, Vultr, VMware, QEMU/KVM, and we're trying to add more all the time.
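Since Ignition and FCCs keep coming up, here's the shape of the provisioning flow in miniature. The config content is a made-up example, and fcct was the name of the config transpiler at the time (it was later renamed butane):

    # A trivial FCC, example.fcc (contents are illustrative):
    #   variant: fcos
    #   version: 1.0.0
    #   passwd:
    #     users:
    #       - name: core
    #         ssh_authorized_keys:
    #           - ssh-ed25519 AAAA...yourkey
    # Transpile it to the Ignition JSON the machine consumes on first boot:
    fcct --pretty --strict < example.fcc > example.ign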
OS versioning and security is another feature we like to mention. For example, if you run rpm-ostree status on a machine, you get a specific identifier, so you can take that particular version or hash and share it with somebody and say, hey, I'm running Fedora CoreOS, this version, and I'm seeing this problem. That single statement tells me, as a developer on the platform, almost everything I need to know: exactly which version of systemd you're running, exactly which version of the kernel you're running, and all that. That's very valuable, in my opinion. It also uses read-only file system mounts, so in theory, anything that was delivered with the OS hasn't been changed. For example, if somebody accidentally does an rm -rf while playing around on the system, it'll prevent issues there, and it also guards against unsophisticated attacks: if somebody happened to get onto the system and tried to run some dumb script, it wouldn't allow them to modify the existing software on the system. For more sophisticated attacks, we also have SELinux enforcing by default, so hopefully if one of your containers does get compromised, it doesn't gain any further access to the system; it can only modify that single application. As far as what's next: we want to add more cloud platforms. We want to do multi-arch support for Fedora CoreOS, so aarch64 is the first one we're chewing off, and then hopefully we can add support for PowerPC and s390x. We want to make FCCs, the configs that are used to generate Ignition, a little more human-friendly, making common things people do easier. We also want host extensions, for software that's extremely hard to containerize, maybe a small system utility; we want to enable people to layer that stuff in as packages and not have issues with their upgrades. Right now, if you package-layer stuff, your upgrades can pretty easily get into a situation where versions of things don't work well together and the upgrade won't go through. That's a good thing, because the upgrade catches it and stops, but it's also a bad thing, because it means you're no longer automatically upgrading and you have a system that might be in a state you don't understand. Improved documentation, tighter integration with OKD, and so on. So that's the short spiel on Fedora CoreOS, and let's see, oh, huh, I see. All right, let me explain this. The install actually finished, so that's good. This error down here is because, for some reason, I guess I pressed the up arrow key and enter at some point, so it started to try to run the script again and got an error because the VPC already existed with that same name; just ignore that bottom part. So our cluster is up and running. If we want to go look... So, question here. Yep, go ahead. In the output, it says the time elapsed for actually doing the installation was eight minutes and 25 seconds. I feel like that's not even close to how long that was. No, so that is, I think, from this point, right? Okay, so from when the OpenShift console route and the console operators were being spun up, it took eight minutes and 25 seconds.
Right, yeah, so we waited a long time for the ingress controller to be created, and then it popped back into openshift-install waiting for the cluster to initialize, and the next stage was waiting up to 10 minutes for the console route to be created; then the install was complete, and yeah, it took eight minutes for that. So these times are a little misleading. Maybe I should wrap the entire script in some sort of time call that says how long the whole thing took. That probably would be a good idea, just so people have a better approximation of how much time this actually takes. Right, okay, so real quick, I'll copy that. And I know I'm running out of time a little bit, so I just don't want to... That's okay, take your time. It's fine, don't worry about it. We started, what, 15 minutes late anyway, so who cares? Well, the next speaker cares, but that's all right. Is somebody literally starting at noon? Because, wow. Zvonko goes up next with GPUs. So here we are in the cluster, and obviously I think Christian gave quite an overview of everything you can do in here. Real quick: I have a small script that will add a worker node to the cluster. I'm planning to just add it into digitalocean-okd-install and make it a sub-command or something like that. But this script itself will more or less just run a doctl compute droplet create, give the droplet a name and the appropriate tags, and the node will pretty much just join itself to the cluster. So let me run it: okd-worker, what did I use, dash two? So basically what's happening now is a new droplet is getting created, this one, and it will join itself to the cluster, which is kind of nice. The way I set things up with tags means that if an instance has a certain tag, it automatically gets added to the things that apply to it; I guess the one on screen is a bad example because that's just the control plane nodes. So okd-worker-2 is automatically a part of this firewall now because it has that tag, which is nice; it makes joining nodes to the cluster a lot easier, I think. You really only have to run, oh, oh yeah, sorry, I ran that with set -x, so that's why the output is so confusing. But yeah, you really only have to run the droplet create command in order to join a node, so that's kind of nice. Oh, actually, before that node can join, once it comes up, a certificate signing request will come in that we'll need to approve. I plan to automate that as well; I just haven't done it yet.
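For reference, approving those requests by hand is just the standard OpenShift CSR flow, nothing specific to this script:

    # List certificate signing requests and look for Pending ones:
    oc get csr
    # Approve a specific request:
    oc adm certificate approve <csr-name>
    # Or approve everything currently listed in one shot:
    oc get csr -o name | xargs oc adm certificate approve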
So Dusty, a question showed up in the chat that I think is important enough for you to answer on the recording: is it possible to auto-scale OKD on DigitalOcean, and if not, when will it be? So I don't think it's possible; it's definitely not possible as part of what I'm doing here. Part of the reason I went through and automated this is that DigitalOcean is just not part of openshift-install, right? openshift-install itself knows how to talk to various clouds: it knows how to talk to AWS, it knows how to talk to GCP, and so on. That also means there's a machine API operator that knows how to talk to AWS and GCP and whatnot and bring up new nodes in the cluster if you have set up auto-scaling, I think. Mike McCune actually did a demo of the autoscaler for us a few weeks ago in one of our working group meetings. I don't know if it's fully functional in 4.5, he might have been doing it off of 4.6, but I have actually seen it live. Nice, yeah. For auto-scaling itself, I haven't personally used it, but I've definitely done the thing where I've got a cluster up in GCP and I just go and edit the number of replicas in a MachineSet and it spins me up a node, right? It just does that for you, which is really darn cool. But yeah, I think the moral of the story, or the answer to the question, is no. I mean, there might be Terraform modules for DigitalOcean, I'm sure there are, but none of that exists in OpenShift, and we don't have DigitalOcean as a supported platform in OpenShift. If DigitalOcean were a supported platform in OpenShift, or even just in OKD, not necessarily OCP, the product, that would pretty much obsolete everything I've done with this script. More or less, I was just trying to scratch my own itch, and it started as a short bash script and ended up as a bigger bash script. Automated UPI, that's what you've done. Right, yeah, and the only reason I was able to automate UPI is because DigitalOcean has an API that I can use, right? Yeah, because if that didn't exist, you'd be toast. This is the bare metal workflow, automated, because DigitalOcean has an API, right? So that's it. Okay, so I've got a pending CSR right now, so I'm going to go approve that. I'll approve that one, and then there should be another one that comes in here in just a second that I'll approve as well, and then that node will be able to join the cluster. Okay, so now if I go look over here, that node is joining the cluster. It's not quite ready yet, and its pods haven't come up yet, but eventually it'll have some pods on it and it'll be able to schedule stuff. And let's see, where, yeah, okay. The other thing, too, is that I installed an older version, but as soon as I log into the console, it tells me a new cluster version is available; oc get clusterversion tells me what I'm at, and oc adm upgrade tells me what's available to upgrade to. But I won't do that, because it takes a long time and I don't have time. But yeah, so that was the demo. Do we have any other questions? I think that covered all of the questions I can see coming in from other places, too. I think you've done an awesome job covering everything, and we look forward to continuing to collaborate with you guys and seeing more DigitalOcean folks coming on board, testing this out, and giving us feedback on your scripts and everything else. So I'm going to queue up our next speaker: Zvonko is going to demo deploying and configuring NVIDIA GPUs for OKD 4 on AWS. So I'm going to pause the recording here for a minute, and then I will upload this video.