Welcome to the start of a wonderful OKD deployment marathon. We've got a number of folks from OKD, which is the community distribution of Red Hat OpenShift, and today we're going to demo on as many platforms as we can coerce our members into, all day long. If you don't know OKD, you can pop over to okd.io and read all about it. We've been working diligently on it and have recently done our GA release for OKD 4; it's out there in the wild, and that's what we're going to be demoing today.

To kick us off, we decided to start with AWS: the cheapest AWS cluster possible, with a full AWS deployment to follow. Christian Glombek, who is the co-chair of the working group, is going to set us up and show off the first bits. A surgeon general's warning on the day: it will be fluid, because these demos are live, and we may sneak a few more in between things if people run short. We're going to try to do the Q&A while the clusters are booting up, because there's a lag and really not much to see on the screen. So if you're watching, you can ask questions in the chat wherever you are, whether you're in one of the live streams or in Blue Jeans itself. Without further ado, I'm going to let Christian take over, share his screen, and walk us through the cheapest AWS deployment we could figure out.

Can you see it? Is it big enough?

It is big enough, and I'm going to take my video off and let you rock and roll.

Okay. So in order to get a minimal install going, we really just have to change a few parts of the install config. Usually what you do is run openshift-install create install-config. I've already prepared that, so I'm not going to run it again; this is where you're asked for your AWS account credentials and where you want to install. This is the installer-provisioned infrastructure path. Instead, I'm going to show you what to do after the install-config.yaml is created with that command: you just go ahead and edit it a little. What we do here is scale the number of worker replicas down to zero, and the master replicas down to one. I've also changed the type of the AWS node to m5.xlarge, just to be sure, because we still need quite a bit of RAM for the install: about 12 GB while installing, and around 6 GB once it's running. You could additionally put in a smaller node type for workers if you want to scale up worker nodes later, but right now we're just going to install the one-master cluster. That's kind of it.

So after openshift-install create install-config, we run openshift-install create manifests, and after that we can go ahead. Actually, if you don't want to edit anything and just take the defaults (it won't be the smallest, cheapest cluster), you'd just run create cluster right off the bat. Because we wanted to edit the install config, I had to do the steps in sequence here, but create cluster will also create the install config and the manifests for you if they aren't present.
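For reference, the relevant parts of an install-config.yaml edited the way Christian describes might look roughly like this. The domain, cluster name, region, and keys here are placeholders, and the pull secret is the fake one discussed later in this session:

    apiVersion: v1
    baseDomain: example.com          # a domain your AWS account owns
    metadata:
      name: okd-cheap                # placeholder cluster name
    compute:
    - name: worker
      replicas: 0                    # no worker nodes at all
    controlPlane:
      name: master
      replicas: 1                    # single control-plane node
      platform:
        aws:
          type: m5.xlarge            # enough RAM to get through the install
    platform:
      aws:
        region: us-east-1
    pullSecret: '{"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}'
    sshKey: ssh-ed25519 AAAA...      # your public key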
So: openshift-install create cluster, and we can start now. Really, all I've done is scale the replicas down to one control-plane node and no worker nodes at all.

Then, I think, what I forgot here, because I had to change laptops: I should have also put the ingress controller on the master node. So we're probably not going to have ingress right now, I'm just noticing. Usually you have to change the label for the ingress controller as well, because it normally runs on the infra nodes, which get scheduled on workers; I should have changed that to master. Anyway, let's see where we get with this. So that's an additional step: changing the ingress controller's placement to point at the master. Well, now we have to wait a little bit while the install runs.

I think we can go to Q&A now. Someone was asking what the O stands for in OKD, and it's interesting: the O stands for nothing. When we shifted to Kubernetes from the older version of OpenShift, which was a Ruby on Rails and MongoDB platform-as-a-service, we had to rename the project to be more in line with other Kubernetes distributions, and legal, marketing, and trademark issues made us use a three-letter acronym, much like EKS or OCP and the other acronyms out there for Kubernetes distributions. So the O actually doesn't stand for anything, not even Origin. I like to joke that it stands for "okay, Diane," because everybody agrees with me on that. It took a lot to figure out that acronym, and that's where we got our OKD panda and everything else.

A technical question for you: Nirla is asking, do we need to generate ignition files for the install?

I'll take this one. That's what the installer does for you. In the second step I just ran, openshift-install create manifests, it generates all of that, with the ignition content ending up within the MachineConfig objects and whatnot. And because you're running it step by step, you still have the opportunity to change those: if you do want to provide your own custom ignition, you can add to the generated config that's output by create manifests.

There's also the question: curious if the install could be done with spot instances?

Yes, I think you can. Because I don't install any worker nodes right now, if I had changed the instance type for the worker nodes, I could later scale up workers that would be spot instances. So that is possible as well. I'm not sure how that would work on a master node, to be honest, but it's definitely possible with workers.

All right, and there was one other one, the question everybody always asks: are those steps documented anywhere yet?

We have a draft document right now, and there's a bit of documentation floating around as well. This was originally done by Vadim Rutkovsky, who can't be here today, so together with him I'll definitely submit a document to the OKD repository, and we'll have it properly documented. Maybe just to note: this is really just for testing purposes. It's not an HA cluster, and you won't be able to upgrade easily, because etcd quorum won't be kept.
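To recap the minimal-install flow Christian walked through, step by step it's roughly this (the directory name is a placeholder):

    openshift-install create install-config --dir=okd-cheap
    # edit okd-cheap/install-config.yaml: worker replicas 0, master replicas 1, instance type
    openshift-install create manifests --dir=okd-cheap
    # optionally adjust the generated manifests here
    openshift-install create cluster --dir=okd-cheap

Or, if you're happy with the defaults and don't need the cheap single-node variant, just run the last command on its own and it generates everything for you.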
So yeah, we'll still get it documented, but it's just a testing setup for when you want to do some really cheap tests on AWS. Not supported for any real workloads.

Lee Schwank is asking: is there a comparison chart of the capabilities of OKD, OCP, and OKE?

What is OKE again? It's OpenShift Kubernetes Engine, I think; another variation of the product that is simply the Kubernetes pieces, which you can get support for. I'll look for that on the corporate website. I haven't seen one and haven't prepared one, or been asked to before, so I'll see if I can find one that's at least OCP versus OKE and share it with the group shortly.

I think there is an OKE-versus-OCP one, but I'm not sure about OKD. And there shouldn't be too much difference between OCP and OKD, except... maybe you want to talk a little bit about Fedora CoreOS?

Yeah, sure, I can do that. That's actually the main difference. I'm not sure what the difference between OCP and OKE is in particular, but OCP and OKD are essentially the same cluster code. The only real difference is that we use Fedora CoreOS instead of RHEL CoreOS as the base operating system. We manage the operating system updates the same way, through the cluster, so there's one lifecycle for the cluster and the base OS: if you update OKD or OCP, you also get an OS update. The nodes will automatically pull down the new image, lay it onto disk, and reboot into it, obviously in a safe fashion, one after the other, so if there's any blocker, say one node doesn't come up, the update will fail.

The CoreOS technology today is essentially a fusion of rpm-ostree and Ignition. rpm-ostree gives you an image-based operating system, or rather it's the creation tool for one: you can compose an operating system image with it, it's going to be immutable, and you can update it. I think Colin Walters, the creator of rpm-ostree, likes to say it's like a git for OSes: you really have a commit hash that represents your on-disk state, and you can upgrade from one to the next atomically. We do that through the cluster, and in the case of OKD we do it on the base of Fedora CoreOS, which you can also use standalone. Fedora CoreOS ships the Docker and Podman engines, so if you just want to run single-node workloads in containers, that's the right operating system for you. It's really geared towards running containerized workloads.

And even Fedora CoreOS and RHEL CoreOS aren't that different; it's just the package sets they use. One is the Fedora package set and the other is the RHEL package set, so it's mostly the same packages, just different versions: you get the RHEL kernel in RHEL CoreOS and the Fedora kernel in Fedora CoreOS. Mostly it's the same tech running there.

In about two hours, at 1500 UTC, our third demo will be Dusty Mabe, who works on Fedora CoreOS and helps do the community management for it, and he's going to demo OKD on DigitalOcean. So if you have more questions about Fedora CoreOS, we can pepper Dusty at 1500 UTC; if you want a deeper dive, come back and join us for that.
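As a concrete reference for the "git for OSes" model Christian describes, on a standalone Fedora CoreOS machine the atomic update flow looks roughly like this:

    rpm-ostree status     # shows the booted deployment and its ostree commit hash
    rpm-ostree upgrade    # stages the next image alongside the current one
    systemctl reboot      # boots into the new deployment; rollback stays available

On an OKD cluster you never run these by hand: the machine-config operator drives the same mechanism for you, node by node.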
All right, how is your deployment going?

It's still running. The bootstrap API is up by now. I can share my screen for a second here, if that helps, just to let you see what's going on.

And Ashraf, you were asking about Azure: Azure is our second-to-last presentation today, so we will have a demo of deploying on it. The only one we had to cancel was GCP, and that was only because Vadim couldn't come, and it's not much different from the AWS one anyway.

Can you see my screen again?

I can indeed.

Okay, perfect. So this is what you're going to see when you run the create cluster command. It's just going to take a while. First we have to wait for the Kubernetes API on the bootstrap node to come up; that has happened now, and we're bootstrapping. That can take up to 40 minutes, though it usually doesn't take that long. We're about halfway through, I think.

There's one more question, and these are great questions, everybody, because after this we're going to grab all of them and turn them into an FAQ for OKD. So you're helping us. One of the questions was: can I convert an OKD cluster to an OCP cluster?

Yeah, that's definitely a thing we want to do, and it should already be possible if you force an upgrade to the OCP release. I've never tried that, but it should be technically possible. We want to actually test it in CI at some point to make it a good story, but nobody's tested it so far, I think.

I also just pasted a link in the chat to our working document for the cheap AWS cluster, the single-node one. There are a few more tricks in there, like running the infra on the master node and setting up spot instances for workers.

One other thing I would add: as you can see, this is a working group, and we work. If you'd like to join, I've just put the link to the Google Group for the OKD working group in the chat, or you can go to okd.io and find a link there. If you have questions that we don't answer, or you want to work on, say, a migration path from OKD to OCP with us, we would love to have you. Christian, are we actually hosting a call tomorrow?

I think so, yeah, there is one. I know it's KubeCon week, but we did not cancel, because the work never ends, and we'd love you to join and help out. Please feel free to sign up for that mailing list and group. I'll also quickly paste the link for Fedocal, the Fedora calendar where we have our OKD meetings. Please do join the Google Group, but this is a calendar you can subscribe to, and it will have all of our meetings on it.

Frank had another question: do you need a pull secret to deploy OKD?

No, you don't; you do need a fake pull secret, though. You can read about that in the OKD README. It's really just a JSON struct with a fake entry in it.
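The fake pull secret from the OKD README is along these lines; the auth value is just a placeholder base64 string, and as noted below the entry name isn't checked either:

    {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}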
The reason it exists at all is that we're still very close to the cluster code, the OCP code, including in the installer. The installer is actually the only part right now that isn't exactly upstream, so we've had to fork it a little bit. We're going to re-merge those soon, I hope, and it's not too different anyway. But it is one of the things we didn't want to pull out entirely, because it's definitely necessary for the OCP product; for OKD we don't really need it, and we haven't found a super nice solution for dealing with that. So if you don't have a pull secret and don't want to create one with Red Hat, you can use a fake one, and then you won't be reporting telemetry data to Red Hat. If you use a Red Hat pull secret, you'll be providing telemetry data.

Which would be awesome, because we would love some OKD-ness to show up in that as well; personally, I'd love to see it. One of the things with an open-source project is that there's no gatekeeping on OKD at all. We know there's a ton of deployments out there and a lot of interest, but other than the working group and people asking us questions in the Slack channel or on various support channels, we don't really know a lot about how OKD is used in the wild. So later I'll share a survey, and if you are using OKD or planning to, please fill it out; I'll share it with all the videos as well.

The other thing someone asked about was the release cycle, the difference between OCP's release cycle and OKD's. Do you want to talk a little bit about that?

Yeah, sure. We don't adhere to the OCP release cycle at all, really. We just wait for OCP to become stable before going to the next minor version: the switch from 4.5 to 4.6 we'll be doing at the same time as OCP. We won't go ahead and use master before OCP has tested it out enough to call it stable. But we do releases roughly every two weeks from the current stable branch, which is 4.5, and we do that in the weeks alternating with Fedora CoreOS's releases. They do bi-weekly releases as well, so we get a one-week period to see whether the new Fedora CoreOS works well, and one week after that we release the new OKD on top of that new Fedora CoreOS. So it's roughly two weeks for Fedora CoreOS, and it's also roughly two weeks for OKD.

All right, and here's one more that just came in: with the default cluster monitoring in 3.11, there is no way to modify the alert rules within Prometheus. Is this something that will be possible in the 4.x releases?

I'm not an expert on the monitoring side, so I don't know the answer to that question, unfortunately. I will find someone who can answer that for you, Steve, if you want to hang out for a little bit. Do you know that answer, Charles, by any chance?

I can venture an uneducated answer. The solution in 3.x was to provision additional monitoring infrastructure for your applications; what was provisioned with the cluster was intended to be cluster-specific, and so it was never meant to be modifiable. In 4.x I believe it's a similar situation, but there's an operator for that, and you can provision additional rules within the configuration of that operator. I know that when you stand up a 4.x cluster out of the box, you can start creating additional alerting rules.
I don't know if you can create them for applications or if it's still infrastructure-specific, but there is an operator you can deploy that will enable that. And with that said, I'm prepared to be wrong.

Thanks, Charles. Charles is one of our long-standing working group members, and he's a new Red Hatter, so welcome to the fold.

Yeah, I'll be playing the newbie card for at least a month. Maybe two.

Maybe two. I'll let you get away with two, except you've been deploying OpenShift for a long time, so I'm not sure you can do that much. He's actually gotten OKD running with Eclipse Che, so if we get some spare cycles today I may make him demo that as well. Watch out for that.

Let's see. Peppin is asking about a GitHub issue he's referring to in the chat.

Yeah, I think that's just the fake pull secret that you can use. You essentially need a JSON with the auths field, and then at least one entry in it, called "fake" here, but it could be any name, with an auth field in it, and then anything in there. It's not going to be checked; you could call it "fake news" or something and it'll be fine.

So how is your cluster, the cheap one?

It's still running; it hasn't reported back to me. It still says "waiting up to 40 minutes for bootstrapping to complete," but as soon as that changes, I will definitely let you know.

Well, we will fill the time, and this is going to be the interesting thing the whole day, because basically I told everybody to try to do live deployments. So filling that time while we're waiting for OpenShift and OKD to do their thing will be interesting. If you have more questions, please do ask them in the interim. What we'll probably keep doing is I'll keep asking Christian and Sarah; unfortunately Vadim is not available to play with us today, but there are a lot of other folks on the line who will be coming on today with different levels of expertise and different platforms.

Yeah, and there you go: "destroying bootstrap resources" is what's on the screen now. That usually happens when it's successful, so let's see whether we'll have something to look at here soon.

Perhaps you can tell us a little bit, Christian, about what you're working on now for the next release of OKD?

Oh, yeah, sure.
So so that took its time But now we're finally able to to just move everything to spec 3 So yeah, that's gonna be good And it's also gonna be less of an effort to maintain I think and that's introduced quite a few Bugs we've seen in the past that we because we didn't have this rigorous the rigorosity of checking on our forks is just not that big But yeah with that moving to be essentially the exactly the same code We're not gonna have a problem there on the machine config operator side and then I think we will we're not gonna be able to Merge the installers in the next version, but hopefully the one after that And another thing we're gonna focus on is really bringing all the operators from From the operator hub and all the operators that are supported on red hat on on OCP Bring a community version of those Into okd Not by default of course, but kind of have them in a operator hub catalog and have them installable on demand That I think that'll be the next big community effort that we've been talking about in the working group meetings Is to review those prioritize them and start working pecking away at that list of things So there's a couple other questions coming in and I know everybody keeps asking me for a comparison matrix between okd OCP and ok and yes, we will get one somewhere I think I saw one for OCP versus ok so just repeating myself from earlier and I will endeavor to see if I can pull that one out of Somewhere in corporate marketing, but I haven't done one comparing okd yet. No, I'll get that on that James is asking a question which is probably gonna get asked a lot today is what's causing the slowdown in the And what could be done to make the deployment faster? Yeah, so that is a big issue and because we're installing quite a lot, so it's not OpenShift is not a minimal cluster per se. We have a lot of operators Just a lot of resources we apply to the cluster that you know help manage itself essentially Which makes it so stable, but it's also kind of big because of that and just there is some So, yeah, this seems to be a problem right now with the Well, it may still it may actually still come up It's gonna try for some time now to to grab the API So, yeah, we're working on that. It's a long-term goal within all of OpenShift not just okd also the product Yeah, but because we have the demand of really being secure and Having a big feature set by default Yeah, it's not really super close by now to to minimize that footprint. It's a long-term thing And that is yeah That is I think what I can say to that There's one question and I don't really know the answer to it Personally, how and when does red hat engineering use okd versus OCP and non-production or production and and right now You're on the red hat engineering team. Is there any okd use inside there other than testing? I don't think we run any services on okd right now We do have Something planned in collaboration with with the fedora community to have a cluster there I mean internally at red hat. There is we have our SRE team that manages our own clusters and and the customer clusters that are managed and Those are all OCP. So I'm yeah, I don't think we we have that Right now, but with the fedora community We will have at least one quite big cluster at some point in the future for things to test out But I myself don't have a lot of ops experience to be honest. 
I usually just develop and write the code, and then we have this great CI system that really tests everything out. But I think, especially with the plan to test these OKD-to-OCP upgrades, kind of upgrading into a subscription, we'll maybe see more OKD usage within Red Hat as well.

Now Paris is asking the question that everybody asks as well: what is the status of OKD-ready containers? Maybe, Charro, that was kind of the inference with the Eclipse Che demo, but maybe you've got some insights.

If the question was specific to CodeReady Containers for OKD, sort of the minimal single-node cluster that you just download and run: I know there is work progressing toward that, but I don't know what the current state of it is. I haven't looked in a while, but I know that Praveen, one of the leads on CodeReady Containers, was actually working on something so that the same thing would work with OKD.

We'll see if we can get a status out of Praveen. The workaround so far has been these simple single-node cluster installs, and some of the work that hopefully we'll get some time for Charro to talk about: using Eclipse Che with OKD.

Just to add to that real quick: I think we've had a proof of concept for CodeReady Containers on the base of OKD. I just don't think it's been decided by the team that actually does that to deliver it continuously. Maybe we should push on that a little bit as well. It's not that different, especially if you run it on a laptop, and I can see why people would want it, but for the CRC team, which has limited resources, it may be difficult to deliver on that right now, even though I think we should still follow up on it. It's not really a thing that exists right now; there has been one testing release, but it's not like they do that for all our releases.

We also have a proof of concept for actually upgrading a single-node cluster, which has been one of the limitations: if you download and run CRC, it effectively has a limited life, and you need to pull another image of it. Hopefully we'll be getting some progress on those things in the future, so that it feels more like the Minishift experience that people were probably used to. But going back to the previous point about the enterprise class of OpenShift: we'd need to strip it down a bit more to get it to a Minishift-like state. It comes with a lot of enterprise features that were left out of Minishift, and we do know that's a holy grail. And those features are all part of OKD, by the way; we don't strip out anything for OKD.

Mike Rochefort is asking if the CRC lifespan stems from the renewals.

I think it stems from the fact that you cannot upgrade the single-node cluster, so you'll just have an outdated version at some point, and you can't migrate to the next version with the way CRC is right now. Let's see what other questions might be coming in.

Mike also says he remembers, early on, a new image being pushed about every 30 days. Is that for CRC, so people can continue to use it?

Yeah, I think CRC tries to get one release out per OCP release right now, and they obviously don't do as many releases as we do. Having all of OKD's releases also done as CRC may be a little too much for that team; I don't think that team is too big.
I don't think it's too big But yeah, we should definitely get to a point where we can just get side-by-side releases off Based on okd and OCP All right checking back in on that deployment is it going any faster now Do we need to buy you new hardware? Yeah, well, it's it's not running on my Laptop, right? It's I'm just polling the API here, but it's not up yet So, yeah, let's hope let's hope it'll it'll come up Still have around 20 minutes for it to finish I guess or even more it said wait up to 40 minutes So after that it'll it'll just cancel the thing but yeah So Mike is asking sort of what the purpose is of today and Whether or not it's just a day-long prep for kubcon and it's we normally we do an Openshift Commons gathering the day before kubcon and Because kubcon changed its date so many times virtually I decided that I wasn't going to try and keep up with them and we're in scheduling and And because okd did a GA release then We will We decided that we were going to celebrate by forcing everybody on the working group to do a demo of their favorite platforms or whatever They had access to hardware or clouds to do for a day-long thing one to capture some of the the videos and the how-to bits of it for our our website and for our YouTube playlists But also really to give a little bit more build a little bit more awareness of okd out there in the universe It's not the most well-known kubernetes distribution at the moment, but hopefully we'll get there The Openshift is pretty damn popular these days so and Someone's asking you should have used a more powerful AWS flavor Would not have been cheap you're right But that is going to be the the next demo which Christian is going to do which is going to go over again Which is why the whole day we say is very fluid Yeah, we can actually I mean it's not going to be that different because The only thing I'll just leave out the editing of the install config, but other than that. It's really the same So maybe maybe we can maybe we can drop it. I don't know I can do it again, of course But we'll have another half an hour wait period Let's let's get through the cheapest and then maybe with the AWS One you can just leave that running and we'll come back to it sometime after Charo starts But and that will be a good way to segue into what Charo is going to do next That sounds great We'll get one done the cheapest and That way because in the working group we tend to have the open source folks who don't have the biggest budgets and Or or the most permission to use their hardware, but I think that the goal is to give someone Lots of alternatives and ways of doing things Let's see how about if you share your screen one more time and let's see where you're at So it's still just pulling the API to see whether it comes up It'll do that for up to 40 minutes Or actually 30 minutes in at this stage And this is yeah, it has bootstrapped and now it's just waiting for For the one control plane node we have to come up That'll Do a few few reboots because it'll pull down the current or the Fedora core OS version we've referenced here in our payload and Pivot into it's called pivoting with the way we do it. It's the rpm os tree image will be Or the rps rpm os tree comet will be delivered to the cluster to the node in a container And then we have a binary the machine config daemon Which has a command pivot and that'll unpack that rpm os tree commit from the container Put it onto disk and reboot into it. 
And this can take a little while. We're getting close to the 30 minutes now, but soon it'll either say it failed or timed out, in which case it'll destroy the resources, or it'll say success and give me the domain to log into. Unfortunately there's not a lot to see while this is going on; it's just trying to get onto the API.

In terms of faster deployments, how does this compare to, like, vanilla Kubernetes? Have you tried? I'm just curious, because I know there are a lot of extras in OpenShift.

Yeah. I mean, it's a one-time thing, the install, right? When you scale up nodes after that, it's much quicker. But our initial install does take longer than just a vanilla Kubernetes, I think. That may be up in about, I don't know, maybe 10 minutes, and we're taking half an hour, 40 minutes. So we do have a lot of space for improvement here, and it's definitely a thing our customers on the product side want too. It's just that because it only happens once, at the very beginning, it's not that super important. Obviously it is a little bit annoying, especially if you do presentations like this, waiting that long. We're on it.

And, you know, Mike is making a great suggestion: the installer needs to be more exciting and engaging. I think someone should take up the challenge of doing an ASCII version of the panda and inserting it into the installer somewhere.

That'll be the next one: some ASCII art in there.

And we're here! Patience pays off. So let's see how this turns out. It is an HTTPS encrypted connection, but it is self-signed, so you'll have to accept the risk and continue, and ta-da, here we are. This is our one-node minimal cluster, and yes: "cluster has overcommitted CPU resource requests for pods and cannot tolerate node failure." I may have chosen an instance type a little bit too small. Although, yeah, it came up pretty happy; this worked, nice. So we have one node, and as you can see, it's everything: it's master and worker at the same time, so you'll schedule your workloads on it as well. You probably won't be able to schedule a lot, as it's already running a little bit full.

While we're here, can you go into the operators for a second and show which operators are running in this minimal configuration?

So, yeah, we don't really have any operators installed from the OperatorHub. Well, we do have one, which is the Operator Lifecycle Manager, and that's actually the operator that manages the other operators from the OperatorHub.
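From a terminal, the equivalent sanity check after a create cluster run would be roughly this (the path depends on the install directory you used):

    export KUBECONFIG=okd-cheap/auth/kubeconfig
    oc get nodes              # one node, with both master and worker roles here
    oc get clusteroperators   # all core operators should report Available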
So we only install the core operators, but even the core operators bring quite a big set of functionality. I think in the future the way we'll minimize this is to split a few more of those operators out into operators on the OperatorHub, so they can be installed on demand rather than necessarily always being included. And on the OperatorHub, these are the operators you can click and install. Not all of them are tested yet, or actually, not many of them are tested yet, but this is kind of the next big endeavor of the OKD working group: to ensure we have a broad set of operators here in the OperatorHub that run on OKD and are tested on OKD regularly.

There are two more questions coming in, but I get to press the easy button: there we go, that was our first demo! It ran and you got into OKD, well done. A couple more questions, but in the meantime, if you want to destroy that one and set up your other one, the full one.

Frank asks: is there a link from which I can download the CA bundle for this cluster, to import it into my browser?

The CA bundle for this cluster... so you don't have to click "accept the risk and continue"? Right, well, you could use a CA that is signed by an authority that is accepted by the browsers, and then you wouldn't have that problem; as this is a self-signed one, you just have to trust it yourself. There is documentation about that in the OKD and OCP docs. I'll quickly stop sharing and set up my full environment.

What he's looking for might actually be under the generated files from the installer; it might be somewhere under there, something he could import into his browser.

Yeah, I guess it's probably the case that you'd have to change something in your install config or in the generated manifests.

So this time I'm really just running one command here, openshift-install create cluster, right away. There's no install config prepared here; it's going to generate it. It's going to include the two commands that I just ran separately, create install-config and create manifests, and do all that for you. So you just have to put in your SSH public key, the platform you want to install, the region, then a base domain, which essentially is a domain your AWS account owns, and then you have to choose a cluster name. Let's try "okd-full". Now I have to go copy... do we still have the fake auths thing in the chat? Let me try that. Here we go. Nice. Well, did this not work? Maybe I copied a space or something... oh no, actually, that's running. Okay, perfect. And that's essentially all you have to do to start that install process. Okay, there you go.
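The interactive survey he just clicked through looks roughly like this; all the values shown are placeholders:

    $ openshift-install create cluster --dir=okd-full
    ? SSH Public Key /home/user/.ssh/id_rsa.pub
    ? Platform aws
    ? Region us-east-1
    ? Base Domain example.com
    ? Cluster Name okd-full
    ? Pull Secret [paste the fake pull secret here]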
Perfect. So now we're at the same point again, where we just have to wait, and we have time to answer questions again.

All right, so why don't we do that. Charro, I know you're up next with the bare metal, which always sounds to me like a heavy-metal-band kind of deployment, and I saw the guitars behind you, so it might be appropriate if we pause now, let the AWS thing go, and let Charro queue up for his deployment and share his screen. Thank you very much, Christian, for hanging out with us, and I hope you can spend some more time today, because I'm sure we'll be repeating some of these questions.

Yeah, sure. I'll be here. Thanks.

All right. Do you see a whole bunch of open terminal windows?

I do, and a smiling face, and I'm going to turn my smiling face off. Why don't you introduce yourself and what you're going to demo now.

Okay, I'm Charro Gruver. I am a new architect for Red Hat Services here in the...

[conferencing system: "Enter your conference security code, followed by the pound sign."]

Let's pause for a second, everyone, and figure out who is doing something odd here with sound... and I'm just muting them. There you go. All right, start that again.

Well, like Diane has said a couple of times, these are live demos, so we're fully expecting a Bill Gates moment. It might not be a blue screen, but we might see a stack trace of death and all kinds of other interruptions. I'm Charro Gruver. Like I said, I've been with Red Hat for one week, but I've been a consumer of Red Hat products, both upstream and subscription-based, for most of my 20-year career in IT, so this is kind of the dream job that I never knew I always wanted.

Today what I'm going to demonstrate for you is a deployment of a bare-metal Kubernetes cluster using OKD. This is going to be simulated bare metal, in that I'm actually using libvirt to run the machines, so that you can actually see what's going on, because it would be hard to get you console views of bare-metal machines in my current configuration. This is a user-provisioned infrastructure deployment, so the installer is not going to be provisioning the machines for us; these machines are already provisioned. In this terminal right here I've given you a virsh list view of the machines that are currently defined: we've got a bootstrap node that is not running, three master nodes, and we will have three worker nodes. Throughout this install I'm going to guide you through the process of deploying the cluster, first through the bootstrap process, and then we're going to add the three worker nodes to the cluster.

Now, I'm using Virtual BMC, which is a tool that comes out of the OpenStack world, to simulate the IPMI management of these virtual bare-metal machines.
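For reference, wiring a libvirt domain up to Virtual BMC looks something like this; the domain name, port, and credentials here are examples, not his exact setup:

    vbmc add okd4-bootstrap --port 6230    # map a libvirt domain to an IPMI endpoint
    vbmc start okd4-bootstrap
    # the VM now answers IPMI like a physical machine:
    ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P password chassis power on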
These machines boot into iPXE and, using the MAC address of the machine as it boots, each one pulls the appropriate iPXE boot configuration file, which sets its kernel parameters, the Fedora CoreOS install URL, and the ignition file it's going to start from. I'm using fixed IPs for this particular lab setup, so everything is already provisioned in DNS, and I'm using a Fedora CoreOS tool called FCCT to manipulate the ignition config files and inject the IP configuration into each of the hosts. I've got all of this written up in a little tutorial on my GitHub page, which we can provide a link to. But without further ado, we'll go ahead and fire this thing up.

So the first thing I'm going to do, over here in the left terminal, is power on the bootstrap node and attach to its console. What we're going to watch here is an iPXE boot; it's a chained boot. It first pulls a boot.ipxe file, which is what's being served up via the DHCP server for it to pull from TFTP; that then chains it to look for a file that is named after its MAC address. It pulls that file, and you can see it got its kernel and its initial RAM disk. The kernel parameters that were passed to it gave it its instructions for installing Fedora CoreOS, and you can see right now it's actually pulling that FCOS image across.

Now, we've got an HAProxy load balancer, this guy right here, okd4-lb01, that is already running and is configured to sit in front of this new cluster as it comes up. This will take a little bit, with the scrolling logs; like I said, it's pulling down the image.

One other thing I'll point out, while we're waiting for the bootstrap node to complete its install, is that we're also doing a mirrored install today, which hopefully makes this go a little faster than pulling all of the images across the wire. What I have is a local instance of Sonatype Nexus that I have mirrored all of the images into, if you can see this eye chart. So the install is actually going to pull its images from the Nexus. Right now I've got quay.io in a DNS sinkhole so that it can't resolve, and because it can't resolve, the installer is going to assume it's an air-gapped installation and pull from the configured mirror.

All right, Fedora CoreOS is booting now. It's going to overlay the rpm-ostree, and when it finishes, it will boot one more time and start the bootstrap, which we will watch right here. Okay, it just finished the ostree overlay and now it's coming back up. When it completes booting and begins the bootstrap, I'll fire up the master nodes. I'm just running a little script here that does an ipmitool command against those three master nodes and starts them up, and the fans on my little Intel NUCs just lit up hot.

In the top right corner here, I'm going to run the openshift-install command and direct it to monitor the bootstrap process. If you do this at home and monitor the logs like this, don't be alarmed by these "failed, failed, failed" entries that you see coming out: this is the bootstrap process waiting for its resources to go live, and it will continue to loop until the various resources come up. And you can see the API just came up, so our API is now live, and we're waiting for the bootstrap process to complete. Down here in the bottom right corner, we're just tailing the journalctl logs of the bootstrap process itself. All in, this takes about 10 minutes, from the bootstrap node firing up to the bootstrap process completing.
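The monitoring command in the top right is presumably the standard one from the UPI docs, something like this (the directory is a placeholder):

    openshift-install wait-for bootstrap-complete --dir=okd4-install --log-level=debug
    # and once the bootstrap node is retired:
    openshift-install wait-for install-complete --dir=okd4-install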
The installation itself will complete after about another 25 minutes, so we've got some time now to take some questions if folks want.

James Kassel is asking from Twitch: is the sinkhole necessary to use the mirror?

I think it still is. It has been for a while: if you don't create the sinkhole and the node can resolve the external host, it will pull the images from quay.io. That's why I created the sinkhole, to simulate a disconnected install where we're behind a bunch of firewalls and proxies that prevent my nodes from having direct internet access.

A couple of questions, just to double-check: the link to the documentation on this is the same as the stuff you did in the okd4-upi-lab-setup?

Yes. There's a new branch called "ipxe". When we're done today, I've got a little more cleanup on the documentation to do, but then I'm going to merge that branch into master. The old tutorial, the CentOS 7-based one: I've branched master to a centos7 branch, so anybody that's still running CentOS 7 would want to use the centos7 branch. I've upgraded my entire lab to CentOS 8 and have enabled iPXE even for the hardware, for the bare metal itself, so that just by creating an iPXE boot file with the MAC address of, you know, a new piece of metal, all I have to do is plug it into the network, click the power button, and it will provision itself with whatever personality I want it to have.

Just checking the other feeds here; the other feeds are a nanosecond behind us in Blue Jeans. And Brian Jacob Hepworth is saying that he really likes the Fedora CoreOS news.

So is this going to take us another 20 or 30 minutes here?

Well, as soon as the bootstrap completes, we'll be about 23 minutes out from completion. The bootstrap usually takes about 10 minutes in this environment.

Okay, I'm going to do another pitch for people to join the OKD working group while we're waiting, because that's what I'm charged with: getting more folks in. So if you're liking what you're seeing here, or if there are features missing, or other platforms we should be demoing or testing on, or that you're using OKD on or wishing to, please join the OKD working group. The mailing list is here, I just put it in the chat, and it is an open Google Group. We meet bi-weekly, and we have a meeting tomorrow. I'll throw in the Fedora calendar link here too. And Ashraf, thanks for joining us; we will do the Azure one that you requested earlier. That's our second-to-last demo today, I think.

All right, the bootstrap is getting close. Okay, the bootstrap has succeeded, and it's going to wait just a little bit longer to send the event... there it went. So the bootstrap is now done. You can see in the middle terminal that we do have three master nodes that are live. I'm going to now remove the bootstrap node, and I'm going to take it out of the HAProxy configuration as well, so that we will forget everything we know about the bootstrap. Now we'll watch the install complete. All right, so we are working towards 4.5.0 OKD. Awesome sauce.
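The load balancer piece he's editing would look something like this in haproxy.cfg; names and addresses are examples, and retiring the bootstrap just means deleting its server line from the API backend:

    frontend okd4_api_fe
        bind :6443
        mode tcp
        default_backend okd4_api_be

    backend okd4_api_be
        mode tcp
        balance roundrobin
        # server okd4-bootstrap 10.11.11.49:6443 check   # removed after bootstrap completes
        server okd4-master-0 10.11.11.60:6443 check
        server okd4-master-1 10.11.11.61:6443 check
        server okd4-master-2 10.11.11.62:6443 check

Similar frontend/backend pairs cover ports 80 and 443 for ingress, which comes up again later in the demo.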
It will say 42 percent complete Here in a minute it may barf a couple of errors as Some of the resources restart And it will also reset the clock so it's It plays with you a little bit You'll get up to 74 percent complete and then all of a sudden you'll see 12 percent complete and then it will quickly wind its way back up I'm making a bold assumption here that that is actually the result of It monitoring some of the resources that through this process update themselves And so that percentage of complete becomes a little bit variable So if you if you see that Running this at home. Don't be alarmed. It is actually Working towards completion and you need to be patient because from this point it does take about another 23 minutes 23 minutes Well, you want to talk a little bit while you're doing this about the work you're doing around chay Um, sure. Well, actually it wasn't it turned out not to be much work at all And in fact, if if we end up with enough time um, I can I can deploy a Hyper converged seph instance into this cluster to give us a storage provisioner because that that's really I think I think the folks that might have struggled with getting Eclipse chay up and running is that it does need persistent volumes both for Postgres The it deploys an instance of of postgres to support an instance of key cloak that provides The identity provisioning identity management for your eclipse chay environment But the workspaces themselves also require persistent volumes You can probably make it work with ephemeral volumes. Just understanding that if those pods ever got evicted You lose everything which would be significantly detrimental to your Postgres instance So so it does require that you have some kind of a persistent storage provisioner I had done it in the past in older three dot 11 clusters with ice casing But now with with the seph operator using the rook operator to deploy seph. It's much much easier And something else i'll i'll mention here I'll run this again So you see we've got three master nodes that are running, but they're also designated as worker nodes That's an artifact of how we're provisioning here because the install config that we used Does not designate any worker nodes So the installer by default Makes the masters schedulable Um when the installation is complete, that's something that that we're gonna we're gonna change We'll add the three worker nodes And then we will make the masters unschedulable All right Fernando is asking is it possible to specify a different ignition version during the dine g or dine I'm gonna say that wrong again dot ignition files creation I don't think so. I believe the It's it's not possible. Yeah, we're stuck with one. You should at this at this time You should always be using ignition version 3.1 point zero for everything slight correction Ignition spec version 3.1 point zero I was about to say I'm pretty sure there's more than that the ignition versions don't match the spec version at all Yeah, it's ignition v 2.x with spec v 3.x and our current spec config spec version is 3.1 point zero So for the ignition config Always use the spec version 3.1 at this time We should probably just bump the ignition versions just to make this a lot less confusing Yeah Because there's no particular reason not to as far as i'm aware Just going to introduce that new voice is Neil Gampa from Datto is in the house. So well, hi. 
Just going to introduce that new voice: Neal Gompa from Datto is in the house.

Well, hi. Yes, I just sort of forgot that I hadn't actually been introduced. And I can't figure out why it's saying the camera isn't in use... whatever, the microphone works; I'll figure out why the camera doesn't in a little bit. I'm a DevOps engineer at Datto. I'm here as an OKD working group member, and I'm going to be assisting Dusty in a little bit, once he and I get to our part of this OKD deployment fun, where I will just talk randomly while Dusty pushes buttons and stuff.

But yeah. Here, I'll walk you through a few of the things that were prepared ahead of time; I said a lot of words to describe it. Especially the way I'm doing this, with fixed IP addresses, one of the things you have to provision is DNS records, a few key DNS records. You can see I've got in here the provisioning for several different clusters that I run, but this is the one we're presently looking at, right here. Each of the master nodes, worker nodes, and etcd nodes requires an A record. The master and the etcd obviously share the same node, so they're going to have A records with the same IP address. You also need three SRV records for etcd, and then you need a PTR record for reverse lookup for each of the physical nodes, so your masters and your worker nodes. As you can see, the DNS setup is not onerous, but it is necessary.

Here, I'll show you: I'm using an OpenWrt router, actually a travel router, to provide my DHCP and iPXE capabilities. So the boot.ipxe, as you can see, is very simple. I'm echoing some information just to make sure the right host booted, and then chaining in an iPXE file that is literally named after the MAC address, with hyphens replacing the colons. There's one of them right here; I believe this one is for one of the worker nodes. And this right here gives it the kernel parameters necessary to boot: it tells it yes, we want to install CoreOS; where to install CoreOS to; where to get CoreOS from; and which ignition file to use. And that's really the secret sauce there.

Not very secret. You just kind of told the whole world.

I did, I know. I've already published it in my GitHub, so...
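The two files he's describing look roughly like this; the URLs, device name, and MAC are illustrative, not his exact files. First the generic boot.ipxe that every host pulls, which chains to a file named after the host's MAC address:

    #!ipxe
    echo Booting ${mac}
    chain --autofree http://10.11.11.1/ipxe/${mac:hexhyp}.ipxe

Then the per-host file, which carries the Fedora CoreOS install instructions as kernel arguments:

    #!ipxe
    kernel http://10.11.11.1/fcos/vmlinuz coreos.inst=yes coreos.inst.install_dev=sda coreos.inst.image_url=http://10.11.11.1/fcos/fcos-metal.raw.xz coreos.inst.ignition_url=http://10.11.11.1/fcos/worker.ign
    initrd http://10.11.11.1/fcos/initramfs.img
    boot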
In theory, at 84 percent complete... I expected it to reset the clock at least once while it's doing this.

How do you determine these percentages? I don't see anything on the screen that would tell me percentages... oh, right here. Okay, there it is. It helps when you highlight it; there's a lot of word soup on screen.

Yes, there is. And this is how I keep the install from being boring: I give you lots of journalctl and logs to look at, because otherwise there's not a lot to look at.

So how did you come up with this setup? I mean, you're doing the bare metal, right? Because I remember that bare metal is, like, the least fleshed-out deployment method of them all, so the fact that you came up with something is impressive all on its own. That's worth the story, I'm sure.

Oh gosh. Back at the end of 2017 I got addicted to the Intel NUC machines. You know, those little form-factor boxes are not cheap, comparatively, but considering the amount of compute you can pack into one of them, for a home-lab setup they are pretty affordable. And if you buy the right chipset, you can put 64 gigabytes of RAM in one of those little suckers. So you get one with a Core i7; the newest ones, the 10th generation, have six CPUs, so you've got 12 vCPUs available and 64 gigs of RAM, and you can run quite a bit on them. My idea was actually to get an OpenShift cluster running on the NUCs. And then I stumbled across this thing called nested virtualization with libvirt. Well, I don't do OpenStack, but I had a curiosity about it, and that's how I came across Virtual BMC. So I decided to basically bump it up a level and use libvirt virtual machines with Virtual BMC to simulate bare metal, and then it was just sort of "I want to make this work," so I powered through making it work, to get a bare-metal install of OKD up and running. I submitted a few tickets to the Fedora CoreOS team, and they were very, very gracious to help out somebody that didn't know what he was doing; I had never touched CoreOS before, so that was quite a bit of a learning experience.

And thanks for being part of the community.

Dusty and those guys were incredibly helpful. And so it's kind of evolved from that point. The latest iteration now uses the FCCT tool to inject some customization into the machines.

Actually, while we're still waiting for that... oh hey, quick, here's the reset I was talking about. See how we went back to zero percent complete? Don't panic. I don't know why it resets the clock like this; maybe somebody in engineering could tell us. But it is still progressing, I assure you.

That is very confusing and kind of frightening. Actually, it looks like it resets after it downloads an update, so it probably loses all of its state when it does that.

Yeah, that's my suspicion, because it does go through several iterations of updating some operators.

So it's just probably losing its state every time that happens. Which is unfortunate, and I'm not sure if that makes sense, but it's the best I've got. It still works, though.

That's the important part. So don't freak out when it goes from 80 to 90 to zero.

So right here, if you guys can... I don't know if this is readable, but you can get to my GitHub page.

Could you zoom it up just a little bit? Zoom it up one level. There we go, now it's readable.

This is a shell script that I wrote that actually does the provisioning of the, quote-unquote, bare metal for me. And right here is a YAML file that gets created, where I'm injecting the customizations that I want each of the machines to have. In this case, what I'm doing is creating basically a rename of the primary NIC to nic0, so that it doesn't come up as some funky "enp blah blah blah blah blah." I don't want it to be merely predictable; I want it to be predictable and known. So I'm using the MAC address of the machine to explicitly name that network interface device nic0. That way I always know what it's going to be and where it's going to be, and then I inject its specific configuration into that. So I'm setting its name server, its domain, its IP address with the netmask and gateway, and I'm also injecting its hostname, so that it persists its hostname.
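A minimal FCC (Fedora CoreOS Config) along the lines of what he's describing might look like this. The MAC, hostname, and paths are illustrative, and in his real setup this content gets combined with the worker or master ignition from the installer:

    variant: fcos
    version: 1.1.0
    storage:
      files:
        # rename the primary NIC to a known name, keyed on its MAC address
        - path: /etc/systemd/network/25-nic0.link
          mode: 0644
          contents:
            inline: |
              [Match]
              MACAddress=52:54:00:a1:b2:c3
              [Link]
              Name=nic0
        # persist the hostname
        - path: /etc/hostname
          mode: 0644
          contents:
            inline: okd4-worker-0.example.com

FCCT then transpiles that into a spec 3.1.0 ignition config, with something like:

    fcct --pretty --strict worker.fcc --output worker.ign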
There's a bunch of other stuff the script does, and one thing I am going to do is add better comments to it, so that if any of you are looking at how this thing is working, you'll understand what each of these sections is doing.

All right, we're back up to 84 percent complete. At this point I'm going to go ahead and fire up the worker nodes; it is safe to do so now, and I actually could have done it a while back. So I'm sending each of them an IPMI command, with a 10-second pause in between each one, just so they don't slam my poor little travel router with DHCP and file-pull requests at the same time. I'll go ahead and watch one of those guys boot up. There's one of the workers. It's going to do the same thing you saw the bootstrap node doing: it's pulling the CoreOS image right now, and then it's going to go through the same process, except that once it processes the initial ignition, overlays the ostree, and starts its process to join the cluster, it's going to get its ignition file from the cluster, and that will give it the personality of a worker node. If you watch the left-hand side of the screen closely, you should see it hit a point where it's waiting, and then you'll see it very quickly pull that ignition config, and at that point it will start to join the cluster. There it was, right there, the start job. And there it goes: it got its ignition, and so now it's booting up and asking to be a worker node.

Just to give you a quick update on the AWS cluster: it's still waiting for the cluster API to come up. I do have to leave now for 15 or 20 minutes; I'll be back after that, and I hope my cluster will be up by then. I'll see you in a little bit.

All right, see you in a bit, Christian. Our cluster is up!

Awesome. It gave us our initial password, so let's go ahead and log in and prove to the world, hopefully, that this little guy is alive. And as before: self-signed certs, so in whatever OS and browser you're using, you are going to have to accept those certs. It's okay; self-signed certs are fine.

All right. Now, it creates a temporary cluster administrator for you, and it dumps that password at the end of the install process, which you can use to gain access to your cluster. And there we are. There will still be some operator updating going on, and your control plane will still be settling out, but at this point we have a live cluster.

If you'll indulge me for a few minutes, we'll go ahead and finish adding the worker nodes, and then we'll do a couple of housekeeping things on our cluster. So, you see we've got some pending certificate signing requests. That is also an artifact of the way we're doing this user-provisioned infrastructure install: it's not automatically going to approve those certs, because it doesn't necessarily trust anybody that wants to join the cluster. So I'm going to approve those certs, and there should be another batch of three; they're going to come up pending. Yep. And so now we have three worker nodes. They're not ready yet; they're still completing their own personal bootstrap, and that'll take another minute or two for them to come live.
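Approving those pending CSRs from the CLI is typically something like this:

    oc get csr
    # approve everything currently listed (fine for a lab; review each in production)
    oc get csr -o name | xargs oc adm certificate approve

Each joining worker generates two rounds of CSRs, which is why a second batch of three shows up shortly after the first.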
And I'm going to do a couple of housekeeping things here. One is I'm going to remove the samples operator, because, unless something has changed recently (unfortunately Christian isn't here; we can ask him later), the samples operator won't be fully functional when you don't have an official Red Hat pull secret, and it can in fact impede updates to your cluster. So I yank it out; I'm not using it anyway, at least at this point.

I'm also going to create ephemeral storage for the image registry, because it will also be in a Removed state since it doesn't have a persistent volume. So I'm patching its configuration with an emptyDir specification in place of a persistent volume. And I'm going to create an image pruner to run at midnight every night, because the console will gripe at you until you have an image pruner configured. So anything older than 60 minutes it's going to prune, at midnight every night. Or 60 days, rather.

60 minutes would be aggressive.

Yes. I mean, I don't know what kind of storage you have, but 60 minutes might be appropriate if you basically only have enough for the cluster itself to run.
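Roughly, that housekeeping translates into these commands; the pruner retention settings are my own example values:

```bash
# Remove the samples operator; without a Red Hat pull secret it won't be
# fully functional and can hold up cluster updates.
oc patch configs.samples.operator.openshift.io cluster --type merge \
  -p '{"spec":{"managementState":"Removed"}}'

# Give the image registry ephemeral emptyDir storage so the operator comes
# out of the Removed state. Fine for a lab; not for production.
oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  -p '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'

# Nightly image pruner so the console stops nagging; adjust the retention
# to match the storage you actually have.
oc apply -f - <<'EOF'
apiVersion: imageregistry.operator.openshift.io/v1
kind: ImagePruner
metadata:
  name: cluster
spec:
  schedule: "0 0 * * *"
  keepTagRevisions: 3
EOF
```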
And there we are. Yay, cluster! Okay, now, huge caveat: our masters are still schedulable, and our workers are schedulable.

That's not bad.

Well, it's not, but there is a gotcha in here, which of course I never tripped over. Your ingress pods will deploy on a schedulable node. Well, what if your load balancer is only configured to look at certain nodes? Here you see I've got port 80, port 443, and port 6443 all directed to the master nodes. If those ingress pods get evicted and reschedule themselves onto a node that is not in your load balancer configuration, then you will lose access to your cluster. Important safety tip.

So the key here is either to span your load balancer, which I don't really want to do because that's a lot of extra cruft in the load balancer configuration, or to designate some infrastructure nodes, and that's the path I chose to take. So what I'm going to do real quick is designate my master nodes to also be infrastructure nodes.

Why doesn't it do that by default?

Because the best practice is to create a couple of worker nodes that you set aside as infrastructure nodes.

Why? I don't know. Good, okay, just making sure, because I've seen these recommendations listed in the documentation, but there doesn't seem to be any particular reasoning to back them up. Historically speaking, I've seen clusters typically use the masters as infra nodes, because that way they handle essentially the stuff that keeps the cluster itself running, and the worker nodes are free to do work on developer and user workloads.

Yeah, I think one of the things you need to consider is how beefy you make your master nodes. If you've got heavy, heavy ingress operations, then given everything else the master nodes are doing, that might be a little overwhelming for them. In my particular lab environment the master nodes are heavyweight enough, each of them has 30 gig of RAM and six vCPUs, so I feel pretty confident designating them as infra nodes. Once you run this label on them, you then need to patch the scheduler so that the master nodes are no longer schedulable. You'll see: right now they are infra, master, and worker nodes; when I run this, now they're just infra and master nodes. At this point nothing got evicted off of them, so if you want to boot things off of them that you don't want running there anymore, you need to either go through and evict all the pods running on each of those nodes manually, or reboot your master nodes, which is a bit more aggressive a way of doing it.

Now I'm going to patch the ingress operator to tell it that it's okay to run on those master nodes, and if you can read the eye chart here, I'll explain what it's doing. It's setting a node placement policy with a match label for the infra role. That alone is not enough: you also have to set some tolerations, because the master nodes are now tainted. So you need to give it a toleration saying it's okay to run on a node that has the NoSchedule master taint.

And now that that's done, you'll see the ingress pods: one of them is terminating, and there's a new one running that's not in a ready state yet. As soon as that one is in a running state, the second one will begin terminating. Don't panic if your other one sits in a pending state for a while: there's an anti-affinity rule that it won't run on a node that already has an ingress pod running on it, so it has to wait for one of those terminating pods to finish before it will schedule on the master node.

Wow.

And so there you go. Now we've got one running, one pending, and two terminating, and it will remain in that state until one of the terminating pods completes; then the anti-affinity rule can be satisfied and the pending pod will also deploy. These take a while to terminate because they're shedding load; they're gracefully shutting down. Okay, there you go: one of them is done terminating, and we now have two running ingress pods. One of them is in a ready state; one of them is still bootstrapping.
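In command form, that whole infra-node dance is roughly the following; the node names are placeholders for your actual masters:

```bash
# Label the masters as infra nodes (node names are placeholders).
for node in okd-master-0 okd-master-1 okd-master-2; do
  oc label node "$node" node-role.kubernetes.io/infra=
done

# Stop scheduling ordinary workloads on the masters.
oc patch schedulers.config.openshift.io cluster --type merge \
  -p '{"spec":{"mastersSchedulable":false}}'

# Let the default ingress controller land on the tainted masters: select
# infra-labeled nodes and tolerate the master NoSchedule taint.
oc patch ingresscontroller default -n openshift-ingress-operator \
  --type merge -p '{"spec":{"nodePlacement":{
    "nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}},
    "tolerations":[{"key":"node-role.kubernetes.io/master",
      "operator":"Exists","effect":"NoSchedule"}]}}}'
```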
The last thing I'm going to do is get rid of that kubeadmin account, because its password is sitting there in plain text in your installation folder.

Oh, so it does get written down somewhere. I was going to ask: do you have to make sure you save that output text, or will it be somewhere you can get to it?

Yeah, if you look at the directory you used for the installation, there are the Ignition files it created and the metadata, and it also creates an auth directory. In that auth directory it creates an initial kubeconfig, which you can load to get access to your cluster directly from your command line, and it dumps that plain-text password right there.

But if you get rid of the kubeadmin user, doesn't everything that links to the kubeadmin user break?

It's a temporary account. So here's what we're going to do. I created an htpasswd file ahead of time (my tutorial has instructions for how to do that), so I've got an admin user and a dev user with passwords already in there. You saw me just create a secret right here: I created a secret in the openshift-config namespace called htpasswd-secret from that file. And now I'm going to apply a custom resource that I've already... here, let me... so this is the custom resource we're going to apply. It's setting up an htpasswd identity provider and linking it to that secret we just created, the htpasswd-secret. So I will apply that. It complains that I used apply instead of create, but I'm just in the habit of using apply to update objects; you can ignore that complaint. And then the last thing I need to do is take this admin user, which I just set up a secret for but which doesn't exist yet, and give him cluster-admin rights. And now I'm going to be brave and delete...

Well, it also says the admin user doesn't exist.

That's correct, but it creates it in the background.

What?

Yeah, it's not intuitive or obvious, but it does, and it works.

Okay. And so there we go: I just logged in with my new, somewhat more secure cluster-admin account, and you can see our four green check boxes. We've got a happy cluster. It will complain about alerts until you set up a Slack channel or something to send your alerts to; it's actually pretty easy to do, you create a receiver and walk through it. But I have used up most of my allotted time, so I'll stop playing now.

I think the playing is fine. No, I'm going to give you that: that was the easy button. All right, well played.
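Condensed into commands, that identity-provider swap looks roughly like this. The usernames and passwords are placeholders, and you should only remove kubeadmin after confirming the new login works:

```bash
# Build an htpasswd file with an admin and a dev user (placeholder passwords).
htpasswd -c -B -b users.htpasswd admin 'S3cretPassw0rd'
htpasswd -B -b users.htpasswd dev 'An0therPassw0rd'

# Store it as a secret the OAuth operator can read.
oc create secret generic htpasswd-secret \
  --from-file=htpasswd=users.htpasswd -n openshift-config

# Point the cluster OAuth configuration at it.
oc apply -f - <<'EOF'
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpasswd-secret
EOF

# Grant cluster-admin to the new admin user; the user object itself is
# created in the background on first login.
oc adm policy add-cluster-role-to-user cluster-admin admin

# Once the new login works, retire the temporary kubeadmin account.
oc delete secret kubeadmin -n kube-system
```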
And can you do one more thing for me? Just because I think people keep asking me these questions: go back to the console and show the operators that are installed in your installation.

Sure, I will do that. All right, so you go to Operators, OperatorHub... are there no operators found? Those operators don't exist?

They're installed. I think it may still be updating.

Well, the OperatorHub operator might not actually be up yet.

Yeah, it does take a while; that initial install took us another 23 minutes, and it does take things a while to settle down. Let me show you what it does look like, because I have another cluster that I stood up this morning.

It seems less healthy.

Yeah, I think I did something to upset it. But here are the operators that are available, quite a few. You can see, if you want CodeReady Workspaces, its upstream, Eclipse Che, is in here.

Do you have enough time to try and install the Eclipse Che one?

I'm... especially if you don't mind going a couple of minutes over, because the first thing I need to do is deploy... oh, actually no, I can, because I've already got... let me make sure I've got Ceph deployed in this cluster. So we're going to go to the rook-ceph namespace. Yes, yes we do.

The fact that the rook-ceph namespace exists kind of indicates you have it set up.

Well, it shouldn't exist if you don't have it.

No, it can exist even when the install hasn't completed yet, but, okay, there's that.

So we'll go back to OperatorHub, and we'll find the Eclipse Che operator. And yeah, it's a community operator: if I call Red Hat they're not going to help me with it, but if I go on the Slack channel, they're usually nice enough. Okay, and unless you want to do something different, you click Install. We're going to keep the stable channel, it's going to create the eclipse-che namespace, and we're going to let it have an automatic approval strategy. If you switch that to manual, then whenever the operator wants to install or update, you have to go in and say, yes, you can actually install.

That seems painful.

Well, if you think about it, I'm doing everything as a cluster administrator. If you're not a cluster administrator but you want to request something, that's part of what we've got going on here, because there are all kinds of configurable RBAC capabilities within this thing.

So when you install this operator as a cluster admin, does that mean that anybody who logs in with an account can then instantiate it afterwards?

Absolutely, yes. People will be able to get in and create workspaces. Again, it's got lots of role-based access control so that you can control who can do what. But yes, anybody you've created an account for in this cluster should be able to log into Che, create an account in Che (which will provision them into the Keycloak instance it's going to create), and then they can create a workspace.

So let me switch this real quick to the workloads view. Okay, our operator is running; it is alive. So we should be able to provision a Che cluster, and you see what I did: from the installed operators there are the provided APIs, and that's what I clicked on to get to this view here, where I can now create a Che cluster. It's going to name it eclipse-che unless I tell it to do something else. There are lots of things you can configure in here; I'm going to take the defaults on everything except storage. This is what I was mentioning earlier that I believe has probably hung some people up: Postgres is going to need a PVC, and then any workspace that you provision is also going to need a PVC, which almost requires that you have a dynamic storage provisioner for this to work. So I'm going to give it the name of the storage class. Actually, I'm going to cancel out of this, go down here to Storage, and show you that we do in fact have a storage class; it's a block provisioner that's part of Ceph. When we create our Che cluster, I'm going to tell it to use that for Postgres, and I'm going to tell it to use that for the workspaces. Also note that each workspace is going to get a gigabyte of provisioned storage, which may or may not be enough depending on the type of development that you're doing. That's pretty minimal, so you might want to crank that up to five or ten gigabytes, depending on how big the artifacts built in the code base are and everything else about the development environments you're going to be working with.
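If you'd rather do that from the CLI than from the console form, the equivalent CheCluster custom resource looks something like this sketch; the storage class name rook-ceph-block is my assumption for what the Ceph block provisioner is called in this lab:

```bash
oc apply -f - <<'EOF'
apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: eclipse-che
  namespace: eclipse-che
spec:
  server: {}
  auth: {}
  database:
    externalDb: false
  storage:
    pvcStrategy: per-workspace
    # Default is 1Gi per workspace; bump it for real development work.
    pvcClaimSize: 5Gi
    postgresPVCStorageClassName: rook-ceph-block
    workspacePVCStorageClassName: rook-ceph-block
EOF
```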
So I'll hit Create on that and switch back to the pod view, and you can see it's provisioning Postgres. Hopefully our storage provisioner is working... and we do in fact have a Postgres PVC that is bound, so our storage provisioner is working. Okay, Postgres is running but not ready, so it's still deploying itself. This takes a couple of minutes, and then Keycloak is going to provision itself after Postgres is done.

So now Keycloak is provisioning, and Keycloak actually goes through a couple of phases; it has an init phase that it runs through, so you'll see that pod come up and then terminate and be replaced by another Keycloak pod, which will be your final configuration. And you won't see the Che controller come up until both Postgres and Keycloak have completed their provisioning.

And about how long does that take?

Not terribly long, a couple of minutes.

Cool.

It feels like a long time when you're staring at the screen.

That's all right, I have plenty of coffee today. And Michael has just pointed out: maybe you still have quay.io blocked via DNS?

Oh, you know what, that was a good point. No, I snuck that in while Neil was talking: right here I blasted a command to my DNS server to remove the sinkholes for quay.io and for registry.svc.ci.openshift.org.

I did actually notice, which is why I didn't repeat the question he was asking; I figured it was obvious on screen that you got rid of your quay.io block.

No, I slipped that in and didn't mention it.

Well, now... All right, so Keycloak is bootstrapping itself now, so you'll see some activity there. All right, and there it is. So now you see another Keycloak instance provisioning, and it will take over from the first one here in a minute, as we all wait with bated breath.

In other news, Christian says that his full-blown AWS cluster has finished installation. So when we're done, we'll pop over and let him prove that, and then we'll grab Dusty when he's back and hit up the DigitalOcean stuff.

Okay, Keycloak is running. Any of you who are joining us for the DigitalOcean demo, we'll probably get started on that one a few minutes after the hour. We're running pretty close to on time, which I think is amazing.

Indeed, and we'll probably lose that thread at some point, but hey. And a quick plug for my favorite Java framework, Quarkus. There we go, there's the Quarkus ad. Thank you.

And what does that have to do with this?

Well, once your cluster is up and running, you've got to run something in it, right?

Oh, so you're going to make something with Quarkus. Okay, so, mad programming skills.

Yeah, yes indeed. So the first Keycloak instance, you see it terminating now; it's getting itself out of the way. The plugin registry has fired up now, you see other activity, and there's our Che controller right here that is creating things: we've got a devfile registry, we've got a plugin registry, and as soon as this guy becomes ready... I wish you could hear the fans on my little NUCs.

I wish I had a fan here; the temperature is popping up here in Canada on the west coast. Are we going to hit 32 today?

All right, so all of the resources are up.
They are all in a ready state, and we've had no restarts, which is always a good sign, although occasionally a restart is not necessarily a bad thing. If we click over here to Routes, we have a route for Che, and if I'm brave and open that... okay, now, self-signed cert again. So what you have to do at this point is grab that cert. Let me move a few things around here so you guys don't have to see all the cruft on my screen. I'm going to go here and show the certificate; this is Safari-specific, obviously, so follow the instructions for your favorite browser. Safari is not my favorite, but here it is. Grab that, and then once you've got that certificate, you need to add it to the trust store of your operating system. In my case, I'm going to go into Keychain, drop that certificate into Keychain, and make it trusted. I'm going to do that for you guys here real quick: I'm dropping it into the system default keychain; you see there's an old one from a previous install, so I'm going to pick the one we just downloaded and replace it. Now I'm going to open this up and say Always Trust. Now it's going to make me certify that I am me one more time... ta-da. And I'm going to allow these permissions.

And now it's going to ask you to create an account. Another important safety tip if you do what I did: there is an admin account that Che creates, and I named my cluster administrator "admin", so I need to give this one a different name, or I will cause some pain for myself. And there we go: Che is up and running, ready for your code.

Woo, that is awesome sauce. Thank you very much for that; that makes my day. This is awesome. Yay. Thank you.

Yeah, I think you've just made the entire Eclipse Che community happy too. So, well done.
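For reference, the same certificate-trust dance can be done from a terminal instead of clicking through Keychain Access; the route hostname below is a placeholder for whatever your cluster hands out:

```bash
# Pull the self-signed certificate presented by the Che route
# (the hostname is a placeholder).
CHE_HOST=che-eclipse-che.apps.okd.my.lab
openssl s_client -connect "$CHE_HOST:443" -servername "$CHE_HOST" \
  </dev/null 2>/dev/null | openssl x509 -outform PEM > che.crt

# Add it to the macOS system keychain as a trusted root.
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain che.crt
```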