know you're up next with the bare metal, which always sounds to me like a heavy-metal-band kind of deployment, and I saw the guitars behind you, so it might be appropriate. If we pause now and let the AWS install keep going in the background, we can let Charro cue up for his deployment and share his screen. So thanks very much there, Christian, for hanging out with us, and I hope you can spend some more time today, because I'm sure we'll be repeating some of these questions. Yeah, sure, I'll be here, I'll be here. Do you see a whole bunch of open terminal windows? I do, and I see your smiling face, and I'm gonna turn my smiling face up. Why don't you introduce yourself and what you're gonna demo now? Okay, I'm Charro Gruver. I am a new architect for Red Hat Services here in the Southeast. "You have reached the Horizon Audio Conferencing System. At the tone, enter your conference security code followed by the pound sign." Let me find you. Let's pause for a second, everyone, and we'll figure out who is doing something odd here with sound. It's like... it's Nerlep. Yeah, it's Nerlep here. I'm looking for him and I'm just muting him. There you go. All right, so start that again. All right, carrying on. Like Diane has said a couple of times, these are live demos, so we're fully expecting a Bill Gates moment. It might not be a blue screen, but we might see a stack trace of death and all kinds of other interruptions. But I'm Charro Gruver. Like I said, I've been with Red Hat for one week, but I've been a consumer of Red Hat products, both upstream and subscription-based, for most of my 20-year career in IT, so this is kind of the dream job that I never knew I always wanted. And today what I'm going to demonstrate for you guys is a deployment of a bare metal Kubernetes cluster using OKD.
This is going to be simulated bare metal, in that I'm actually using libvirt to run the machines, so that you guys can actually see what's going on, right? Because it'd be hard to get you console views to bare metal machines in this current configuration. This is a user-provisioned infrastructure deployment, so the installer is not going to be provisioning the machines for us. These machines are already provisioned: if you look in this terminal right here, I've given you a sort of `virsh list` view of the machines that are currently provisioned. You can see we've got a bootstrap node that is not running, we've got three master nodes, and we will have three worker nodes. Throughout this install I'm going to guide you through the process of deploying the cluster, first through the bootstrap process, and then we're going to add the three worker nodes to that cluster. Now, I'm using Virtual BMC, which is a tool that comes out of the OpenStack world, to simulate the IPMI management of these virtual bare metal machines. These machines are going to boot into iPXE, and using the MAC address of the machine as it boots, each one is going to pull the appropriate iPXE boot configuration file that sets its kernel parameters, sets the Fedora CoreOS install URL, and sets the Ignition file that it's going to start from. I'm using fixed IPs for this particular lab setup, so everything is already provisioned in DNS, and I'm using a Fedora CoreOS tool called FCCT, the Fedora CoreOS Config Transpiler, to manipulate the Ignition config files and inject the IP configuration into each of the hosts. I've got all of this written up in a little tutorial out on my GitHub page, which we can provide a link to, but without further ado, we'll go ahead and fire this thing up. So the first thing I'm going to do over here in the left terminal is power on the bootstrap node and attach to its console, and what we're going to watch here is it doing an iPXE boot.
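A quick sketch of the per-host file lookup being described: each machine requests an iPXE config file named after its own MAC address (the demo later notes the colons are replaced with hyphens). The MAC value here is just an example, not one of the lab's actual machines:

```shell
# Hypothetical MAC for one of the libvirt "bare metal" VMs
mac="52:54:00:a1:b2:c3"

# Naming convention from the demo: colons become hyphens, ".ipxe" appended
fname="$(echo "$mac" | tr ':' '-').ipxe"

echo "$fname"   # 52-54-00-a1-b2-c3.ipxe
```

On the serving side, an iPXE chain script can build the same name itself with a formatter like `${net0/mac:hexhyp}`, so one `boot.ipxe` can dispatch every host to its own file.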
It's a chained boot, so it first pulls just a boot.ipxe file — that's what's being served up by the DHCP server for it to pull from TFTP. That then chains it to look for a file that is named after its MAC address. It pulls that file, and you see it got its kernel and its initial RAM disk. The kernel parameters that were passed to it gave it its instructions for installing Fedora CoreOS, and you can see right now it's actually pulling that FCOS image across. Now, we've got an HAProxy load balancer — it's this guy right here, okd4-lb01 — that is already running and is configured to sit in front of this new cluster as it comes up. This will take a little bit with the scrolling logs; like I said, it's pulling down the image. One other thing I'll point out, while we're waiting for the bootstrap node to complete its install, is that we're also doing a mirrored install today, which hopefully makes this go a little bit faster than pulling all of the images across the wire. What I have is a local instance of Sonatype Nexus that I have mirrored all of the images into — if you can read this eye chart — and so the install is actually going to pull its images from the Sonatype Nexus. Right now I've got quay.io in a DNS sinkhole so that it can't resolve, and because it can't resolve, the installer is going to assume it's an air-gapped installation and will pull from the configured mirror. All right, Fedora CoreOS is booting now. It's going to overlay the rpm-ostree, and when it finishes it will boot one more time and start the bootstrap, which we will watch right here. So it just finished the ostree overlay, and now it's coming back up. When it completes booting, it begins the bootstrap. Time to fire up the master nodes.
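For a mirrored install like this, the local registry is typically declared in the `imageContentSources` section of `install-config.yaml`. A sketch of roughly what that looks like — the Nexus hostname and repository paths here are made-up placeholders, not taken from the demo (and a disconnected install also usually needs the mirror's CA in `additionalTrustBundle`):

```yaml
# install-config.yaml (fragment) — hypothetical mirror registry names
imageContentSources:
- mirrors:
  - nexus.example.lab:5000/okd          # local Sonatype Nexus mirror
  source: quay.io/openshift/okd
- mirrors:
  - nexus.example.lab:5000/okd-content
  source: quay.io/openshift/okd-content
```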
I'm just running a little script here that's going to do an ipmitool command against those three master nodes and start them up — and the fans on my little Intel NUCs just lit up hot. In the top right corner here I'm going to run the openshift-install command and direct it to monitor the bootstrap process. If you do this at home and you monitor the logs like this, don't be alarmed by these failed, failed, failed entries that you see coming out in the logs. This is the bootstrap process waiting for its resources to go live, so it will continue to loop until the various resources come up — and you can see the API just came up. So our API is now live, and we're waiting for the bootstrap process to complete. Down here in the bottom right-hand corner we're just tailing the journalctl logs of the bootstrap process itself. This, all in, takes about 10 minutes from the bootstrap node firing up to the bootstrap process itself completing. The installation itself will complete after about another 25 minutes. So we've got some time now to take some questions if folks want. James Cassell is asking from Twitch: is the sinkhole necessary to use the mirror? I think it still is. I know it has been for a while that if you don't create the sinkhole and it can resolve the external host, it will pull the images from quay.io, and that's why I created the sinkhole — to simulate a disconnected install where we're behind a bunch of firewalls and proxies that prevent my nodes from having direct internet access. A couple of questions, just to double-check the link to the documentation on this: is this the same as the stuff that you did in the okd4-upi-lab-setup? Yes, yes. There's a new branch called ipxe that, when we're done today — I've got a little more cleanup on the documentation to do — I'm going to merge into master.
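The sinkhole itself can be as simple as an override on the lab's DNS server. This is a generic dnsmasq sketch, not the demo's actual configuration (with BIND you'd do the equivalent with an empty authoritative zone for quay.io):

```
# /etc/dnsmasq.d/sinkhole.conf — hypothetical example
# Answer every lookup under quay.io with 0.0.0.0 so the nodes
# can't reach the real registry and use the configured mirror instead.
address=/quay.io/0.0.0.0
```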
The old tutorial, the CentOS 7 based one — I've branched master to a centos7 branch, so anybody that's still running CentOS 7 would want to use that branch. I've upgraded my entire lab to CentOS 8 and have enabled iPXE even for the hardware, for the bare metal itself. Just by creating an iPXE boot file with the MAC address of a new piece of metal, all I have to do is plug it into the network, click the power button, and it will provision itself with whatever personality I want it to have. I'm just checking the other feeds here — the other feeds are a nanosecond behind us in BlueJeans, so that's what I'm trying to do there. Brian Jacob Hepworth is saying that he really likes the Fedora CoreOS news and seeing that. So is this going to take us another 20 or 30 minutes here? Well, as soon as the bootstrap completes, then we'll be about 23 minutes out from completion. The bootstrap usually takes about 10 minutes in this environment. I'm going to do another pitch for people to join the OKD working group while we are waiting here, because that's what I'm charged with: getting more folks in. So if you're liking what you're seeing here, or if there are features missing, or other platforms that we should be demoing or testing on, or that you're using OKD on or wishing to, please join the OKD working group. The mailing list is here — I just put it in the chat, and it is an open Google group — and we have a lot of meetings. We meet biweekly, we have a meeting tomorrow, and I'll throw in the Fedora CoreOS one as well. Thanks for joining us, and we will do the Azure one that you requested earlier — that is our second-to-last demo today — and I'll put the Azure calendar link here. The bootstrap is getting close. OK, bootstrap has succeeded, and it's going to wait just a little bit longer to send the event, and then you'll see — OK, there it went. The bootstrap is now done.
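A per-MAC iPXE file of the kind described here generally has the shape below. The server IP, file paths, and install device are placeholders, and the exact `coreos.inst.*` kernel arguments vary between Fedora CoreOS releases, so treat this as a sketch rather than a copy-paste recipe:

```
#!ipxe
# Hypothetical 52-54-00-a1-b2-c3.ipxe — the file boot.ipxe chains to,
# named for this host's MAC address with hyphens replacing colons.
kernel http://10.11.11.10/fcos/vmlinuz initrd=initramfs.img coreos.inst=yes \
  coreos.inst.install_dev=/dev/sda \
  coreos.inst.image_url=http://10.11.11.10/fcos/fcos-metal.raw.xz \
  coreos.inst.ignition_url=http://10.11.11.10/okd4/worker.ign
initrd http://10.11.11.10/fcos/initramfs.img
boot
```

This is the "personality" idea from the demo: point a new machine's file at a different Ignition URL and it provisions itself into that role on first power-on.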
You can see in the middle terminal that we do have three master nodes that are live. I'm going to now remove the bootstrap node, and I'm going to take it out of the HAProxy configuration as well, so that we forget everything we know about the bootstrap node. Now we'll watch the install complete. All right, so we are working towards OKD 4.5.0. There is something odd about this install monitor here: it will say 42% complete, and here in a minute it may barf a couple of errors as some of the resources restart, and it will also reset the clock. So it plays with you a little bit — you'll get up to 74% complete, and then all of a sudden you'll see 12% complete, and then it will quickly wind its way back up. I'm making a bold assumption here that that is actually the result of it monitoring some of the resources that, through this process, update themselves, so that percentage complete becomes a little bit variable. So if you see that running this at home, don't be alarmed; it is actually working towards completion, and you need to be patient, because from this point it does take about another 23 minutes. Do you want to talk a little, while you're doing this, about the work you're doing around Che? Well, actually, it turned out not to be much work at all, and in fact, if we end up with enough time, I can deploy a hyperconverged Ceph instance into this cluster to give us a storage provisioner. Because that's really, I think, where folks might have struggled with getting Eclipse Che up and running: it does need persistent volumes. It deploys an instance of Postgres to support an instance of Keycloak that provides the identity management for your Eclipse Che environment, but the workspaces themselves also require persistent volumes.
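Taking the bootstrap node out of HAProxy usually just means deleting (or commenting out) its server lines from the API backends and reloading the service. A generic sketch with invented hostnames and addresses, not the demo's actual config:

```
# haproxy.cfg (fragment) — hypothetical lab addresses
backend okd4-api
    balance roundrobin
    option  tcp-check
    # server okd4-bootstrap 10.11.11.49:6443 check   # removed after bootstrap
    server okd4-master-0 10.11.11.50:6443 check
    server okd4-master-1 10.11.11.51:6443 check
    server okd4-master-2 10.11.11.52:6443 check
```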
You can probably make it work with ephemeral volumes, just understanding that if those pods ever got evicted you'd lose everything, which would be significantly detrimental to your Postgres instance. So it does require that you have some kind of persistent storage provisioner. I have done it in the past on older 3.11 clusters with iSCSI, but now, using the Rook operator to deploy Ceph, it's much, much easier. Something else I'll mention here — I'll run this again. So you see we've got three master nodes that are running, but they're also designated as worker nodes. That's an artifact of how we're provisioning here: the install config that we used does not designate any worker nodes, so the installer by default makes the masters schedulable. When the installation is complete, that's something that we're going to change — we'll add the three worker nodes and then we will make the masters unschedulable. Fernando is asking: is it possible to specify a different Ignition version during the .ign — I'm going to say that wrong again — the Ignition files' creation? I don't think so; I believe it's not possible. At this time you should always be using Ignition version 3.1.0 for everything. Quick correction: Ignition spec version 3.1.0. I was about to say — I'm pretty sure the Ignition versions don't match the spec versions at all. Yeah, it's Ignition v2.x with spec v3.x, and our current config spec version is 3.1.0. So for the Ignition config, always use spec version 3.1 at this time. We should probably just bump the Ignition versions, just to make this a lot less confusing. Yeah, because there's no particular reason not to, as far as I'm aware. Just going to introduce that new voice: that's Neal Gompa from Datto in the house.
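The spec-version detail matters for the FCC files that fcct transpiles: an FCC with `version: 1.1.0` produces Ignition spec 3.1.0 output. A minimal sketch of the kind of per-host customization being discussed — the URL, hostname, and paths are invented for illustration:

```yaml
# master-0.fcc — hypothetical Fedora CoreOS Config (FCC) fragment
variant: fcos
version: 1.1.0        # transpiles to Ignition spec 3.1.0
ignition:
  config:
    merge:
      - source: http://10.11.11.10/okd4/master.ign   # installer-generated config
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      overwrite: true
      contents:
        inline: okd4-master-0
```

Transpiled with something like `fcct --strict -o master-0.ign master-0.fcc` (check `fcct --help` for the exact flags in your version).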
I just sort of forgot that I hadn't actually been introduced, so I'll just — oh yeah, I can't... why is it saying the camera's being used by something? Whatever; anyway, the microphone works, and I'll figure out why the camera doesn't in a little bit. I'm a DevOps engineer at Datto. I'm here as an OKD working group member, and I'm going to be assisting Dusty in a little bit, once he and I get to our part of this OKD deployment fun, where I will just talk randomly while Dusty pushes buttons and stuff. But yeah. I'll walk you through a few of the things that were prepared ahead of time — I said a lot of words to describe it. Especially the way I'm doing this, with fixed IP addresses, one of the things that you have to provision is DNS records — a few key DNS records. You can see I've got in here the provisioning for several different clusters that I run, but this is the one that we're presently looking at, right here. Each of the master nodes, worker nodes, and etcd nodes requires an A record. The master and the etcd are obviously sharing the same node, so they're going to have A records with the same IP address. You also need three SRV records for etcd, and then you need a PTR record, for reverse lookup, for each of the physical nodes — so your masters and your worker nodes will need PTR records. As you can see, the DNS setup is not onerous, but it is necessary. Here, I'll show you: I'm using an OpenWRT router — it's actually a travel router — to provide my DHCP and iPXE capabilities. The boot.ipxe, as you can see, is very simple: I'm echoing some information just to make sure the right host booted, and then chaining in an iPXE file that is literally named after the MAC address, with hyphens replacing the colons. Right here — I believe this will be one of the worker nodes — and so this right here gives it the kernel parameters necessary to boot, tells it yes, we want to install CoreOS, tells it where to
install CoreOS, tells it where to get CoreOS, and tells it which Ignition file to use. And that's really the secret sauce there — not very secret. Yes, you just kind of told the whole world. I did, I know; I've already published it in my GitHub. So in theory... we're at 84 percent complete. I expected it to reset the clock at least once while it's doing this. So how do you determine these percentages? Because I don't see anything on screen that would tell you percentages. Oh, right here — can you see the... Oh, okay, there it is. Okay, it helps when you highlight it; there's a lot of word soup on screen. Yes, there is, and this is how I keep the install from being boring: give you lots of journalctl output and logs to look at, because otherwise there's not a lot to look at. So how did you come up with this setup? I mean, you're doing the bare metal, right? So how'd you come up with it? Because I remember that bare metal is, like, the least fleshed-out deployment method of them all, so the fact that you came up with something is impressive all on its own — that's worth the story, I'm sure. Yeah, you know, back at the end of 2017 I got addicted to the Intel NUC machines. Those little form-factor boxes are not cheap, comparatively, but considering the amount of compute that you can pack into one of them, for a home lab setup they are pretty affordable. And if you buy the right chipset, you can put 64 gigabytes of RAM in one of those little suckers. So you get one with a Core i7 — the newest ones, the 10th generation, have six cores, so you've got 12 vCPUs available — and 64 GB of RAM, and you can run quite a bit on them. And my idea was actually to get an OpenShift cluster running on the NUCs. Then I stumbled across this thing called nested virtualization with libvirt, and — well, I don't do OpenStack, but I had a curiosity about it, and that's how I came across Virtual BMC, and so
decided to basically bump it up a level and use libvirt virtual machines with Virtual BMC to simulate bare metal. And then it was just sort of: I want to make this work, so I powered through making it work, to get a bare metal install of OKD up and running. I submitted a few tickets to the Fedora CoreOS team, and they were very, very gracious to help out somebody that didn't know what they were doing — I had never touched CoreOS before — so that was quite a bit of a learning experience. And thanks for being part of the community. Yeah, Dusty and those guys were incredibly helpful. And so it's kind of evolved from that point; the latest iteration now uses the fcct tool to inject some customization into the machines. Actually, while we're still waiting for that — oh, hey, quick, here's the reset I was talking about. See how we went back to zero percent complete? Don't panic. I don't know why it resets the clock like this — maybe somebody in engineering could tell us — but it is still progressing, I assure you. That is very confusing and kind of frightening. Actually, it looks like it resets after it downloads an update, so it probably loses all of its state when it does that. Yeah, that's my suspicion, because it does go through several iterations of updating some operators. So it's probably losing its state every time that happens, which is unfortunate, and I'm not sure that makes sense, but it's the best I've got. It still works — that's the important part — so don't freak out when it goes from 80 to 90 to zero. Yeah. So right here — I don't know if this is readable, but you can get to it on my GitHub page. Would you zoom it up just a little bit? Just zoom it up one level — there we go, now it's readable. Yeah, this is a shell script that I wrote that actually does the provisioning of the quote-unquote bare metal for me, and
right here, this is a YAML file that gets created, where I'm injecting the customizations that I want each of the machines to have. So in this case, what I'm doing is creating, basically, a rename of the primary NIC to nic0, so that it doesn't come up as some funky enp-blah-blah-blah. I want it to be more than predictable — I want it to be predictable and known — so I'm using the MAC address of the machine to explicitly name that network interface device nic0, and that way I always know what it's going to be and where it's going to be. And then I inject into that its specific configuration: I'm setting its name server, its domain, its IP address with the netmask and gateway, and then I'm also injecting its hostname so that it persists its hostname. There's a bunch of other stuff that the script does, and one thing I am going to do is add better comments to it, so that if any of you are looking at how this thing is working, you'll understand what each of these sections is doing. All right, we're back up to 84 percent complete. At this point I'm going to go ahead and fire up the worker nodes — it is safe to do so now; I actually could have done it a while back, but I'm going to do it now. So I'm sending each of them an IPMI command, with a 10-second pause in between each one, just so they don't slam my poor little router with DHCP and file-pull requests at the same time. We'll go ahead and watch one of those guys do that. There's one of the workers; it's going to do the same thing that you guys saw the bootstrap node doing. It's pulling the CoreOS image right now, and then it's going to go through the same process, except that once it processes the initial Ignition, overlays the ostree, and starts its process to join the cluster, it's going to get its Ignition file from the
cluster, and that will give it the personality of a worker node. If you watch the left-hand side of the screen closely, you should see it hit a point where it's waiting, and then you'll see it very quickly pull that Ignition config, and at that point it will start to join the cluster. Oh, there it was, right there — the start job — and there it goes, it got its Ignition. So now it is booting up; it's going to ask to be a worker node. Just to give you a quick update on the AWS cluster: it's still waiting for the cluster API to come up. I do have to leave now for like 15 or 20 minutes; I'll be back after that, and I hope my cluster will be up by then. I'll see you in a little bit. All right, see you in a bit, Christian. Our cluster is up, and you see — awesome — it gave us our initial password. So let's go ahead and log in and prove to the world, hopefully, that this little guy is alive. As before, self-signed certs, so in whatever OS and browser you're using, you are going to have to accept those certs. It's okay, self-signed certs are fine. All right. Now, it creates a temporary cluster administrator for you, and it dumps that password at the end of the install process, which you can use to gain access to your cluster. And there we are. Now, there will still be some operator updating going on, and your control plane will still be settling out, but at this point we have a live cluster. If you will indulge me for a few minutes, we'll go ahead and finish adding the worker nodes, and then we'll do a couple of housekeeping things on our cluster. So you see we've got some pending certificate signing requests. That is also an artifact of the way we're doing this user-provisioned infrastructure install: it's not automatically going to approve those certs, because it doesn't necessarily trust anybody that wants to join the cluster. So I'm going to approve those certs, and there should be another batch of three — they're going to come up pending — yep. And so now we have three
worker nodes. They're not ready yet — they're still completing their own personal bootstrap, and that'll take another minute or two for them to come live. And I'm going to do a couple of housecleaning things here. One is I'm going to remove the samples operator, because — unless something has changed recently; unfortunately Christian isn't here, so we can ask him later — because you don't have an official Red Hat pull secret at this point, it won't be fully functional and can in fact impede updates to your cluster. So I yank it out; I'm not using it anyway, at least at this point. I'm also going to create ephemeral storage for the image registry, because it will also be in a Removed state, since it doesn't have a persistent volume — so I'm patching its configuration with an emptyDir specification in place of a persistent volume. And I'm going to create an image pruner to run at midnight every night, because the console will gripe at you if you don't have an image pruner configured, until you do. So anything older than 60 minutes it's going to prune at midnight every night — or 60 days, rather; 60 minutes would be aggressive. Yes. I mean, I don't know what kind of storage you have, but 60 minutes might be appropriate if you basically only have enough for the cluster itself to run. And there we are: we have — yay — cluster. Okay, now, huge caveat: our masters are still schedulable. Our workers are schedulable, but that's not bad. Well, it's not, but there is a gotcha in here, which of course I never tripped over: your ingress pods will deploy on a schedulable node. Well, if your load balancer is only configured to look at certain nodes — here you see I've got port 80, port 443, and port 6443 all directed to the master nodes — well, if those ingress pods got evicted and rescheduled themselves on a node that is not in your load balancer configuration, then you would lose access to your cluster. Important safety tip. So the key here is either to span your load
balancer — which I don't really want to do, because that's a lot of extra cruft in the load balancer configuration — or designate some infrastructure nodes, and that's the path that I chose to take. So what I'm going to do real quick is designate my master nodes to also be infrastructure nodes. Why doesn't it do that by default? Well, because the best practice is to create a couple of worker nodes that you set aside as infrastructure nodes. Why? I don't know. Good, okay, just making sure, because I've seen these recommendations listed in the documentation, but there doesn't seem to be any particular reasoning to back them up. Historically speaking, I've seen clusters typically do the masters as infra nodes, because that way they handle essentially the stuff that keeps the cluster itself running, and the worker nodes are free to work on developer and user workloads. Yeah, I think one of the things you need to consider is how beefy you make your master nodes. You know, if you've got heavy, heavy ingress operations, given everything else that the master nodes are doing, that might be a little overwhelming for them. In my particular lab environment, the master nodes are heavyweight enough — each of them has 30 GB of RAM and six vCPUs — so I feel pretty confident designating them as infra nodes. So what you do: once you run this label on them, then you need to patch the scheduler so that the master nodes are no longer schedulable. You'll see right now they are infra, master, and worker nodes; when I run this, now they're just infra and master nodes. Now, at this point nothing got evicted off of them, so if you want to boot things off of them that you don't want running on there anymore, you need to either go through and evict all the pods that are running on each of those nodes manually, or reboot your master nodes, which is a bit more of an aggressive way of doing it. Now I'm going to patch the ingress operator to
tell it that it's okay for it to run on those master nodes, and if you can read the eye chart here, I'll explain what it's doing. It's setting a node placement policy, giving it a match label for the infra node role. That's not enough, though — you also have to set some tolerations, because the master node is now tainted. So you need to give it a toleration that it's okay for it to run on a node that has a NoSchedule taint and a master taint. And so now that that is done, you will see the ingress pods: one of them is terminating, and there's a new one running that is not in a ready state yet. As soon as this one is in a running state, the second one will begin terminating. Don't panic that your other one sits in a pending state for a while: it has an anti-affinity rule that it won't run on a node that already has an ingress pod running on it, so it has to wait for one of those terminating pods to complete terminating before it will schedule on the master node. Wow. And so there you go: now we've got one running, one pending, and two terminating, and it will remain in that state until one of the terminating pods completes, and then the anti-affinity rule can be satisfied and the pending pod will also deploy. These take a while to terminate because they're shedding load — they're gracefully shutting down. Okay, there you go: one of them is done terminating, and we now have two running ingress pods; one of them is in a ready state, one of them is still bootstrapping. The last thing I'm going to do is get rid of that kubeadmin account, because its password is sitting there in plain text in your installation folder. Oh, so it does get written down somewhere? Yeah, I was gonna ask: do you have to make sure you save that output text, or will it actually be somewhere where you can get to it? Yeah, if you look at the directory that you used for the installation, there's
the Ignition files that it created and the metadata, and it creates an auth directory, and in that auth directory it creates an initial kubeconfig, which you can load to give you access to your cluster directly from your command line, and it dumps that plain-text password right there. But if you get rid of the kubeadmin user, doesn't everything that links to the kubeadmin user break? It's a temporary account. So here's what we're gonna do: I created an htpasswd file ahead of time — my tutorial has instructions for how to do that — so I've got an admin user and a dev user with passwords already in there. You saw me just create a secret right here: I created a secret in the openshift-config namespace, called htpasswd-secret, from that file. And now I'm going to apply a custom resource that I've already — here, let me — so this is the custom resource that we're going to apply. It's setting up an HTPasswd identity provider, and it's going to link it to that secret that we just created, the htpasswd-secret. So I will apply that. It complains that I used apply instead of create, but I'm just in the habit of using apply to update objects, so you can ignore that complaint. And then the last thing I need to do: this admin user that I just set up a secret for, but which doesn't exist yet — I'm going to give him cluster-admin rights. And now I'm going to be brave, and I'm going to delete — well, it also says the admin user doesn't exist. That's correct, but it creates it in the background. What? Yeah, it's not intuitive. No. Or obvious. But it does, and it works. Okay, and so there we go: I just logged in with my new, somewhat more secure, cluster admin account, and you can see our four green checkboxes — we've got a happy cluster. It will complain about alerts until you, like, set up a Slack channel or something to send your alerts to; it's actually pretty easy to do — you create a receiver and walk through it. But I have used up most of my
allotted time, so I'll stop playing now. I think the playing is fine. No, I'm going to give you that — that was the Easy Button. Yes. All right, well played. And can you do one more thing for me? Just because I think people keep asking me these questions: go back to the console and show the operators that are installed in your installation. Sure, I will do that. All right, so you go to Operators, OperatorHub. Hmm — there are no operators found? Those operators don't exist? They're installed, I think... you know, it may still be updating. Well, the OperatorHub operator might not actually be up yet. Yeah, because it does take a while — after, you know, that initial install took us another 23 minutes, it does take things a while to settle down. Let me show you what it does look like, because I have another cluster that I stood up this morning. It seems less healthy. Uh, yeah, I think I did something to upset it. But here are the operators that are available — quite a few. You can see, if you want CodeReady Workspaces, the upstream of it, Eclipse Che, is in here. Do you have enough time to try and install the Eclipse Che one? Especially if you don't mind going a couple minutes over — because the first thing I need to do is deploy... oh, actually, no, I can't, because I've already got — let me make sure I've got Ceph deployed in this cluster. So we're going to go to the rook-ceph namespace. Yes, yes we do. The fact that the rook-ceph namespace exists kind of indicates you have it set up. Well, it shouldn't exist if you don't have it. No, it can exist — and I haven't completed the install yet. Well, okay, there's that. So we'll go back to the OperatorHub and find the Eclipse Che operator. And yeah, it's a community operator: if I call Red Hat, they're not going to help me with it, but if I go on the Slack channel, they're usually nice enough. Okay, and unless you want to do something different about it, you install
and we're going to keep the stable channel. It is going to create the eclipse-che namespace, and we're going to let it have an Automatic approval strategy. If you switch that to Manual, then when the Operator wants to install, you have to go in and explicitly say yes, you can actually install. That seems painful. Well, if you think about it, I'm doing everything as a cluster administrator. So if you're not a cluster administrator but you want to request something, that's part of what we've got going on here, because there are all kinds of configurable RBAC capabilities within this thing. So when you install this Operator as a cluster admin, does that mean that anybody who logs in with an account can then instantiate it afterwards? Absolutely, yes — people will be able to get in and create workspaces. Again, it's got lots of role-based access control so that you can control who can do what, but yes, anybody you've created an account for in this cluster should be able to log into Che, create an account in Che — which will provision them into the Keycloak instance that it's going to create — and then they can create a workspace. So let me switch this real quick to Workloads. Okay, our Operator is running, it is alive, so we should be able to provision a Che cluster. And you see what I did: from the Operator, here are the Installed Operators and the provided APIs — that's what I clicked on to get to this view, where I can now create a CheCluster. It's going to name it eclipse-che unless I tell it to do something else. Lots of things you can configure in here; I'm going to take the defaults on everything except storage. And this is what I was mentioning earlier that I believe has probably hung some people up: Postgres is going to need a PVC, and then any workspace that you provision is also going to need a PVC, which almost requires that you have a dynamic storage provisioner
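For reference, what the OperatorHub wizard creates behind the scenes is an OLM Subscription. This is a sketch of what that object roughly looks like for this install — the catalog source, channel, and namespace names are taken from what the demo shows on screen, and the wizard also creates the namespace and an OperatorGroup for you:

```yaml
# Rough equivalent of the OperatorHub "Install" click for Eclipse Che.
# installPlanApproval: Automatic is the strategy chosen in the demo;
# switching it to Manual means every InstallPlan must be approved by hand.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: eclipse-che
  namespace: eclipse-che
spec:
  channel: stable
  name: eclipse-che
  source: community-operators        # community catalog, not Red Hat supported
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
```

Applying this with `oc apply -f` in a namespace that already has an OperatorGroup gets you the same result as the console wizard.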
for this to work. So I'm going to give it the name of the storage class. And actually, I'm going to cancel out of this, go down here to Storage, and show you that we do in fact have a storage class — it's a block provisioner that's part of Ceph. When we create our cluster, I'm going to tell it to use that for Postgres, and I'm going to tell it to use that for the workspaces. Also note: each workspace is going to get a gigabyte of provisioned storage. That may or may not be enough depending on the type of development you're doing — that's pretty minimal — so you might want to crank that up to five or ten gigabytes, depending on how big the artifacts built from the code base are going to be, and everything about the development environments you're going to be working with. So I'll hit Create on that, switch back to the pod view, and you can see it's provisioning Postgres. Hopefully our storage provisioner is working... and we do in fact have a postgres-data PVC that is bound, so our storage provisioner is working. Okay, Postgres is running but not ready, so it's still deploying itself. This takes a couple of minutes, and then Keycloak is going to provision itself after Postgres is done. So now Keycloak is provisioning, and Keycloak actually goes through a couple of phases. It has an init phase that it runs through, so you'll see that pod come up, then terminate and be replaced by another Keycloak pod that will be your final configuration. And you won't see the Che controller come up until both Postgres and Keycloak have completed their provisioning. And about how long does that take, Ian has to ask. Not terribly long — a couple of minutes. Cool. It feels like a long time when you're staring at the screen. It's all right, I have plenty of coffee today. And Michael has just pointed out: maybe you still have quay.io blocked via DNS. You know what, I don't — that was a good catch. I snuck that
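The storage choices being clicked through in the console correspond to fields on the CheCluster custom resource. A sketch of what that CR might look like here — the field names follow the v1 CheCluster CRD, but the storage class name `rook-ceph-block` is an assumption standing in for whatever the Ceph block storage class is actually called in this cluster:

```yaml
# Sketch of a CheCluster CR with both the Postgres PVC and the
# per-workspace PVCs pointed at the Ceph dynamic provisioner.
apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: eclipse-che
  namespace: eclipse-che
spec:
  storage:
    pvcStrategy: per-workspace
    pvcClaimSize: 5Gi                          # default 1Gi is pretty minimal
    postgresPVCStorageClassName: rook-ceph-block   # assumed class name
    workspacePVCStorageClassName: rook-ceph-block  # assumed class name
```

Without a dynamic provisioner backing those storage classes, the Postgres PVC never binds and the install hangs — which is the failure mode mentioned above.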
in while Neil was talking. Right here, I blasted a command to my DNS server to remove the sinkholes for quay.io and for registry.svc.ci.openshift.org. I did actually notice — which is why I didn't repeat the question he was asking — because I figured on screen it was obvious that you got rid of your quay.io block. No, I slipped that in and didn't mention it. Well, now... all right. So Keycloak is bootstrapping itself now, so you'll see some activity go there. All right, and there it is. So now you see another Keycloak instance provisioning, and it will take over from the first one here in a minute. As we all wait with bated breath... in other news, Christian says that his full-blown AWS cluster has finished installation, so when we're done we'll pop over and let him prove that, and then we'll grab Dusty when he's back and we'll hit up the DigitalOcean stuff. Okay, Keycloak is running. Any of you who are joining us for the DigitalOcean demo, we'll probably get started on that one a few minutes after the hour. We're running pretty close to on time, which I think is amazing. Indeed. And we'll probably lose that thread at some point, but hey, a quick plug for my favorite Java framework, Quarkus. There we go, there's the Quarkus ad, thank you. And what does that have to do with this? Well, once your cluster is up and running, you've got to run something in it, right? Oh, so you're going to make something with Quarkus? Okay, show those mad programming skills. Yes, indeed. So the first Keycloak instance — you see it terminating now, so it's getting itself out of the way. The plugin registry is fired up now, you see other activity. There's our Che controller right here that is creating. We've got a devfile registry, we've got a plugin registry, and as soon as this guy becomes ready... I wish you could hear the fans on my little NUCs. I wish I had a fan here — the temperature is popping up here in Canada on the west coast. Are we going to
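The actual DNS command wasn't shown on screen, and the type of DNS server wasn't named. A minimal sketch of what "removing the sinkholes" could look like, assuming a dnsmasq-style config where a sinkhole is an `address=/domain/0.0.0.0` line:

```shell
# Hypothetical sinkholed dnsmasq config for the air-gapped lab setup.
SINKHOLED='address=/quay.io/0.0.0.0
address=/registry.svc.ci.openshift.org/0.0.0.0
server=8.8.8.8'

# Drop the sinkhole line for a given domain from stdin.
# Uses \|...| as the sed address delimiter so the slashes in the
# address= syntax don't need escaping.
unsinkhole() {
    sed "\|^address=/$1/|d"
}

CLEANED=$(printf '%s\n' "$SINKHOLED" \
    | unsinkhole quay.io \
    | unsinkhole registry.svc.ci.openshift.org)
# On a real server you would edit the config file in place and then
# restart the service, e.g.:  systemctl restart dnsmasq
```

Once the sinkholes are gone, image pulls from quay.io resolve normally again, which is what the operator install needs.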
hit 32 today? So, all right — all of the resources are up, they are all in a ready state, and we've had no restarts, which is always a good sign, although occasionally a restart is not necessarily a bad thing. If we click over here to Routes, we have a route for Che, and if I'm brave and open that... okay, now, self-signed cert again. So what you have to do at this point is grab that cert. I'm going to open a folder here for you guys so you don't have to see all the cruft on my screen. I'm going to go here and show the certificate — this is Safari-specific, obviously, so follow the instructions for your favorite browser; Safari is not my favorite, but here it is. Grab that, and then once you've got that certificate, you need to add it to the trust store of your operating system. So in my case I'm going to go into Keychain, drop that certificate into Keychain, and make it trusted. I'm going to do that for you guys here real quick. I'm going to drop it into my certs — System, default search — you see there's an old one from a previous install. I'm going to pick the one we just downloaded and replace it. Now I'm going to open this up and say Always Trust. Now it's going to make me certify that I am me one more time. Ta-da. And I'm going to say yes, allow these permissions. And now it's going to ask you to create an account. Now, another important safety tip if you do what I did: there is an admin account that Che creates — well, I named my cluster administrator admin, so I need to give this a different name or I will cause some pain for myself. And there we go — Che is up and running, ready for your code. That is awesome sauce. Thank you very much for that; that makes my day. This is awesome. Yay, thank you. I think you've just made the entire Eclipse Che community happy too, so well done.
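If you'd rather skip the browser clicking, the same cert-trusting dance can be done from a terminal. A sketch — the route hostname is hypothetical, and the `security` command is the macOS equivalent of the Keychain steps shown in the demo:

```shell
# Extract the PEM certificate block from `openssl s_client` output.
# Against a live cluster (hostname is a placeholder) you would run:
#   openssl s_client -connect che-eclipse-che.apps.example.com:443 \
#       </dev/null 2>/dev/null | extract_cert > che.crt
extract_cert() {
    sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p'
}

# On macOS, trusting the saved cert system-wide is then one command
# instead of the drag-into-Keychain / Always Trust clicking:
#   sudo security add-trusted-cert -d -r trustRoot \
#       -k /Library/Keychains/System.keychain che.crt
```

The `sed` range simply keeps everything between the BEGIN and END CERTIFICATE markers, which is what the browser's "show certificate" export gives you.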