seven times soon. We meet on Tuesdays at 9 a.m. Pacific, noon Eastern, and it's a pretty active group of folks, so we'd love you to come. I think I recognize a number of your names from the working group already, so hopefully you're part of the conversation that's been going on about building and learning how to build these things. So again, thanks for coming and joining us. Here's Daryl back, as I ad-lib wildly for a few minutes.

Can you hear me again?

I can hear you fine, yes.

Excellent. Okay, I'm going to take the BlueJeans window and move it out of the way so that I can share my screen. I did set up a temporary screen for sharing that is of reasonable size, so you don't have to squint at my giant monitor.

All right. I was hoping Jaime Magiera (Magiera, not Maguire) was going to join us as well, but he might have gotten caught up in something else this morning at the University of Michigan. He's our other co-chair for the OKD working group, and hopefully he'll be joining us shortly. Chris, if you're broadcasting this on Twitch, feel free to go live whenever you're ready. Okay, I'll throw your first slide up, or wherever you're at, and we'll get rocking and rolling.

Okay, you should see the front end of my car.

Okay, now pause there for a minute and let's see if I can get a chat out of Chris Short for when you're ready to really start going on the Twitch stream. All right, we're streaming now. Cool. Are we streaming on Twitch or on BlueJeans? And here comes a whole bunch of folks from the working group joining now, which is awesome, because this is one of those mysterious processes that everybody should know how to do, so that whenever one of us gets hit by a bus we can keep going. Let's see. Twitch and YouTube, we're live.

So everybody, thank you for joining us today. We are going to follow along with Daryl Gruver and learn about the OKD build process for CodeReady Containers. He has just launched a new blog post, so I'll let him talk a little bit about that and introduce himself more deeply. I can see now that Jaime has joined us as well. Welcome, Jaime. We're going to rock and roll and hopefully turn you all into CodeReady Containers builders by the end of the day. All right, take it away.

Excellent. Well, thank you, everybody, for joining. I think this is going to be a very fun and exciting episode of our OpenShift Commons.
I'm Daryl Gruver. I'm currently an architect with Red Hat Services, and I've been with the company for about a year now. Prior to that, I was a Red Hat customer through most of the twenty years of my preceding technology career, and one of the things that drew me in this direction was the open source nature of everything that Red Hat does. That's what I'm going to show you today, in the code we're going to be dealing with and in the projects: how we take CodeReady Containers, Red Hat's supported distribution, and leverage it for OKD. We can do that directly because all of the code is out there. It's on GitHub, it is purely open, and there's no additional secret sauce that gets added to it. Red Hat is very passionate about open source and has been pretty much since its inception.

With that said, I'm going to dive in, because our focus today is building CodeReady Containers for OKD, so that the OKD community can benefit from the same capabilities available to our subscription-based customers who are using Red Hat OpenShift on the various hybrid cloud platforms out there and leveraging CodeReady Containers on their developer workstations. I've written a blog post, which is what you see on the screen right now, covering exactly what I'm going to talk about today. We'll post the link in the chat; actually, Diane's got the link, so she can drop it in there. Right before this I created a Twitter handle for the blog, which I started a few weeks ago at Diane's recommendation, so you can also find me there. The blogs I write are predominantly OpenShift and Kubernetes focused, and they are very homelab focused, so if you like to tinker and build your own things, that's where I throw out all of the interesting things I do at home.

Today, like I said, we're going to talk about CodeReady Containers. It's the successor to Minishift. Many of you from the OpenShift 3.x days probably remember Minishift, which was itself a packaging of Minikube. Its sole purpose is to enable you to develop applications for OpenShift, or to learn OpenShift, on your local workstation. It does require a fairly beefy environment to work: your workstation really needs at least 16 gigabytes of RAM for CodeReady Containers to be usable alongside everything else you're going to be running. If you've got 50 browser tabs open, plus your IDE, VS Code or IntelliJ or whatever your religious preference is, the addition of CodeReady Containers is going to add a fairly beefy workload, so make sure you've got a well-apportioned workstation before you try to use it.

And for building it, which is our focus today, you're going to need a Fedora-, RHEL-, or CentOS-based operating system on a mini server to run this build on, because the process is very opinionated toward a libvirt install. In fact, it's effectively an installer-provisioned infrastructure installation (what we call an IPI install) that builds the cluster that becomes the heart of CodeReady Containers. In the blog post I've got several links to documentation on a lot of the underlying pieces, in case you're new to the OpenShift ecosystem. I suspect most of you on this call are not new to this ecosystem, though, and you're wishing I would shut up and get straight into how to build this thing.
So that's what I'm going to do right now. CodeReady Containers: what it actually is, at its heart, is a single node OpenShift cluster, built with an opinionated installation process that strips out as much of the weight of a full OpenShift environment as it can, so that it can run, and be usable, on the hardware and memory capacity you typically find in a good-sized developer workstation. When that single node cluster is built, stripped down and opinionated to run as small as possible, it is shut down, and a QCOW2 image is created of the virtual machine the cluster was running on. Then the CodeReady Containers binary, which is written in Go, is compiled and built, and that QCOW2 image is embedded inside of it. If you run CodeReady Containers on a Windows workstation, it's going to boot that QCOW2 image as a virtual machine using Hyper-V. If you're running on a MacBook, like I am right now, it's going to use hyperkit to leverage that QCOW2 image and bring up your CRC instance. If you're running on Fedora or another Linux OS, it's going to use the underlying libvirt and KVM. So the whole heart of this thing is that single node cluster that gets turned into a QCOW2 image.

It's a three-step build process, which I've documented here, so I'm going to walk you through it real quick, and then I'm going to pivot over to showing you a build that I just finished, plus a running instance that is halfway through the build process, so you can see what that single node cluster looks like while it's running, before the QCOW step.

Like I said, it's a three-step process that requires some initial setup, and I detail that initial setup here. I'm using CentOS 8 Stream, so if you follow the instructions I've got here on CentOS 8 Stream, there should be no modification required whatsoever. If you're on an upstream Fedora, there may be some nuances that have to change if the upstream operating system has swapped out some of these features. I haven't run this on Fedora, so I don't know; but if you do, reach out on the GitHub page where I've got this and let me know what your experience is. You're going to need a libvirt environment, you're going to need Golang, you're going to need g++ and the GCC compiler installed, plus all of the typical tools that go with that. I've detailed here what else you need to add on top of a minimal CentOS 8 Stream install, and as you can see, it's a relatively short list of packages. Get these packages installed and you're ready to build CodeReady Containers.

The next thing you need to do is create a firewall zone, because we're going to expose libvirt to listen on a TCP port. Especially if you're going to do this on an EC2 instance or an instance running on Google Cloud, it's probably not a good idea to expose that port outside of your machine. So create firewall rules so that when we expose this libvirt port, you're not inviting other people to come play in your virtualization environment. Important safety tip. The other thing on CentOS 8 (and this is either a change between 7 and 8, or a change that happened to me on an update of CentOS 8) is that it's no longer sufficient to enable the TCP port and then just restart libvirtd. There's, I'm going to call it a sidecar, my head is full of Kubernetes right now, effectively another systemd unit that you need to enable and start along with libvirtd, which enables the socket listener. You need to do this so that the socket listener actually picks up the configuration we created and is listening on port 16509.
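(For reference, the setup above boils down to something like the following. The package list is representative rather than exact, the blog post has the authoritative list, and the zone name is just illustrative:)

    # Packages on top of a minimal CentOS 8 Stream install (representative list)
    sudo dnf install -y libvirt libvirt-devel virt-install qemu-kvm \
        golang gcc gcc-c++ make git

    # Firewall zone so the libvirt TCP port is only reachable from the libvirt
    # subnet, not the outside world (zone name is illustrative)
    sudo firewall-cmd --permanent --new-zone=libvirt-tcp
    sudo firewall-cmd --permanent --zone=libvirt-tcp --add-source=192.168.122.0/24
    sudo firewall-cmd --permanent --zone=libvirt-tcp --add-port=16509/tcp
    sudo firewall-cmd --reload

    # The "sidecar" unit: on CentOS 8 the TCP listener is socket-activated,
    # so enable the socket unit alongside libvirtd. Depending on your setup,
    # you may also need auth_tcp = "none" in /etc/libvirt/libvirtd.conf.
    sudo systemctl enable --now libvirtd-tcp.socket
    sudo systemctl restart libvirtd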
The next thing I've done here is take a whole bunch of boilerplate that you need in order to tell the project that you're building this single node cluster for OKD. There's a bunch of environment variables that you need to set, and I've included instructions to just drop those into a shell script, so that you can create the environment by running one command before you execute the build.

One of the things I'm going to point out as we walk through this is all of the opportunities for automation. That's one of the things we're missing: as the OKD community, we don't have a CI environment that we can run this on. I'm sure there are environments we could leverage. A lot of the Red Hat upstream projects have CI environments, so it's not a matter of getting the environment; it's more a matter of getting people to work on the CI so that we can leverage one. There are Fedora resources, there are resources associated with upstream OpenShift, and maybe even the engineering team behind CodeReady Containers would be willing to help us out. But until we get some volunteers who say "hey, I want to be part of this," we really haven't started any of those conversations. Another internal Red Hatter, who I won't call out by name because I don't want to embarrass him, recently talked with me, and he is all in on learning to do this, so I know we've got a couple of volunteers out there already. But we'd really like to get some community folks too, because this doesn't need to be just an internal Red Hat effort; we want this to be community. Okay, end of public service announcement, back to the build.

Most of these variables are just setting up the fact that we're going to build with an OKD image. This one here, the Terraform variable that sets the bootstrap memory, is an important one, and important for anybody who's going to be messing with single node clusters on libvirt outside of CodeReady Containers. This was a battle I fought for a while, banging my head against the fact that with a recent Fedora CoreOS release, the default memory size the bootstrap node was coming up with, and the temp filesystem it created out of a subset of that RAM, wasn't big enough to hold the OSTree. So your bootstrap would crash into the filesystem capacity of that ramfs and blow up before your bootstrap could get started. Setting this variable overrides the default behavior of the bootstrap node to whatever RAM size you want. The other couple of important things here are telling libguestfs to use the direct backend, and setting the OpenShift version we're going to build.

And then there's a cute little useful bash command that folks may want to harvest for any automation they're doing where they want to work with the latest release of OKD as dropped by the OpenShift community (right now that's Vadim). When Vadim releases a new build, if you execute this curl command with its little pipe-to-cut, pipe-to-cut, you will get the string of the current release, which you can then use to go mirror the images, pull down the oc and openshift-install commands, whatever you need to do that leverages that version string.
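(A sketch of what that environment shell script might look like. The variable names flagged as assumptions below are illustrative stand-ins, not the authoritative list from the blog post, and the GitHub API query is just one way to fetch the release string:)

    # okd-env.sh: source this before running the build
    export LIBGUESTFS_BACKEND=direct             # from the talk: use the direct backend
    export TF_VAR_libvirt_bootstrap_memory=6144  # assumption: exact Terraform variable
                                                 # name may differ; overrides the
                                                 # too-small bootstrap RAM default (MiB)
    export OKD_BUILD=true                        # assumption: illustrative stand-in for
                                                 # the real "build OKD" switch

    # Latest OKD release string, as published on the community GitHub releases page
    OPENSHIFT_VERSION=$(curl -s https://api.github.com/repos/openshift/okd/releases/latest \
        | grep '"tag_name"' | cut -d '"' -f 4)
    export OPENSHIFT_VERSION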
Then the last thing is cloning the repos. These two repositories are currently hanging off of my personal GitHub account. I've got a couple of pull requests prepped that I haven't submitted yet, because the CodeReady engineering team has already moved on to OpenShift 4.8 and is preparing for OpenShift 4.9 development. The current crc build off the official GitHub page does not work with OKD; there are a couple of changes that have been injected that need to be fixed so that we can still build OKD 4.7 and, when it drops, OKD 4.8.

One other thing I'll say, and I'm going to throw this out soon: this same process I'm showing you, while it's opinionated toward an OKD release, can with just a couple of tweaks also build nightly releases. So if you were into running some automated testing, or you just wanted to try out some features of, say, OKD 4.9 before it drops, or an upstream 4.8 release that might have a fix you're looking for, you could use the same thing to do that. In fact, you don't even have to go the whole last mile of building the crc binary to get a usable single node cluster, and I'll show you that in just a minute.

The official GitHub page is here, github.com/code-ready. If you go there, they have the two primary repositories that I'm going to talk about today pinned to the top: snc and crc. I have those forked over to my personal GitHub with the changes that are currently needed to build CodeReady Containers for OKD 4.7. This here is my current fork, and like I said, we're a couple of pull requests away from getting that back into the mainstream.

So, the way this works, back to where I started: it's a three-step process. The first step is creating the single node cluster, and that step, along with the second step of creating the QCOW image, lives in the snc project, which stands for Single Node Cluster. Go to the snc project and you'll see it is the single node cluster scripts for OpenShift 4. This one is purely bash: shell scripts that do the opinionated installation of the single node cluster, and then a second set of scripts that take that running single node cluster, do the manipulations to it so that it can be bundled up, then tear it down and create the QCOW image out of it. And that's what I show you here: you run the helper script I created, which sets your environment for you; you cd into the directory structure; you check out the latest okd-4.7 build branch; you pull down any changes; and then you execute. When you run snc.sh, it's time to go grill a cheeseburger, make a salad, pour yourself a beer, maybe a couple of beers, because this will take a while. It's going to go through a complete cluster installation, at the end of which (oh, I just went to the wrong window again), at the end of which you will end up with a running cluster.

Now I'm going to ask real quick: is the screen readable, or do I need to crank up the font on my bash shell?

I'm always going to say crank up the font.

Font is cranked. There you go, how's that? Jaime, is that working for you?

Good.

All right. So you'll see now, this is the end result of having just run that script. Let me scroll this window down real quick.
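(Condensed, the run I just showed amounts to roughly this, with okd-env.sh being the helper script described above and okd-4.7 the build branch current at the time of this talk:)

    source ~/okd-env.sh      # set up the build environment
    cd ~/snc                 # your clone of the forked snc repo
    git fetch
    git checkout okd-4.7
    git pull
    ./snc.sh                 # go grill that cheeseburger; this runs a full install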
We'll do a quick wayback-machine check and scroll this up a bit. Here we go. Yeah, this is what you see. If we keep going up, scrolling and scrolling (because it's fun to have videos with lots of scrolling), upward, upward, almost to the top. Okay, good; my mouse was just about to run off the mouse pad.

All right, you see here: I ran my setup script, which created my environment; I fetched, checked out, and pulled; and I ran snc.sh. What that does is start the process of building this single node cluster. The first thing it does is interact, via that port 16509, with my underlying libvirt, and it creates the virtual machine. Then it tells me that it's using the release from August 22nd, which is the latest we currently have out there, and it pulls down the client and the installer, so it's got the oc command and the openshift-install command. It does some configuration, then it kicks off the Terraform and the bootstrap, and boom, we're off. So here you go: this is effectively, like I said early on, an installer-provisioned infrastructure install of OpenShift. I'm not going to read through all of this, because I know you guys don't want me to, but you'll see it sits there in a loop, probing, waiting for the API to be up, and then it says "aha, the API is up," and at that point it starts doing some things to the cluster. Here it's setting one of those nice, fun unsupported config overrides so that it can run as a single node cluster. Then the bootstrap continues to run until it's complete, the script tears down the bootstrap resources, and then it sits and waits for the cluster to initialize.

I'll pause here again for a minute, because there's another thing that we, the OKD community, could do with this. I'm envisioning something we can set up with some CI that would allow us to run opinionated, automated tests against builds. One of the pain points that we've had, and I think we would all agree as an OKD community, is that the tests against the nightlies aren't as thorough as we would like, for really uncovering the places where, say, a new Fedora CoreOS release might have broken something; we don't get to it until we actually try running a full install. I have a hypothesis that if we built some tests around this CodeReady Containers build, we could use it to test nightly OKD builds if we wanted to, or at least test our releases before we drop them, so that we can validate that a full running cluster can be created, that the bootstrap process completes properly, and that things look the way we expect when the bootstrap completes. Then, as the installer runs (installing, installing, installing), we can probe along the way to ensure things look the way we expect, and when the cluster is initialized, we can run another bank of tests to validate that it is in fact a running OpenShift cluster.

Okay, now back to CodeReady Containers. At this point in the process it is completing the install and starting to do the opinionated activities I was talking about. You can see here that it deletes a lot of machine configs, then does some modifications to those machine configs and injects them back into the cluster, preparing it for becoming that QCOW image we were talking about.
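(To make that CI idea concrete, the probing could start as simply as a loop like this. The kubeconfig path is an assumption, based on where an IPI-style install usually drops its auth directory:)

    export KUBECONFIG=./crc-tmp-install-data/auth/kubeconfig

    # Wait for the API to answer, much like snc.sh's own probe loop
    until oc get --raw /healthz >/dev/null 2>&1; do sleep 15; done

    # Validate the single node goes Ready and the operators settle
    oc wait node --all --for=condition=Ready --timeout=30m
    oc get clusteroperators
    oc get pods --all-namespaces | grep -vE 'Running|Completed' || true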
And here is where it's doing the rsync of all of those configs, after it's done some patching to them. The last thing it does is clean up all of the completed pods, because even though they're using ephemeral storage, their logs and things are actually occupying space on that virtual machine. By cleaning up all of those completed pods, and cleaning up other things that are ephemeral within the cluster, we're preparing the virtual machine to be compressed down to as small an image as possible.

Okay, so at this point, like I showed you, the single node cluster is up and running: all the pods running, everybody healthy, everybody happy, no crash-looping. You can see the node is in a Ready state, and its role is both master and worker.

From here, once we have the single node cluster up, the next step is to create that QCOW image, and it also is just a single script command, createdisk.sh. You pass it the directory in the snc project where it put all of the information about that single node cluster, which is also where the QCOW image is going to end up. This will run for a very long time, because after it creates the initial QEMU QCOW2 image, it then creates a bundle with that image that is hypervisor specific, so it has to do this three times: once for libvirt, once for hyperkit, and once for Hyper-V, and then it compresses each of those. This might be an area where some efficiency could be added, because it does it serially; it doesn't kick these off in parallel. And it does take a long time, because it's creating a significantly sized image and then zipping that significantly sized image.

Let's look at one here in the snc directory; I've got the previous build that I did, with the releases that I haven't pushed out yet. You can see that the bundles that result are on the order of three gigabytes each, and look at what they're built from: this hyperkit one here, that's ten gigabytes' worth of stuff that it took and turned into a three-gigabyte bundle image, and it's doing that three times. I'm showing you this just to say: don't think something's broken; it's going to sit there for a long time. The createdisk step takes a while, so you can go out for dinner while you're waiting for it, unless you have access to some really significant hardware and some really fast disk, in which case it might not take quite so long. But on my poor little NUC8 i3 that it runs on, it takes a while.

All right, the next step, the third thing, is building the crc executable, and that actually doesn't take too long. Once createdisk.sh has completed, and we're back from dinner and we've had our little dram of Drambuie as an aperitif to get relaxed for the evening, we're ready to build our CodeReady Containers image. This is where we use the other project, crc, which is the Go code for creating the CodeReady Containers binary. We go in there, make clean, and then make embed_bundle. What make embed_bundle does is a cross-compile of the crc binary for Windows, macOS, and Linux; then it bundles into each cross-compiled binary the CRC bundle I showed you previously for the respective architecture. So crc.exe gets the bundle for Hyper-V, crc for the Mac gets the hyperkit bundle, and so on and so forth.
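(Steps two and three, condensed. The createdisk.sh argument is the scratch directory snc writes into, and the make target name is the one from the 4.7-era Makefile; check your checkout in case it has moved:)

    # Step two: turn the shut-down single node cluster into hypervisor bundles
    ./createdisk.sh crc-tmp-install-data   # long-running: three bundles, compressed serially

    # Step three: build the crc binary with the bundle embedded
    cd ../crc        # your clone of the forked crc repo
    make clean
    make embed_bundle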
At that point you're done, and in the out directory of your crc folder you'll have an architecture-specific path with the bundled binary for each architecture. As you can see, I've got a linux-amd64, a macos-amd64, and a windows-amd64, each with the respective executable. This is the point where, now that I'm done and I've got the executables built, I will test it on my Mac and on one of my Linux servers. I don't have a Windows operating system anywhere in the house, not because I'm as bigoted as I used to be toward Windows or Microsoft; it's just that I don't have one, so I actually can't test the Windows version. At this point I push the binaries up to the Fedora server that the Fedora community was kind enough to give us some space on, and then when you go to okd.io and download CodeReady Containers, it is there and ready for you to use. Hopefully it hasn't been 30 days since I was able to build a release and push it up there; otherwise you go download one, the certs have expired, we get lots of issues logged that CRC for OKD is broken again, and I go "oh crap, it's been 30 days and I haven't built a new one." And that's why we're here talking to all of you. So I'm going to say thank you, and we'll see if there are questions or commentary.

There have been a couple of questions; let me go over them for folks who are just watching and following along. One of the questions was: do you need a pull secret from redhat.com as part of this setup? If not, what is used in its place, or what mods are needed? So basically, there's an example in Daryl's documentation, and I put something in the channel; there's also something in the OKD documentation. You can use a fake pull secret, just a JSON string, basically. But it's worth pointing out that getting a Red Hat pull secret isn't hard, and it doesn't cost you anything. If you log into console.redhat.com, navigate to the OpenShift part, and click on any of the installers, there's a little button there to create a pull secret, and we can put the hyperlink in with the meeting notes. You don't have to pay anything to get that. If you use a Red Hat pull secret, you do get access to more operators in the OperatorHub within the cluster, so there is some advantage to that.

Yeah, and if you go to the official CodeReady release, you do need to sign up for a Red Hat developer account, but those are also free, and with that you can get your official pull secret and use the non-OKD CodeReady, if you're so inclined.
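(For reference, the fake pull secret mentioned above is just a throwaway JSON blob. This is the shape the OKD docs suggest, with a base64 dummy credential; the exact string in the docs may differ:)

    echo '{"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}' > fake-pull-secret.json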
Another question we had: has anyone tried, or are there instructions for, deploying OKD using docker-compose instead of virtual machines? I've not seen anything. Daryl, have you seen anything along those lines?

No, I haven't, and I'll stop sharing so I can see you guys and you can see me. No, I haven't seen anything around that. I'm going to speculate: I know the reason for me is that it's so darn easy to do it with libvirt that there wasn't a reason to look for another way. And I'll even say this: it even works with nested libvirt, because I used to run these builds on a virtual machine running on one of my boxes that had a lot more CPU and RAM. I would provision a nested-virtualization VM, and it would run all of the steps that I just showed you on that virtual machine. So yeah, that's a long answer to say it works.

Hey, Jaime, Brecht Tofels is having a tough time with the Q&A widget, but he's asking: he was a little confused around the last part of what Daryl was talking about, the tie-in to the 30-day timeout of the cert. Can you explain what you were talking about there a little bit?

Yeah, and that's something I was actually thinking about over the weekend; there's a way we could fix that in the crc executable, because somebody posted their very clever workaround for it in one of the issues. What it is, is this: the cluster, when it first comes up, has a cert that's only good for about 24 hours, and the snc logic (I realize I'm staring down at the floor, because that's where my screen is, instead of looking at the camera) has logic in it that deletes that temporary 24-hour cert and then waits for the certificate signing requests, the CSRs, to show up on the node so that it can approve them. At that point you have a 30-day cert. Well, after 30 days that cert isn't any good anymore, and if your CRC instance has been shut down, there's been nothing to create that CSR or react to it, and so CodeReady Containers stops working after 30 days. There may be some other things tied into that, but I believe that's the essence of it. The workaround is: during the bootstrapping of your CRC instance, you run crc start, it's coming up, and then it won't come up because the cert is expired. At that point you can actually export the kubeconfig, oc into the cluster, and approve the CSRs, and if you approve them, crc should continue. I see no reason why we couldn't put that logic into the Go code of crc itself. As crc is starting up, it creates SSH keys, it injects the SSH keys into the cluster, it does a whole lot of things; I don't see any reason why one of the things it does couldn't be to check whether the cert is expired, or even about to expire, kill the cert, wait for the CSR requests, and approve them.

That makes sense. You're muted, Jaime. There we go.
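(That workaround, sketched out. The kubeconfig path is an assumption; adjust it to wherever your crc or snc install actually put it:)

    export KUBECONFIG=~/.crc/machines/crc/kubeconfig   # hypothetical location

    oc get csr                                         # look for Pending requests
    oc get csr -o name | xargs oc adm certificate approve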
Excellent. We've got another question here: if snc.sh fails to run, can I just run it again, or do I need to do a clean operation?

No, you can't just run it again; it is unfortunately not idempotent. It leaves a giant mess behind if it fails. Let me share again, though, because I have a fix for that too; that happens to me a lot, especially when the code changes and it stops working. Do you see my screen again?

Yep.

Okay, good. At the bottom of my blog post you see this post-build cleanup section. This also works if snc.sh fails, and the same goes for createdisk.sh: if createdisk.sh fails, you need to do these steps. Actually, since you asked that question, I'm going to modify the blog post and put that in explicitly, because it is something you will run into. Even with the good releases that successfully build, every once in a while you'll crash into a race condition. I've hit this a few times: when you tear down the bootstrap node, there is occasionally some sort of race condition where something gets messed up as the bootstrap gets ripped out, and the install can't complete, so it will either crash out or run through its full time and then time out after 40 minutes.

So here's what you do. I've got the shell commands here to drop into a shell script; in fact, I have them in a shell script that I just run, and it's basically a cleanup. What it does is find the virtual machines whose names start with crc-, plus the machine network and the storage pool, and it does a destroy and undefine on all of those, for the bootstrap and the master. That cleans up the libvirt resources. The next thing is to wipe the images that are sitting out there, which, in its opinionated way, it puts in openshift-images under /var/lib/libvirt. And the last thing is removing the crc artifacts from the snc directory that were created during the snc build or the createdisk. If you do this, then it's clean, and now you can run snc.sh again.
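(Here's roughly what that cleanup script does. Run it as a user with access to the qemu:///system connection; the image path is the opinionated one mentioned above, and the scratch directory name is the snc default:)

    #!/bin/bash
    # Destroy and undefine every libvirt domain, network, and pool named crc*
    for vm in $(virsh list --all --name | grep '^crc'); do
        virsh destroy "$vm" 2>/dev/null
        virsh undefine "$vm"
    done
    for net in $(virsh net-list --all --name | grep '^crc'); do
        virsh net-destroy "$net" 2>/dev/null
        virsh net-undefine "$net"
    done
    for pool in $(virsh pool-list --all --name | grep '^crc'); do
        virsh pool-destroy "$pool" 2>/dev/null
        virsh pool-undefine "$pool"
    done
    # Wipe the leftover images and the snc scratch directory
    sudo rm -rf /var/lib/libvirt/openshift-images/*
    rm -rf ~/snc/crc-tmp-install-data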
Excellent, and we've got a couple more questions filtering in. Will this work on Fedora 34? George wants to know, and I'm assuming that means the build process.

I'm going to hedge: actually, I'll confess, I haven't tried it on any Fedora release. I run CentOS Stream on everything. Yes, shame on me, but I treat it all like a data center, right? That would be something for the community to check out, I think: give it a shot on Fedora 34, and give it a shot on Fedora CoreOS. It would be really interesting to actually do that on FCOS.

Definitely. That's something to explore. We can actually talk about that at the next working group meeting and see if we can get volunteers.

I'll volunteer to try it on FCOS.

Yeah. Well, one thing you'd need to do to run it there: if you're running FCOS on bare metal you wouldn't need this, but if you're going to run it on FCOS virtualized, you will need to inject into your Ignition config the settings that enable nested virtualization.

That would be cool, yes. All right, let's see what we have here; did I miss any? There was a desire for a little more clarity on operators. So: operators that have Red Hat RPMs need the official pull secret; any other operators can be installed with the fake pull secret. I hope that clarifies it. We don't actually have a generated list where you could compare the two. That wouldn't be too hard, though; maybe we could automate a script that does that. But there is a significant difference in the operators available between using the fake pull secret and using an official Red Hat one.

And I think that's all the questions that have come in. I want to take this moment to talk a little bit about something Daryl touched on, which is sort of the impetus for having this streaming session: obviously, to spread the knowledge and get folks familiar with this process, but in particular to get people volunteering to help the working group keep CRC updated and available on a continuous basis. Daryl is a Red Hat employee and is really busy, and he has graciously provided all of this, but we can't rely on that, and I think there's benefit to multiple people working on a project, not just in terms of time and resources, but for the possibilities for innovation, for building out some CI. That's something else that's really needed here: it doesn't take a lot to script this out into a CI pipeline, and the working group will be looking at this in the coming weeks. It would be nice to have folks who have been on this call, or who are going to be watching the video, reach out to us at the working group. Diane can post our information in the chat. If you're not familiar with us, or you're watching on one of the streaming platforms or just happened onto this on YouTube, get in touch with us; come to our meetings. We really do want folks to contribute to this so that, for example, as Daryl pointed out, we can build with the nightlies. Let's really test this, so that for 4.8, 4.9, and into the future, OKD CodeReady Containers is working, known to be working, and available for the latest releases. So please do get in touch. This video will be archived, and we'll be talking about what came out of this meeting at the working group session. We'll also be improving the documentation: we'll take what Daryl does and keep building on it, and our goal is that whoever contributes to this can also contribute to the documentation, so that it can be easily handed off to other community members in the future. Diane, do you have anything else you wanted to add?

Well, I just wanted to put out there that I put in the link to the OKD working group Google forum; you can subscribe to that and get notifications of upcoming events. We usually meet on Tuesdays at 9 a.m. Pacific time, noon Eastern. You can find all the details on okd.io, and that's a great place to do it. We'd love to have folks who are watching this and participating (thank you all for your questions) take a look at the Upstream Without a Paddle blog and the links in here; if you want to post that URL again in the chat, that would be great. I did earlier in the session. We'd love to have you test this out, give us your feedback, let Daryl know what's missing, and help make some official documentation for this build process, outside of the blog, on the okd.io site. Anything that's missing from this process (there were a couple of questions about docker-compose and such) is an opportunity for you to contribute to this project, and there are more than enough folks to help coach you and mentor you in creating that content, if that's your preference. We'd love to work with you, make that happen, and get that documentation up and available. So really, take a look at okd.io, wander around the blog post, test it out, and join us on one of the OKD working group calls coming up soon.

Real quick, if we've got a couple more minutes, there's one more question that popped in on the chat, about the resource requirements. Do we have a few more minutes?

Oh, we've got as much time as you need.

All right. So, Carlos Santana; yeah, love your guitar playing, Carlos.

Please do the question.

I will admit I like Joe Satriani, but you're still surfing.

Yeah, there you go. Anyway, yes. So, CRC is admittedly phat: p-h phat because it's awesome, but also plain fat because it requires resources on your machine. This is also an area where I think the community could help out, because obviously the vast majority of the engineering resources on OpenShift are focused on the data center. Because CodeReady Containers is a single node cluster, it's a single node cluster that's still being built from pieces that are more or less unmodified from how they would be expected to operate in a data center. And I have a hypothesis that the community could dig into that: take a running CRC, probe into it a bit, and find the worst offenders, because they're operators, right?
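(A first pass at finding those worst offenders could be as simple as this, once the cluster's metrics are up:)

    oc adm top pods --all-namespaces --sort-by=memory | head -20
    oc adm top nodes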
We can get in there, find the operators that are taking up the most resources, and dig into those operators a bit, because, again, all of their code is also out on GitHub. My hypothesis is that it's really just a matter of some configuration around some of those operators, because they're currently sized to handle cloud-scale workloads, and on your laptop they don't need to be. If they have quotas set, minimum resource requests or something, those may not need to be as onerous for running on a laptop. My hypothesis is that if the community rallied around this a bit, we could figure out some tuning that can be done that isn't obvious from just a YAML config file or something, because all of these operators are built for cloud scale, and a lot of the resources they're asking for aren't necessarily exposed as something you can configure from outside the operator. So it's going to take a little work, but I would love to get in and crank this down a bit, for two reasons: one, because it would be nice to run it on a laptop that didn't have to have 32 gigs of RAM; and two, because as the ARM support starts to get out there, it would be really cool to be able to run it on one of these little guys with 8 gigs of RAM. So yes, I would love to get into that and make this thing run a little slimmer, because I'd also like to have an OpenShift cluster running on some 8-gig Pis.

Oh yes, that would be sweet. So, we do have another question that came in: once you're done with CRC, how do you add worker nodes or additional masters?

Well, that's the thing: CRC is meant to be minimal. But I'm going to hypothesize a bit here. This, on this screen, is a single node cluster, and you can add worker nodes to a single node cluster. I don't know of any reason at all why you couldn't just stop after running snc.sh, not tear it down and create the QCOW image, and just run the single node cluster; then, with libvirt, create some additional worker nodes. No reason at all that I can think of why you wouldn't be able to do that, and this is a very quick way to get an opinionated single node cluster up and running. You would need to put HAProxy in front of it, because (remember, we created the firewall and the zone and everything) it is listening on an internal network, this 192.168.122.0/24 network right here. So to get to it from off your workstation, you need to throw HAProxy in front of it, with a couple of virtual NICs, going from the hidden 192.168.122 network to whatever your home network is, so that you can reach the cluster. But yeah, a single node cluster with added workers should be doable.
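(A minimal sketch of that HAProxy front end, doing plain TCP passthrough. The node IP is an example; use whatever address libvirt actually handed your VM on the 192.168.122.0/24 network:)

    CRC_IP=192.168.122.10    # example address on the hidden libvirt network
    sudo tee -a /etc/haproxy/haproxy.cfg <<EOF
    frontend api
        bind *:6443
        mode tcp
        default_backend api_be
    backend api_be
        mode tcp
        server crc ${CRC_IP}:6443 check
    frontend ingress
        bind *:443
        mode tcp
        default_backend ingress_be
    backend ingress_be
        mode tcp
        server crc ${CRC_IP}:443 check
    EOF
    sudo systemctl restart haproxy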
All right, I think that's all of the questions we have, and we're just about to the hour, which I think would be a good time to stop. Oh, we've got one more coming in. Let's see: with the new 4.8 edge bootable single node cluster, what's the difference with CRC? Is the bootable ISO a smaller footprint? In other words, what's the comparison here between CRC and the 4.8 edge bootable single node?

I don't know yet, because I haven't built one for OKD. But my hypothesis, because the bootstrap node does not live in the resulting QCOW image, is that this in and of itself doesn't change the footprint of CRC; CRC will still have the same footprint. What is changing is that the creation of that single node cluster will no longer require the additional resources on your machine to stand up that bootstrap node and then tear it down; it will be a more streamlined stand-up of the single node cluster.

Excellent. Any other questions? Well, looks like we're good. Let's hope that out of all the effort Daryl has put into writing the blog and walking us through this, a few of you out there will be enticed to volunteer to start helping build these. And especially, if someone is so inclined to do it for the nightlies and has some resources to automate that, that would be super, super awesome. So please do join us, with Jaime and me and the rest of the OKD working group members, next week; we'll probably discuss what we learned here and get your feedback there. We'll also be posting this up on YouTube; the live, unedited version will be there almost immediately once we stop talking. So, Jaime, any final words? Daryl, any final words?

Come to the working group session! I'm never able to, because I have hashtag-day-job meetings, but sometimes I get to come.

Yeah, again, encouraging people to show up at the working group meetings. They're very casual, and they're fun, actually, because folks get to talk about what they're experimenting with, what they'd like to see, and their projects, all different kinds of projects. If you're interested in something other than building CodeReady Containers, we have a variety of other projects that could use some helping hands, and there's always somebody willing to talk with you and share their stories too. I put in the link for the upcoming OKD OpenShift Commons gathering at KubeCon, which is going to be hybrid, so virtual and in person, and John Fortin, who is an amazing contributor to the conversations and the workloads around OKD, is going to share the Market America SHOP.COM production case study for OKD with us, so I'm psyched to hear that story. We've got lots of people who are using OKD in lots of variations of ways, not just for your home lab anymore. So take a risk, answer the call to action, and check out the CRC build process. We'd love to hear your feedback.

Excellent. All right, thanks again to both of you for making this happen. Much appreciated.

Awesome. All right, everybody, that's the message: let's see if we can coerce a few more people into doing this, and maybe find someone to put some resources toward deploying it and making it automated.

Absolutely.

All right, see you all on Tuesday. All right, take care. Take care. Bye.