Yeah, and hello everybody, if you're listening and can see us on the stage. We'll be starting at the top of the hour. We're live now, just getting our sound and everything else adjusted, and we're going to give everybody about five minutes to come in and hang out. Hopefully you can all see us and hang out with us for the day. We're really excited to do this first broadcast of the OKD testing and deployment workshop, and we're really glad that you've joined us.

We've got myself (I'm Diane Mueller), Vadim Rutkowski, who's calling in from Brno, and Charro Gruver. Where are you in the universe, Charro? North Carolina? Virginia. Roanoke, Virginia. And Jaime Magiera is with us from the University of Michigan. There are a number of other guest speakers here today who will introduce themselves as they come along.

We're just excited to try out this new platform, so bear with us if we have slight technical difficulties. It's a new one; we used it for DevConf.CZ with good success. We have some really interesting deployment configurations that we wanted to share with you, and we really want to hear about your deployment adventures and requests, so this is going to be some fun today.

I'm in the event chat right now, so if you can see us, all four of us: Sri, can you just make sure and let me know that you can see all four of us? Yep. All right, perfect. So now we can't say anything bad about any of our friends and family, but we never would do that anyway. We're going to wait until the top of the hour, because we did say this thing was going to start at nine o'clock, so we're going to chat and hang out. And all four now, perfect. Jaime dropped out for a moment. Perfect.
All right. While you are getting ready, if you haven't already joined the mailing list, I'm going to pop that mailing list link into the chat so you have something to do while you're watching us. That's where to go if you want to get on other events and come to the working group meetings. And I would be remiss if I didn't also invite you to join OpenShift Commons, so I'm going to grab that link and throw it into the chat as well. There you go. So if you haven't joined OpenShift Commons, you can go there and do that.

All stages, Kareem, are going to be recorded. Everything we say and do today will be recorded, yes, including the sessions, and I will upload the recordings to the YouTube channel as soon as Hopin, which is the platform we're using, gives them to me. So you'll be able to watch them and play them back. I'll say this again in a minute or two, and we'll show you where you can find things, and then get started.

So I'm going to share my screen now, guys, and we're going to rock and roll and see if I can do this whole screen-sharing thing. All right, you should all see the OKD working group site here, okd.io. I'm assuming everybody can hear me. I cannot see the chat right now, Jaime, since I'm in full-screen mode, so I'm just going to motor on here and expect someone to interrupt me if something goes south. Jaime is my co-organizer; Jaime Magiera from the University of Michigan has kindly offered to be in the background here with me today, so I'm totally grateful for Jaime's support.

If you don't know me, I'm Diane Mueller. I'm the director of community development here at Red Hat for the cloud platform group and one of the co-chairs of the OKD working group. A little bit about the logistics today: we are using a new platform, Hopin, so you have the ability to ask questions in the chat, and we have the ability to share our screens. We've set this up.
So there are two hours in the morning where we're going to walk through what I'm telling you now: a little bit about what OKD is, from Charro Gruver, who was formerly with Old Dominion Freight and is now a Red Hatter. Thank you for joining us, Charro. Then we're going to get a walkthrough of the OKD 4 release process and a bit of a tour of GitHub and where the resources are that we're going to be using in the release processes, by Vadim Rutkowski, who's a Red Hatter and an engineer here. Then we're going to get a walkthrough of an installer deployment to vSphere using the UPI approach, by Jaime. We're not actually going to run it; we're going to walk through the bits and parts of it so everyone in all of the other sessions can see where they can find the resources. Then we're going to bring on Joseph Meyer, who's been an active community member, one of the great folks who's been working on the okd.io site and who has just turned on the blogging capabilities.

Someone asked in the chat already where things will be after this: I will post a blog on okd.io with links to all of the YouTube videos, as well as to our Google group, and I'll send a note there afterwards with any slides and YouTube videos for all of this.

Then we're going to ask you to divide and conquer yourselves into the breakout sessions, and we have four of them set up today. We're going to do the deep dive into vSphere UPI, which Jaime and Joseph are going to lead. Then, and hopefully it's not TBD (a typo on my part), Andrew Sullivan and Justin Pittman from Red Hat are going to walk through bare metal UPI with you. Charro is going to lead the single-node cluster deployment workshop with Bruce Link from BCIT, the British Columbia Institute of Technology; it's awesome to have Bruce with us today. And then there's going to be an interesting one on the home lab setup. Many of you might have read Craig Robertson's home lab blog post, so he's going to walk through a bit of that, and then Sri Ramanujam from Dado is going to walk through his version of that.

After each of these little walkthroughs there will be Q&A via chat, and we are going to answer your questions. We're going to ask you, if you have a home lab, how you're setting it up; if you have bare metal, what you did differently. The details for all of these things are in a set of documentation maintained by Michael McCune (elmiko), who's also in the background here in the chat. He has graciously set it up in his repo, and we're going to be trying to get you to log issues against the docs and make pull requests, or, if you have another home lab or a different approach to something, to make a request to add that documentation. We're trying to fill out our install and deployment documentation, so my end goal here is to get all of you to participate in the working group and to help us drive our documentation updates. So let's go forward one more.
So today, like I said: in the Hopin chat, please ask your questions, and the moderators (myself, Vadim, Jaime, and others) will be popping around in each of the sessions; we're all empowered to do that. We will try to relay your questions to the folks delivering the content and get them answered.

Then we'll do some deep dives. That elmiko repo that you see up there (I'll post it into the chat after I stop sharing these slides) is where the working drafts of the OKD deployment configuration guides live that we're going to be using today. The official OKD docs live at docs.okd.io. After today, we hope that all of you join the Google group, if you haven't already, and come to our meetings. If you haven't found them already, in the Kubernetes Slack there are two channels, openshift-dev and openshift-users, where you can ask questions. And as always, we would love it if you joined OpenShift Commons. OpenShift Commons membership is organization-based, so if you go to the participants list and you see your company's name already there, your company has already joined, and I can just add you in and add you to the Slack channels there as well, easily. If your company's name isn't there, just click on the join form and fill it out, and we will get you rocking and rolling.

So with that, I'm going to stop talking (I never stop talking), but I'm going to stop sharing, and we're going to switch over to Charro Gruver; let's see if I did that right. Now Charro is going to stun us with "What is OKD?" Take it away, Charro. And you're almost in full screen. There you go.

All right. Well, thank you, everybody, for joining us today, and good morning, good afternoon, and good evening, wherever you are in the world. Welcome to the testing and deployment workshop.
I'm going to give you a quick overview of OKD. We'll start with an overview, talk a little bit about the current state of OKD, spend a little time on operators and the OperatorHub, and finally leave you with a whole bunch of links and references for how you can get in contact with us.

So, OKD: what is it? It is a community distribution of Kubernetes, but it's actually more than that. It's a community distribution of Kubernetes that is built off of the OpenShift code base. It is the same code that you are running in your data center or on your cloud provider if you are a Red Hat OpenShift subscriber. That code base is what we build OKD from, with very little, if any, variation. The only real variation is that rather than running on Red Hat CoreOS, we're running on its upstream, which is Fedora CoreOS. So you've got the OpenShift code base with Fedora CoreOS as the underlying operating system. And as Diane mentioned earlier, you can reach us at okd.io to find documentation, reference materials, and CodeReady Containers built from OKD, which I'll talk about in just a minute.

Like I said, this is a community distribution, which means it is built and supported by a community, some of whom you're seeing on your screen right now. The whole premise behind OpenShift, as sort of a Kubernetes-plus distribution, is that automation is king: automation for installation, automation for patching and updates, and automation for resiliency and recovery in your data center or on your cloud platform. Like any other Kubernetes distribution, its heart and soul is the orchestration of applications and services that provide value for your business or your organization. Underlying that is base Kubernetes: the platform, the cluster management, the security, the monitoring, the embedded registry, everything that you expect from a Kubernetes distribution, with a twist of additional automation provided by the OpenShift plus-plus.

As I mentioned, Fedora CoreOS is the underlying operating system on which the whole thing is built, and Fedora CoreOS itself brings a lot of what provides the automation and the real resiliency to OKD as a Kubernetes distribution. You can run this on just about any platform you can imagine. We're not quite to arm64, but you can bet there are people who care about arm64 and would love to see this thing running on the edge. Right now you can run this on all of the major cloud platforms: Amazon Web Services, Microsoft Azure, and GCP, the Google Cloud Platform. You can run it on OpenStack, you can run it on oVirt, and, as some folks are going to demonstrate today, you can run it on VMware and on bare metal.

So let's talk a bit about where we are today. We've actually come quite a long way since the major shift a couple of years ago from OKD 3.11 to the OpenShift 4 that OKD 4 is built on. We are currently in our 4.7 release, with 4.8 not too far out on the horizon. We've been taking a lot of community contributions that are improving this platform and making it much richer as we continue to evolve. We've got active collaboration established between the Fedora communities, for our underlying operating system, and the folks who are contributing to the OperatorHub, which we'll talk more about in a couple of slides. There are quite a number of bespoke operators now available for OKD to provide all kinds of value-add for your clusters. And one of the things this platform really allows you to do, unlike a subscription-based OpenShift, is early adoption of upcoming technologies. Especially with the underlying Fedora CoreOS, you get a preview of what's coming down the road, which sometimes can bring an extra level of excitement, but always brings an extra level of functionality that you can take advantage of. And there's one more recent thing, from probably six months ago.
I believe it was this summer that we finally got CodeReady Containers released for OKD. CodeReady Containers for OKD is based on the CodeReady Containers code base for OpenShift. Just like everything else that we do at Red Hat, it's all open source and it all accepts community contributions. It enables you to run a single-node OpenShift cluster on your laptop or workstation, so it gives you all the goodness that you get with an OKD 4 cluster and Fedora CoreOS on your local workstation. You just have to add your code.

One quick note on that: the CodeReady Containers release that is currently hanging off of our okd.io site is still built on 4.6. If you're watching a recording of this video, it's very likely that the 4.7 release is out. For those of you watching us live, look for the 4.7 CodeReady Containers to be available within the next week or two, hopefully within the next couple of days. We're working on a couple of things that we have to get in place so that the build will run off of some fairly significant changes to the underlying Fedora CoreOS, and a couple of the operators in the code base need updates to support CRC. So look for that to come very soon.

Operators. Operators are what make OKD run. In fact, the very first thing that happens when you're bootstrapping an OKD cluster is that an operator takes control to coordinate the rest of the installation. Operators provide infrastructure as code, with intelligence behind it that monitors the state of the resources the operator owns and is responsible for ensuring stability and resiliency, as well as patching and updating. Operators are like a bundled system administrator that is always with you, always watching the application and ensuring that it continues to run. The core of OKD is built on operators: everything that provides the functionality, from etcd up, is controlled by operators.

But operators also bring value-add. If you need Rook Ceph as a storage provisioner, well, there's an operator for that. Your internal image registry is an operator. If you need a Kafka cluster, there's the Strimzi operator. If you need a service mesh, well, there's an operator for that too. So operators are a way to bundle the capabilities that give your applications the added richness, resiliency, and capability that you need, so that as a provider of software solutions, all you have to focus on is your code, and you let the operator take care of everything else.

The OperatorHub is where you go to retrieve these and install them into your cluster. When you have a cluster up and running, the OperatorHub will be there, and from the console you can navigate to it and go shopping. When you stand up an OKD 4 cluster, all of the operators that are available you will be able to install free of charge. There are no subscription-based operators in there; they will be the community-supported versions of the operators that you would see if you had a subscription-based cluster. So if you need Grafana or Strimzi or a service mesh with Istio, they will all be there and installable from the OperatorHub.

Now, another quick caveat on that: we are still working with several of the operator providers to ensure that the community version of the operator is available in OperatorHub. For some of them you do still have to go to the GitHub repo, or wherever the operator lives, to get the installation materials. But as we continue to evolve this ecosystem, more and more of those operators will be there, and when you get your cluster up and running you'll see that there's already a very, very rich set of operators available. Finally, we'd like to invite you to come join us.
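As a side note on the OperatorHub flow just described: behind the console's point-and-click experience, installing a community operator comes down to applying an OLM Subscription manifest. Here's a minimal sketch, written as a shell heredoc; the operator name, channel, and namespace are illustrative assumptions, so look up the real values in the OperatorHub UI or with `oc get packagemanifests`.

```shell
# Sketch only: subscribing to a community operator on OKD.
# The name/channel values below are hypothetical examples.
cat > grafana-subscription.yaml <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator
  namespace: openshift-operators
spec:
  channel: alpha                    # hypothetical channel name
  name: grafana-operator            # package name in the catalog
  source: community-operators       # the community catalog on OKD
  sourceNamespace: openshift-marketplace
EOF

# On a live cluster you would apply it with:
echo "oc apply -f grafana-subscription.yaml"
```

Once applied, the Operator Lifecycle Manager resolves the package from the catalog and installs it, the same as clicking "Install" in the console.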
This is a community-driven ecosystem, and we're very active: the OKD working group and the Fedora CoreOS working group. We have several members who participate in both, to a greater or lesser extent. You can find the OKD working group at the following links, which I'll leave up on your screen for a few seconds so you can screen-grab or type out whatever you need to find us. We also have a calendar that will give you access to our bi-weekly meetings. They are open, so please come and join us. The same goes for the Fedora CoreOS working group, which is available at these links. Again, please come and join us: if you're interested in edge networking, in seeing arm64 support, or in seeing this run on other metal platforms, come and join us and contribute your knowledge, your skills, and your time to our efforts. We are a community-led organization. The final thing I'll leave you with is a list of links where you can access our resources and come and talk to us. And with that, I'll say thank you, and I'll turn it back over.

All right. Well, next up we have Vadim Rutkowski, dialing in from Brno, and he's going to walk us through the current release process and tell us a little bit about where things live in GitHub. So, Vadim, you want to share your screen and take it away?

Hello, my name is Vadim, and I work for Red Hat. My day job is being an engineer for OpenShift, but in the evenings I like to tinker with OKD and other community distributions. Today we'll take a guided tour of how OKD gets built: where things happen, where the source is, and why we need such a complex CI to make it happen.
Our final step in the release process is uploading the binaries of the installer and oc to GitHub, but in order to make that happen we first need to build everything from source. As with everything else, things happen in the GitHub organization called OpenShift, where the code for both OCP and OKD is created. For those who are not familiar with the acronyms: OCP stands for OpenShift Container Platform, the product which Red Hat officially supports and provides subscriptions for, and OKD is the community distribution, which is closely related to OCP.

Let's have a look at one simple repo, which is called origin-branding. It contains several simple things. First of all, there's the Dockerfile, because all pieces of OKD and OCP are container images, and the Dockerfile explains how to build this one: we build from scratch, copy in the files in the manifests directory, and label the image as an operator. As Charro explained previously, everything in OCP is based on operators, and there's a top-level operator called the cluster-version operator; all it does is apply all the pieces to your cluster so that they assemble into an OpenShift release.

The manifest we're applying here is in fact a simple ConfigMap which instructs the console to use OKD branding and set a different documentation base URL. When the OpenShift console starts and finds this configuration, it applies the OKD branding, and the help message points at a different documentation base URL.

All of the changes we're looking at are built by CI. You can see we get green marks, meaning the tests passed, and everything is done via pull requests. For instance, here's the output of our CI.
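The branding image just described is tiny. Here's a hypothetical sketch of what such a payload-component Dockerfile looks like; the contents are illustrative rather than copied from the repo, so check openshift/origin-branding for the real thing.

```shell
# Sketch of a payload-component image like origin-branding.
# File contents are illustrative, not taken from the actual repo.
cat > Dockerfile.branding <<'EOF'
FROM scratch
# Manifests under /manifests get applied by the cluster-version operator.
COPY manifests /manifests
# This label marks the image as carrying CVO-managed manifests.
LABEL io.openshift.release.operator=true
EOF
cat Dockerfile.branding
```

The `io.openshift.release.operator=true` label is what tells the release tooling that this image contributes manifests to the payload.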
We're using the Kubernetes testing project called Prow to build, assemble, and manage the images created by CI. For instance, here we can see that we're using the origin 4.8 image stream as a base, because all the images we cut releases from are in fact stored in OpenShift itself, in the form of image stream tags. So we use 4.8 as a base, build the branding image, and tag it back into a temporary stable image stream. We don't run a particular test, because it's just one single ConfigMap; we immediately promote it back to the 4.8 image stream, and we promote just this single part.

That configuration instructs CI to build the image we've submitted and run tests if they are present. Other repos, for instance, might have additional end-to-end verifications for AWS; the same test for AWS but as an upgrade test, using a previous release and a new release with this image; GCP; vSphere; and so on and so forth, depending on what kind of repo it is. Once we're done, CI promotes it as an official image in 4.8.

So the whole release payload is stored as part of an image stream in our CI, and the CI also tracks that image stream and builds new releases out of it, meaning it compiles everything into one single image which refers to a bunch of other images, and fetches some metadata from them. For instance, if we run the `oc adm release info` command on a release image, we can extract URLs to each particular commit the images were built from, as you can see in this picture. And since CI can do that, users can also create their own release payloads based on the payloads we already release, replacing particular images with fixes or changes they would like to test.

Another important part of this is that OKD shares a lot of images with OCP itself. Using the project called Red Hat Universal Base Image, we are now able to officially release images which are built on Red Hat Enterprise Linux. Back in the 3.11 days, OKD used to be based on top of CentOS, but now we can use the very same image, based on UBI, and that very same image can be used simultaneously in OKD and OCP without any branding or legal problems. That allows CI to stop doing duplicate work: once a commit lands in the branch, we are able to build an image, promote it to OCP, and at the same time promote it to OKD.

For instance, this script shows us that the CoreDNS image in the latest origin (the so-called OKD) payload is the same CoreDNS image as in one of the releases of OCP. We fetch the pull spec for this image, and then we use the `oc image info` command to display information about labels, layers, environment variables, and so on. If we compare the output for these images, the only difference is the name they're being pushed under; all the rest is the same, because they are the same image.

Next, the most important part is our release-controller page. That's the front end of our CI, which detects when a new image lands in the image stream, so we can prepare a new release.
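The payload-inspection steps Vadim describes can be reproduced locally. Since they need `oc` and network access to the registry, this sketch only prints the command shapes; the release tag and the replacement image are made-up examples.

```shell
# Sketch: inspecting and rebuilding an OKD release payload.
# These commands need `oc` plus registry access, so we only echo
# them here; the release tag below is a hypothetical example.
RELEASE="quay.io/openshift/okd:4.7.0-0.okd-2021-02-25-144700"
{
  # Every component image plus the commit it was built from:
  echo "oc adm release info --commits $RELEASE"
  # Pull spec of one component, fed into oc image info:
  echo "oc image info \$(oc adm release info --image-for=coredns $RELEASE)"
  # Custom payload with one image swapped for a build under test:
  echo "oc adm release new --from-release=$RELEASE coredns=quay.io/example/coredns:fix"
} | tee release-commands.txt
```

The last command is the "build your own payload" trick mentioned above: `oc adm release new --from-release` takes an existing payload and substitutes the named component images.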
Let's look at something more greenish, like this one. I've built the diff against the previous release, and it shows that two new images have landed. Based on the metadata they contain, we can also build links to the pull requests which caused this change, and so on and so forth. These nightly images can eventually be promoted to stable: for instance, this latest 4.7 release used to be a nightly release with the same date. In order to perform a stable release, we have a small instruction for what to do; it basically boils down to mirroring the image to Quay, running some additional tests, and tagging it into the stable channel. All the rest is done by CI itself, which automatically updates the update graph and runs additional tests, and then we can confirm that users on previous releases can upgrade.

Since OKD is slightly different from OCP, as it uses different images in some cases, we also have a different issue tracker: we use the GitHub issues in the openshift/okd repo to track OKD-specific problems. However, most of the images we're reusing from OCP; the console, for instance, is copied as-is from OCP, so any kind of UI issue you're hitting would be reproducible on OCP as well. That means you would file a proper OCP bug, because you can be sure it happens on OCP too, and it would get direct developer attention to get fixed.

Once you're there, we also ask you (let me pick one of my favorite issues to show this) to provide a so-called log bundle, where a lot of logs from the failed installation or the broken cluster itself are collected. That archive should contain all the logs we need to find out what's happening: which part needs a fix, or what's missing.
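For reference, that log bundle is typically produced with the installer's gather command (for failed installs) or with must-gather (for a running cluster). This sketch just prints the command shapes, since both need a real cluster; the host names are hypothetical.

```shell
# Sketch: collecting the log bundle asked for in OKD bug reports.
# Both commands need a real cluster/bootstrap host, so we only echo them.
{
  # Failed installation: pulls logs off the bootstrap and control-plane
  # hosts and writes a log-bundle-<timestamp>.tar.gz archive.
  echo "openshift-install gather bootstrap --dir=install-dir \\"
  echo "  --bootstrap=bootstrap.example.com --master=master0.example.com"

  # Broken-but-running cluster: collects cluster state into ./must-gather.*
  echo "oc adm must-gather"
} | tee gather-commands.txt
```

Attaching the resulting archive to the issue is what lets the engineers see which part of the installation broke.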
After we're done, we also upload the client tools to GitHub and send a message to the OKD working group. To reiterate, in the end: OKD is a community distribution, which means all the images we're listing here have their GitHub repos, meaning you can review them, tinker with them, replace some parts, and collaborate with us to make it better. We make decisions on the working group calls; those are recorded by Diane and released to our YouTube channel. OKD 4 is no longer an upstream, midstream, or downstream of OCP 4: it shares a lot of images with it, but still has its own specially replaced parts, so we coined the term "sibling distributions" for this kind of relationship. And we rely heavily on automation, CI verification, and user feedback when we release OKD. That concludes my demo. Thank you.

Awesome, thank you, Vadim. And, you know, someone mentioned, just for future reference, that when you have a black background and you're showing something, it's a little blurry with the refresh rate, so maybe a bigger font size next time. We're all learning here too, so we'll figure it out. But that was great. If you have questions, folks, please ask them in the chat.

We're running pretty much on time, maybe even a little ahead of time, which is great. My theory is that this whole upfront section, which is going to set us up for the sessions, will take two hours. We may have a little extra time at the end, in which case we will all go grab coffee or more water, and some of us will stay online and answer your questions if you have them. Then (I'm on Pacific Standard Time) at 11 o'clock we will switch and the sessions will go live, and you will be able to join them. If you're wondering: I can't make the sessions start earlier with Hopin, so they will start on time, especially since some people are coming just for the sessions; we don't want to start them early.
So with that, Jaime, I'm going to bow out, because we can only have four faces on stage at a time, and let you do your talk and bring Joseph in, because he's got the talk after you. Then we will kick Charro off (I think I just picked you) when Joseph finishes, and I'll come back in. Just so everybody knows, this is the logistics of the day. So, Jaime, if you would like to share the screen and give us a tour of the documentation, we'll queue you up; take it away.

Okay, great. Well, thank you very much, folks, for joining this overall community gathering, and for joining the sessions later. For folks that are just tuning in: there will be four different sessions for the different types of installations that you can do. What I'm going to provide right now is just a quick walkthrough of an installation on vSphere with user-provisioned infrastructure.

What does that mean? Well, user-provisioned infrastructure means that instead of the installer configuring a load balancer within vSphere, or configuring the IP numbers or any of that, all of this is done with infrastructure that the user provides on the outside, before they run the installer. The prerequisites for that are basically handling DNS, DHCP, a load balancer, and optionally a proxy. Joseph is going to get more into the details of doing these specific things, but in short: you're going to need a DNS entry for the bootstrap machine; three entries for your master nodes (OpenShift clusters right now support three master nodes in the control plane, as it's called); an entry for each of the desired workers; entries for your API endpoint and your API-internal endpoint, which the nodes use to connect with each other; and then a wildcard DNS entry, `*.apps.<cluster name>.<your domain>`, so that once you've
deployed apps, by default they get the name `<app name>.apps.<cluster name>.<your domain>`.

To give you an example of that: for user-provisioned infrastructure on my end, I'm utilizing the DNS that is provided at the University of Michigan, which is a system called BlueCat running on Proteus. This is a way of very easily configuring DNS and DHCP, and you can see, for the demonstration cluster that you'll see more of in my session, basically you can set up your DNS and this is what it would look like: you've got your masters and your worker nodes at set IPs.

DHCP: I didn't fill in the details there, but this is something you'll want to do in most cases. The way OpenShift clusters work, you can do static IPs or you can do DHCP, but you cannot do both, and once you have chosen to go one route, you can't go back to the other. If you're going to do static IPs, you can do that by setting some kernel parameters with something called Afterburn: in the configuration of your nodes, you pass a configuration string that's handed to the kernel with your static IP. Or you can rely on DHCP on your network and whatever address is handed out. Alternately, I took a third path, which I'll get into in more detail in my session: using reserved DHCP and setting the MAC addresses on the nodes. There are some advantages to that for UPI that I'll talk about.

You're also going to need a load balancer on the outside, so that incoming requests to the API and the ingress get passed to the respective machines. In terms of load balancing, we've got a load-balancing proxy called a BIG-IP, from F5 Networks. In my configuration, I use a BIG-IP, which allows you to define pools of machines: here you can see the API pool, and this is the worker pool, and the load balancer sends requests to their respective pools. There's also one thing you don't see here, but that I'll show in more detail: you can do some health checks as well. For those of you that are familiar with the internals of Kubernetes, you know there are healthz and readyz REST calls that you can make to get the status of the nodes in your cluster, and in the F5 you can define those types of checks, so it will be performing those checks externally. An advantage of this is that if your entire cluster goes down and the internal notifications aren't working, you have an external source of notification and monitoring to see that; I'll get into more details of that in the other session.

Another thing that you will need is a proxy, if you're going to be on a private network.
This is something that OpenShift has been growing into: in version 3, I should say, there wasn't as much focus on or support for private networks, and that's been increasing. But if you're going to be doing a private network, you will need a proxy for the calls out of your containers once you have your cluster up, and also for the installation process, for pulling down the containers that are part of the installation. In terms of a proxy, you can use Squid. Squid is a freely available proxy that is very easy to set up and has a simple configuration file, and I'll be providing some examples of that in my session.

If you look at the documentation on the OKD website, there is a link to "Installing", and then subsections; here is the section on installing on vSphere, and under that, another subsection on installing on vSphere with user-provisioned infrastructure. That is what I've been working with, and it has a lot of great information.
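As a taste of how simple the Squid setup mentioned a moment ago can be, here's a minimal hypothetical squid.conf that only lets the cluster's subnet out; the subnet and port are assumptions, and you'd want to tighten the ACLs before real use.

```shell
# Sketch: a minimal Squid config for an OKD cluster on a private
# network. Subnet and port are hypothetical; harden before real use.
cat > squid-okd-example.conf <<'EOF'
http_port 3128
# Only let the cluster subnet use the proxy
acl cluster_net src 192.168.1.0/24
http_access allow cluster_net
http_access deny all
EOF
cat squid-okd-example.conf
```

On the cluster side, the proxy is then referenced from the `proxy` section of install-config.yaml (the `httpProxy`, `httpsProxy`, and `noProxy` fields), so both the installer and the running cluster know to use it.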
I would encourage folks, whether they're using the standard install or the user-provisioned install functionality, to check out the UPI documentation for the platform they're using. The reason I suggest that is that the UPI documentation shows you some of the things that are needed and some of the underlying details of an OpenShift install, and it can be really helpful for understanding how the overall process works. It's sectioned quite well and shows you what you'll need in terms of your nodes, creating the user-provisioned infrastructure, the ports you'll need, and so on, so definitely check this documentation out. One of the things they've done is break it out into several sections depending on the level of detail you want to control in your install. There's a section, "Installing a cluster on vSphere with user-provisioned infrastructure and network customizations", and that one, for example, will give you details about setting static IPs, disk partitioning, and some of the other higher-resolution manipulation of the install process. The install usually takes about 30 to 40 minutes, and in my session I'll be talking about how you can automate that process, literally to be able to just run a script that generates the necessary install files, loads them into newly created VMs, and then kicks off the OpenShift installer, so that you get something very close to a non-UPI installation experience, and actually some extras. Let me bounce over here to provide an overview of some of the files that are involved in a UPI installation. After you've generated what are called Ignition config files, you'll see a bootstrap Ignition config. Ignition is basically the metadata that you put into the metadata of the node to tell it to connect to the
bootstrap server, or in the case of workers to connect to the control plane, to download the necessary components to join the cluster. So you'll see multiple Ignition configs: for the bootstrap, for the masters, and for the workers. After you've run the installation there are some hidden files: an install log, and a state file that records the state of the cluster. Now there's one thing I want to point out for UPI installations that is true across the board, and it sometimes surprises folks: the openshift-install binary actually ingests and deletes your install config. You'll have a general install config where you configure the parameters for your cluster, and the documentation describes what you need to have in it. But when you run the installer, it eats that file up, so you'll always want to make a backup of it. I'm trying to find an example of it here... here we go.
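A minimal sketch of the backup habit described above, assuming a plain shell workflow; the file names and the tiny stand-in template are illustrative, and the actual `openshift-install` call is left commented out:

```shell
#!/bin/sh
set -e

# Stand-in for a real install-config.yaml template (illustrative content only).
printf 'apiVersion: v1\nbaseDomain: example.com\n' > install-config.yaml.template

# openshift-install ingests and deletes the copy it is given, so always work
# from a copy of the template in a fresh directory; the template survives.
mkdir -p cluster-work
cp install-config.yaml.template cluster-work/install-config.yaml

# With the installer binary available, you would then run (not executed here):
#   openshift-install create ignition-configs --dir=cluster-work
```

Keeping the template outside the work directory means every rerun starts from the same known-good configuration.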
You'll always want to make a backup of it so that you can duplicate the process again without having to do a lot of work. The tool that I'll be demonstrating, which I wrote, actually lets you have a template; it duplicates that template and goes from there, so you don't have to do anything by hand. And that is the overall process of installing with vSphere: basically, you deploy your infrastructure, you generate your files, you create the nodes with the metadata from those Ignition config files, and then you run the installer. That's a general overview, and if you want more specifics and want to see an automated example, then please check out the session that I'll be hosting with Joseph. And with that, I'm going to stop sharing myself, and we'll move on.

All right, we have successfully brought Joseph Mayer into the fold here today, and we are running on pretty good timing, so I'm going to let Joseph talk about some DNS and DHCP issues and best practices. Joseph, do you want to try unmuting yourself and sharing your screen?

Yep. Hello, I'm Joseph Mayer. I'm from Rohde & Schwarz; it's a German, Munich-based company. I've been working with OKD for more than three years now.
We started with OKD 3.10 and moved along the road over OKD 3.11 to OKD 4, which is where we are currently. OKD helped my company a lot in getting in touch with Kubernetes and gaining the skills for it, because vanilla Kubernetes is not an easy thing. We used OKD to learn all that, and now we are at a stage where we're moving parts of our Kubernetes clusters to OpenShift, for running more production loads and getting support from Red Hat. Now I'll try to show you a little bit about what I thought were the hardest steps in the beginning, if you start with user-provisioned infrastructure: DNS, DHCP, and an external load balancer. Can you see my screen share? I hope so.

Yeah.

Thank you. This is a diagram of the home lab I'm running here at home. I'm using VMware vSphere; I bought a license for it that's very suitable for home lab users. It costs around 150 bucks; it's the VMUG Advantage edition. You pay about 150 euros for a one-year license, which I think is pretty affordable for home labs. Why do I use vSphere at home? Because I like to have an environment in my home lab that's similar to the one I use in my company. It's running on a Ryzen PC; it's a very capable one, with 16 physical cores and 32 threads with multithreading enabled. You don't need that many cores, don't be frightened by that, but I like to have the possibility to add more workers and play with new things. One thing I tried at home is OpenShift Virtualization, based on KubeVirt, and this requires a little more horsepower than you normally have on your desk or on your laptop. So I have one PC for the VMware vSphere VMs, and I have another computer running as network-attached storage.
I'm using TrueNAS CORE, the community edition, but this year TrueNAS SCALE will go into general availability. I like that, because it will have a small Kubernetes cluster running on it where you can deploy your Helm charts, and I like to have some components outside of my OKD cluster, because I am constantly deleting and creating clusters to test new things. At the top of the image you see my DNS/DHCP server and also the load balancer component; they're running on a Raspberry Pi. Maybe you ask yourself why I'm not running them on a helper VM in my vSphere environment. I'm doing that because I'm also constantly deconstructing my vSphere environment to test things, and I'm also using the DNS and DHCP server for my whole home environment, not only for my home lab, so I need something that's running all the time, and the Raspberry is pretty fine for that. And I have a DSL modem router that's connected to the internet. The first thing you should know, if you want to set up a DNS and DHCP server at home, is that you should be sure that no other such server is running in your subnet. In my case, I had to turn off the DHCP server and the DNS server running in my DSL router. Well, I have to say "in between", because during the installation you need internet access, and you need DHCP and DNS if things go wrong, but once the custom-built servers are running, you don't need the servers in the FRITZ!Box anymore. What do you have to achieve here? On the VMware vSphere server there are a few VMs created during the installation process. Just for your information, I used the instructions from the GitHub repository of OKD.
It's located at github.com/openshift/okd. There are some guides there, including one for UPI on vSphere, and one of these guides uses Terraform, a tool that can stand up infrastructure and uses a domain-specific language for that; there's also a Terraform provider available for vSphere. I have seven VMs: one bootstrap VM, three masters, and three workers. You don't need that many VMs; this is just my standard setup. I don't want to worry about limited CPU and memory, I want to go and have fun, and that's why it's seven VMs. In the first step of the installation, the bootstrap VM starts creating a temporary, "fake" control plane. This takes only a few minutes, depending on how fast your internet connection is. If you have a local registry in your home lab, things can be faster, because of the improved network speeds you normally have there. The second step is that, in addition to the bootstrap node, you have your master nodes. The master nodes are constantly polling, through the load balancer that's running on the Raspberry, to get the Ignition configuration files from the bootstrap node. When the bootstrap node is at a later stage of its installation, it serves the Ignition file via a local web server to all the masters, which are constantly asking for it. Once they get the Ignition files from the bootstrap VM, they provision themselves. They normally boot once, at minimum, into a new version of the operating system: you start initially from the Fedora CoreOS VM template that's stored in vSphere, and beginning from that operating system version the VMs run, wait for the Ignition file, fetch the new OS version that's pinned for a certain OKD release, boot into it, and join the temporary control plane that's
running on the bootstrap node. Once the real control plane is running, in the next step the bootstrap node stops serving the Ignition file. The load balancer will see that and turn off the bootstrap communication. At this phase you could, in theory, delete your bootstrap VM, because you don't need it anymore. The worker VMs are running the whole time too, also fetching the Ignition file for the workers, this time from the control plane that's running on the masters. They're constantly polling for it, and once the control plane is set up, again a web server serves the Ignition file for the workers. The workers fetch the Ignition files, load the current version of Fedora CoreOS, boot into it, and finish the installation, and afterwards you have a running OKD cluster. To achieve that, the load balancer and the DNS/DHCP server have to be set up a little bit in advance. I created some documentation about this process. Don't get frightened, it's lots of text; I'll only sweep over it quickly. I used a lot of the standard documentation you can find on the internet; there's nothing special about it. Because I'm not only using the DHCP and DNS server for the home lab but for my whole home environment, I turned on dynamic DNS, so new devices automatically register themselves with the DNS server and I don't have to maintain a list there manually. Maybe you've seen the description of the network I have here: it's a /24 subnet. I have the IP of the router, which we'll see a few times, and the IP address of the Raspberry Pi running DHCP, DNS, and the load balancer. And I have two domains.
I have my homelab.net domain, where everything in my home is registered, and then I have a subdomain, c1 (c1 means cluster one), c1.homelab.net, where all my Kubernetes nodes, my OKD nodes, live. I have a DHCP range, and I use static IPs for the most important nodes, because I try lots of installation strategies and I like to have fixed IPs for the most important VMs. If I create dynamic nodes through MachineSets later, I can use DHCP for them, so I use a mixed scenario. First you do the usual things: I use Raspberry Pi OS for my Raspberry, I update the package list, and I give it a static IP. Then I install isc-dhcp-server; that's what I use for the DHCP server. I do the basic configuration with this file. The first section is for dynamic DNS; there's nothing special about it. This section here is served by the DHCP server every time a new node requests an address: the node's /etc/resolv.conf file will be filled from parts of these options. Here we have the definition of our subnet range, sorry, our DHCP range, and here are the static IP sections, where I use the MAC address that is configured by Terraform in vSphere to serve the VMs that are registering themselves, or rather asking for fixed IP addresses. I do that for the bootstrap, master, and worker nodes, and that's it for the DHCP server. The next thing is setting up the DNS server.
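The dhcpd.conf sections described above look roughly like this sketch; all addresses and MACs below are placeholders, and the dynamic-DNS section is omitted:

```conf
# /etc/dhcp/dhcpd.conf excerpt -- placeholder addresses and MACs
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.199;          # dynamic DHCP range
  option routers 192.168.1.1;                 # the DSL router
  option domain-name-servers 192.168.1.2;     # the Raspberry Pi (BIND)
  option domain-name "c1.homelab.net";        # fills /etc/resolv.conf on nodes
}

# Fixed addresses keyed on the MAC address Terraform sets on each VM
host bootstrap { hardware ethernet 52:54:00:00:00:10; fixed-address 192.168.1.10; }
host master0   { hardware ethernet 52:54:00:00:00:11; fixed-address 192.168.1.11; }
host worker0   { hardware ethernet 52:54:00:00:00:21; fixed-address 192.168.1.21; }
```

The `host` blocks are what makes the "reserved DHCP" approach work: each VM always receives the same address, so the DNS records never go stale.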
A few more files are involved in this, because I use BIND. I started with dnsmasq, but I wasn't convinced by its features, so I threw it out very quickly and use BIND. Again, you can find lots of information about it on the internet, and there's nothing special in my configuration. We have an access control list here, where I say every IP from my home lab subnet can access the DNS server. I also configure it as a forwarder: if a domain name is not known to the DNS server, it forwards the request to, I think it's the Google DNS servers, on the internet. And here I turn off a few security switches, because I had problems with them and didn't have the energy to find out how to make it really secure. I will improve that next time; it's a side task I gave myself. Here I define my zones. I have a homelab.net zone, and I have the zone where OKD will run its VMs, referencing a file where I configure the records as described in the official documentation. And I have a reverse zone set up, because it's best practice. You don't really need it for a home lab, but I'm using it because I wanted to try out how to set it up. This is the zone file for my home lab; in real life there are lots of entries here, because things other than OKD use this DNS server. And this is the setup for my reverse lookup, the reverse zone file. Here now is the interesting part, because here we have the zone file for c1.homelab.net, cluster one, and you'll see lots of records here. These are the records that are required by OKD to work. We have a wildcard CNAME here; everything under it goes to the load balancer. This is the internal API, and, as Jamie talked about, we have an external API record here.
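A sketch of what that c1.homelab.net zone file contains, with placeholder IPs and an illustrative SOA; the authoritative list of required records is in the official OKD/OpenShift UPI documentation:

```conf
; db.c1.homelab.net excerpt -- placeholder IPs; serial and timers illustrative
$TTL 3600
@          IN SOA   ns1.homelab.net. admin.homelab.net. (
                    2021010101 3600 900 604800 300 )
           IN NS    ns1.homelab.net.

lb         IN A     192.168.1.2        ; the Raspberry Pi load balancer
api        IN CNAME lb                 ; external API
api-int    IN CNAME lb                 ; internal API
*.apps     IN CNAME lb                 ; wildcard for application routes

bootstrap  IN A     192.168.1.10
master0    IN A     192.168.1.11
worker0    IN A     192.168.1.21
```

Everything user-facing (`api` and the `*.apps` wildcard) points at the load balancer, while the per-node A records exist so the nodes can resolve each other directly.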
We have the workers, the masters, the bootstrap node, and the load balancer node; that's pretty much it. The next section is the load balancer; it's the third and last component I have set up on my Raspberry. HAProxy is the load balancer. I like it a lot, because it's rather easy to set up and it's fast. Don't get confused by this first section; it's pretty much the default. You get a dashboard where you can see which backend nodes are available and responding, and which are not. Here we have the load balancer for the API. Here we have the load balancer for the Ignition configuration file server, because, maybe you remember, in the first step the bootstrap node serves the Ignition files, and afterwards the masters serve them. Once the masters serve the Ignition files, the bootstrap node stops serving them, and that switchover is controlled by the load balancer. And here we have the routes: port 80 for the HTTP load balancer, and the same for HTTPS. I've added all my nodes to these load balancers, because I like to move the OpenShift/OKD router around between the nodes to test things; that's why I have not only the workers in the list, but also the masters. If you reboot a node, you should check whether all systemd services are still running or whether errors are being thrown; you can use /var/log and syslog to troubleshoot if something goes wrong. But my experience is that if you follow this guide, it's not as much as it looks like, and normally it should work rather quickly. In the end you have external components: a load balancer, a DNS server, a DHCP server. It's a setup that I think is rather common in lots of companies.
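Put together, the HAProxy back ends walked through above look roughly like this sketch: TCP mode throughout, placeholder IPs, global/defaults sections omitted, and the HTTPS/443 front end mirroring the port-80 one. The `/readyz` health check on the API pool is one way, under these assumptions, to wire the Kubernetes readiness endpoint into the load balancer:

```conf
# haproxy.cfg excerpt -- placeholder IPs; global/defaults sections omitted
frontend api
    bind *:6443
    mode tcp
    default_backend api

backend api
    mode tcp
    option httpchk GET /readyz HTTP/1.0
    server bootstrap 192.168.1.10:6443 check check-ssl verify none
    server master0   192.168.1.11:6443 check check-ssl verify none

frontend machine-config            # the Ignition configuration file server
    bind *:22623
    mode tcp
    default_backend machine-config

backend machine-config
    mode tcp
    server bootstrap 192.168.1.10:22623 check   # drops out after bootstrap
    server master0   192.168.1.11:22623 check

frontend ingress-http
    bind *:80
    mode tcp
    default_backend ingress-http

backend ingress-http
    mode tcp
    server worker0 192.168.1.21:80 check
    server master0 192.168.1.11:80 check   # masters included to move the router
```

The `check` keyword on the port-22623 servers is what implements the bootstrap-to-masters switchover: when the bootstrap stops answering, HAProxy marks it down and the masters take over serving Ignition.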
I could imagine that's why I use this rather than the easier-to-set-up IPI installation method: because I want to try out the things that I also have available in my company. And that's pretty much everything I can tell you about that. Thank you, Diane, I'll switch over to you.

Perfect, and thank you for that. When we were discussing how to run today, one of the things we figured was generic to everybody's deployment was figuring out these bits, so Joseph, thank you very much for taking the time to walk us through your experience with that. We'd love to hear everybody else's experiences, and we'll probably rinse and repeat that a little bit during some of the sessions here. We have a bit of time right now, and I think what I'd like to do is take a few minutes to see if anyone's got questions in the chat. So if you have a question for any of the speakers, and if any of the people who have spoken so far, and others, want to jump back into the backstage (that would be Charro, I think, and others), that would be great if you could do that now. We're all, as I keep saying, learning this new system, so hopefully you don't have to log back out and back in to get to the backstage, but Charro, if you can join us again... I don't think Christian has managed to join, but if you have questions, that would be great. In the interim: Vadim, I know you showed off Charro's wonderful issue, with the good documentation and the right comments and tags and everything. Maybe you could take one more moment to walk through that, and talk a little bit about why that's a good issue, and then we'll wait and see if people have some questions in the chat.

So, the core of the problem is that there are different assumptions about what people try to deploy versus what they expect to happen. There are lots of errors along the way. There are simple errors, which we can prevent.
There are a lot of things from internal infrastructure which we have no awareness of. We hit this problem during the 3.x days, where we would ask people to show some random pieces of logs, show the package versions of this and of that; that took quite a while. In the end, for the OKD 4 and OCP 4 design, the goal was to collect the necessary information right on day one. So we came up with two different tools to have that provided to us for any issue. One of them is the log bundle collected by the installer, which basically gives us all the logs from the bootstrap node, and if masters are available, we also fetch their logs. Since all of that is centered around Kubernetes, we can collect a lot of information from the installer using Kubernetes primitives; for instance, the installer version is stored in a ConfigMap, so if it gets stored in the log bundle, I don't have to ask the person which particular binary they've been running, because it's been recorded already. That prevents a bunch of issues like "I think I ran the 4.7 installer, but in fact I made a typo and accidentally ran the 4.5 installer", and so on. That can happen, and no one is to blame, but what we want is actual auditing of all the events. Charro's issue already had a log bundle, which gives us a lot of information: which version are you installing, which installer has been used, what infrastructure is it running on, have the masters joined already? That saves a lot of time compared with asking the person directly and finding out the truth. So we've created a template in our issues where that is recorded. Of course, some issues don't require a log bundle; they're clear as day. For instance:
OKD clusters don't have a branding set up; that's pretty easy for us to fix, and you don't have to provide a bundle, though if you did, that would be very nice, of course. We're also working on making sure that the log bundle and `oc adm must-gather` archives don't contain any sensitive or private information. Something still might slip in, so if you could review the file and tell us "accidentally, some secret with my password to vSphere is being logged", that raises the bar further, of course. But if the issue has a log bundle, that gives us a lot of information, so we can get started without spending a lot of time chasing different details. That's basically it.

Awesome. I'm testing a poll here right now, and I may have blown it already by showing the results before I actually asked the question, so can one of you test that? Can you still vote in that poll, or do I need to recreate it?

No, I think the poll is still live; the poll is in the event, not the stage.

Okay, here it comes, somebody's testing it, so it's still live there. Perfect. There was one question from Jesper in the chat, and I'm not sure which one of you would like to attempt an answer, but it was: he didn't quite get the note on IPI, and what is the state of the OKD IPI vSphere installer? Is it fully supported? The docs are a bit sparse on the subject, as far as he can see. And that's why we're here today, too; docs are what we're all about right now. So, who would like to take that one? Vadim? Charro?

So, OKD as a community distribution aims to provide all the install methods and all the functionality of the OCP platform in a community build, meaning you would get all the install methods. There is a discussion about the Assisted Installer and so on.
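The log bundle is just a tar archive (the real one is produced by `openshift-install gather bootstrap`), so reviewing it for secrets before attaching it to an issue takes only a couple of commands. A sketch that builds a tiny stand-in bundle and then lists it; the file layout here is illustrative, not the installer's exact layout:

```shell
#!/bin/sh
set -e

# Build a tiny stand-in for a real log bundle (illustrative layout only).
mkdir -p bundle/bootstrap/journals
echo "bootkube sample line" > bundle/bootstrap/journals/bootkube.log
tar -czf log-bundle.tar.gz bundle

# List the contents before attaching the bundle to an issue...
tar -tzf log-bundle.tar.gz

# ...and stream individual files to stdout to eyeball them for credentials.
tar -xzOf log-bundle.tar.gz bundle/bootstrap/journals/bootkube.log
```

Grepping the streamed files for strings like `password` or your vSphere credentials before uploading is exactly the review step suggested above.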
So that's definitely on the table. However, there are bugs. For instance, the Azure installer is in very poor shape, mostly because Red Hat, as a company, can communicate with Microsoft, as a company, and add RHCOS to the marketplace, so you can start a new RHCOS image from scratch and that doesn't take time. OKD, however, is based on top of Fedora CoreOS, meaning Fedora, as a community, has to talk to Microsoft, as a company, and convince them that Fedora CoreOS is a viable distribution that should land on the marketplace. That's the core part of the installer; other parts are absolutely there. So what users have to do right now is upload the image manually and start a temporary VM, so that they can use the temporary image they've uploaded, and run the Azure installer from there. From that point, everything goes just as if it had been started on GCP or AWS or anywhere else where we already have Fedora CoreOS images uploaded. But this initial blocker, that we don't have Fedora CoreOS uploaded to the Azure marketplace, is really stopping us now. When it comes to vSphere: as you can see, vSphere is incredibly popular here, and we've already added a vSphere test for every single nightly. It may look totally broken right now, but what we're seeing is a limitation of our CI, which we'll be fixing with the infra folks. It also uncovered another blocker: OKD in particular, unlike OCP, uses a lot of images from Docker Hub, and eventually the rate limits hit us and some tests fail; the test which exercises the Samples Operator, with its CentOS 7 images, is failing. So while we fix that it may look entirely broken, but vSphere IPI is one of the biggest goals in our CI, and every time we propose a change to OKD, one of the first questions we ask is: will it break the bunch of users who are using vSphere IPI? When it comes to UPI, we have a test for that too, but UPI is always different for everyone, by design, because the user has to come with their own infrastructure, and what we have in our CI test might be
entirely different from what you might have in your home lab or company setup. So while the test for UPI might show that things work, it doesn't necessarily mean they'll work on your infrastructure; that's how UPI works. So we're not rushing to add more UPI tests, because while CI might show everything's perfect, it doesn't mean users are actually succeeding. On that front, we're relying on direct community feedback, "you broke static IPs", for instance, and so on; we collect logs and try to figure out whether it's an infrastructure problem or an OKD problem.

We had a question from Mike McCune a little earlier in the stage chat that I wanted to get to. Mike asked: "Jamie, the DNS records you reference, are those just A records, or do you add the pointer records as well?"

So, this is interesting, and it actually brings up a topic that's been particularly relevant the past couple of months. In my vSphere UPI setup, I am using both forward and reverse records in my DNS configuration. The reason is that I was utilizing reverse lookups from the nodes' OS to get the host name, the fully qualified host name. At, what was it, the end of November,
beginning of December, an issue was introduced upstream in Fedora where the precedence of the methods for determining the host name was rearranged, and the one I relied on, reverse DNS, was pushed to the back, while a mechanism further up front just named the node "fedora". So you ended up with a bunch of nodes all host-named fedora. There's a workaround for that right now, and there's also a script in my repo to go out to the nodes and fix their host names. But yes, I still do rely on reverse DNS, and I think that's something that should be supported; a fix is working its way through the system to address the issue that was introduced. Vadim might have more info on that as well.

Yeah, it basically boils down to a NetworkManager bug. The problem here is that we, the OKD community, have our own goal of deploying a fully functioning cluster, but Fedora CoreOS has a broader scope, meaning if they break us, we still have a voice, but our voice is not the only one; we have to carefully clarify why this is important for all Fedora CoreOS users. In OCP the situation is entirely different, and much simpler: RHCOS is designed solely for OCP, so if OCP is broken, the RHCOS folks have to say "yes, sir" and fix it. In a community, things are much more complex, so it might take some more time. But I think we've had a great experience with NetworkManager in particular: we've reported a bunch of bugs to them, they're very responsive, and we can skip a few layers here and go straight to Fedora Bugzilla and ask them, because they seem to be quite familiar with what Fedora CoreOS is and what the OKD use cases are, and the amount of testing we provide is very helpful for them as well.

I have to unmute myself. Bruce and Joseph are having a little back-and-forth here about whether or not Joseph has used HAProxy checks against the readyz endpoints. I'm wondering, Joseph, if you want to pop back onto the panel and talk. There we go.
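The workaround mentioned above amounts to asking DNS for the PTR record and applying it as the host name. A sketch, assuming `dig` is available on the node; only the string handling actually runs here, and `fqdn_from_ptr` is a name invented for this example, not from the script in the repo:

```shell
#!/bin/sh
# dig +short -x <ip> answers with a trailing dot, e.g. "master0.c1.homelab.net."
# fqdn_from_ptr strips that dot so the result is usable as a host name.
fqdn_from_ptr() {
  printf '%s' "$1" | sed 's/\.$//'
}

fqdn_from_ptr "master0.c1.homelab.net."   # -> master0.c1.homelab.net

# On an affected node you would then run (as root, not executed here):
#   hostnamectl set-hostname "$(fqdn_from_ptr "$(dig +short -x "$node_ip")")"
```

This only helps, of course, if the reverse zone is maintained, which is one more argument for keeping PTR records alongside the A records.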
I'll add you in. Here you go. And Bruce, if you want to join too: since we're not sharing screens, we can have more people in.

That was a short answer. The other secret reason I wanted that: you're also having a side discussion there, which I can see, around Azure, with John Fortin. So maybe, Bruce, if you want to join too, I'll add you to the backstage; unfortunately I can't add John Fortin. But Joseph, if you want to reiterate a little bit about what you found with running on Azure?

It works. You don't have to change anything in the installer, no tricks. You just need a fast internet connection, because Fedora CoreOS, the base image, is not available on the Azure marketplace. You have to bring it to Azure, and this means that the installer has to download it first locally, where you run your installer, expand it first, to I think eight gigabytes, and then upload it to Azure, and this takes more than half an hour on my side, at least; I've since upgraded my internet connection. Normally the way to go is to run the installer in an Azure Cloud Shell, because the internet speed, if you are inside Azure, is much higher than you normally have at home, and there I always get a 100 percent success rate installing OKD on Azure. It works pretty well. So the only change from a normal installation is that you have to download the Fedora CoreOS image, and I really don't understand why that is. But I've accepted it, and that's also a reason why we use OCP, Azure Red Hat OpenShift, instead of OKD on Azure, because this doesn't seem likely to be resolved in the short term.

All right, cool. And we're having a little technical difficulty getting Bruce into the backstage here to ask his questions.
So we'll answer his questions there. All right, folks, I did run a poll, which is, you know, always fun for testing out the functionality here, on how people are using OKD, and it's pretty much 50/50 between production and home lab use, which I think is interesting. I love all you folks who are using it in production: John and Jamie, and Bruce, who's coming into the backstage, and there's Joseph. You guys really are... we love you. And the home lab folks are very interesting to me too, because they give us some really nice feedback and help onboard and train people, so that's great. Nobody's experimenting at their company, which is what I would be doing with OKD; you know, once I'd seen all the issues and everything going on, I think I would be reserving it for some experimentation. And somebody is using it for something else, so if you're in the chat, whoever said "other": what are you using it for? I don't see an answer to that, and I can't see who it was.

And just to clarify what I'm using it for: I'm actually using it as a development platform for folks, for developers, at the University of Michigan, to get familiar with OpenShift and to test and deploy their applications. I'm also doing builds, testing the build process, so I have a separate OKD cluster that's just getting rebuilt all the time to do tests of installations and various things.

So, Selatin just nodded; he must have been the "other", and that was his answer: not in production, but a dev/test environment, which is kind of the experimentation phrasing I was thinking of. It would really be a great place to do your dev and your testing, so that's awesome. And we do know there are quite a few people who are using it in production.
I'm always surprised; maybe that's just because I'm so risk-averse myself. But I've seen a lot of use like Jamie's describing, and Bruce's, if he manages to get in, at .edus as well. I saw there were a few other folks here, I think someone from one of the Hong Kong universities and other places, and that's also a nice way to get people trained up on using containers and understanding cloud-native technologies. There have been a lot of .edus that have been consistently in the community for a long time; you know, I've been working on OpenShift since it was Origin, now renamed OKD, and the .edu space has really consistently been a good testing ground and training ground for people. So that's wonderful. And it has been a wonderful collaboration with the Fedora community. I can see a few things in the chat where we've tended in the past, maybe, to blame Fedora kernels for being buggy when it was something on the OpenShift side, and I think the collaboration that goes back and forth between the OpenShift engineers, the community resources in OKD, and the Fedora folks has really been amazing and very healthy, so we're really kind of excited about that. Now Carl is asking: what would you use for production orchestration, if not OKD?

What do you mean by production orchestration? Do you mean for containers? I'd use an OCP cluster with a subscription to Red Hat, but I might be a little bit biased.

Yeah, it's interesting, because OpenShift is, as Charro said in the intro on "what is OKD", really Kubernetes plus-plus. There's a lot more in it than just DIY Kubernetes, and so what we're seeing is... I like to think of DIY Kubernetes as the pipeline for people who want to use OKD. You go and deploy Kubernetes, and then you realize you don't want to manage that.
And the Operators concept really has changed the game, I think, in deploying, installing, and managing this container orchestration. And Vadim has some set personal opinions too.

So, I think a lot depends on what purpose you are filling with OKD and what your use case is, basically. I don't work in sales, but from what sales told me, where they always start is: what goal are you trying to complete? It doesn't mean we would start selling you Ansible or OpenStack or whatever. What matters is that you would succeed, because that ensures the collaboration would be long term. It's one and the same for our community. Say you would not use OKD; that's entirely fine. But we're interested in which goals you would like to have completed, because OKD might fit in there, or another tool might be useful. For instance, I run a plain Fedora CoreOS image, because it already has Podman, and if I don't need to change it a lot, I can just run a couple of containers even there. It gets updated automatically, and you probably don't need a complex container infrastructure and management layer for that. So we would need more details on "why not OKD," of course, but in general you might want to pick OCP because it has support, which is basically the only difference between OKD and OCP.

Yeah, in all seriousness: I've only been at Red Hat since August of this past year. At my previous employer, we actually did run OKD clusters, a lot of OKD clusters, in our lab environment. And I used it in one way very similar to how Jamie's using it at the University of Michigan: it was a platform to teach developers in a safe environment, where they could destroy the entire ecosystem and nobody got hurt.
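The plain Fedora CoreOS plus Podman setup Vadim mentions, a single auto-updated container with no cluster at all, can be sketched as a systemd unit like the one below. This is only an illustrative sketch: the unit name, container name, image, and port are placeholders, and `podman generate systemd --new --name myapp` will produce a canonical version of the same thing.

```ini
# /etc/systemd/system/myapp.service  (hypothetical name)
[Unit]
Description=Run a single app container under Podman
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
Restart=on-failure
# Remove any stale container from a previous run
ExecStartPre=-/usr/bin/podman rm -f myapp
# The autoupdate label lets `podman auto-update` (driven by
# podman-auto-update.timer on Fedora CoreOS) re-pull the image;
# PODMAN_SYSTEMD_UNIT tells it which unit to restart afterwards.
ExecStart=/usr/bin/podman run --name myapp \
    --label io.containers.autoupdate=registry \
    --env PODMAN_SYSTEMD_UNIT=%n \
    --sdnotify=conmon \
    -p 8080:8080 \
    quay.io/example/myapp:latest
ExecStop=/usr/bin/podman stop myapp

[Install]
WantedBy=multi-user.target
```

Enabling it would then be `systemctl enable --now myapp.service` plus `systemctl enable --now podman-auto-update.timer`, which is roughly the "couple of containers, automatically updated" setup described above.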
It was also a way for us, without having to impact our Red Hat subscriptions, to try out new releases of OpenShift before we updated any of our production systems; we ran OCP in production. And having the confidence that OKD is built from the same code base as OCP allowed us to do that experimentation.

Yeah, and I would echo that. Actually, at the University of Michigan we are doing OKD and OCP and also Fedora CoreOS, and I run Fedora CoreOS just as a single OS and do some development work on that. I would encourage folks to check out Fedora CoreOS. We've given it a little bit of discussion here, but it is basically an operating system for container usage and container development. And it's going to be making some headway, I think, even in research areas and .edu and whatnot, because there are a lot of instances where people need an operating system to do research or whatever, and have a lot of dependencies that work well with the container metaphor, right, that work well with a container environment. So I would encourage folks to check out the Fedora CoreOS website. There's also a working group there that's a really excellent working group; I participate a little bit, and Christian Glombek is another person who participates. So definitely check out Fedora CoreOS as well.

We use OKD for development.
We build software on it, we run lots of applications on it, so it's our workhorse for everything. And more and more applications from central IT, for internal customers or for external customers, are running on OKD currently. I don't know how many developers we already have on it; it must be more than a hundred now, and it's constantly growing. Each week I have onboarding calls. I can't talk about the applications, but it's a very interesting task, together with Azure and all the added services you have there, databases and so on, and it's an absolutely great experience. I also like that you have the same user experience on-premises and on other clouds; that's also a big added value for me. I'd say the stack is absolutely the same everywhere; only below, at the operating system level, are there differences: you have Azure, you have vSphere. But everything going up the stack is the same. That's great.

All right. Well, I just created another poll, because that's what I get to do, and I'm just curious how many of you who are here today have actually joined the OKD working group. By that I mean the Google group that gets you on the mailing list and gets you all the announcements; that's kind of Diane's theory of joining. And I'm just curious how many of you... is this really the first time you've participated in an event? By the meetings, I mean the actual bi-weekly cadence of OKD working group meetings. And if you haven't joined, why aren't you?
That's really the question. And how did you find out about this event, if you're not on the mailing list? That would be a curious thing for me too, as the person trying to corral all of you into participating and make sure you all have the information you need.

So I am thinking we'll take 15 minutes before the next session starts. Everybody who's got a session, and I can see Andrew and Shree and other people there: you can join your sessions five minutes before the session starts, so the moderators can jump in and people can join the sessions they want. We'll see how populated some of them are. If everybody's in the vSphere one, we will still motor through each of the other ones, because we will try to get those sessions recorded. Just like way back with the OKD marathon, the YouTube playback of these things was really key to getting more people onboarded. So I am going to leave that poll running, if everybody is okay with that, and see if anyone has any more questions before we take this bio break.

And, cool, I don't know if people know that a single-node cluster of OKD/OpenShift is coming. I think in the next, yeah, one or two weeks, or if you can count it, then it's a few. And I think it's also a game changer for OKD, especially for home labs, to try it out and then scale out to a full cluster, if that's possible.
I don't know. That reminds me of another important difference from OCP: we as a community decide when to release and what to release, basically. The feature Yosef has mentioned is bootstrap-in-place, meaning when you create a single-node cluster, you can reuse one single machine to be the bootstrap, and then it boots into a master. This feature is available in 4.8, and we already have OKD 4.8 nightlies, so you can try it out right now. But when it becomes stable is up to us to decide, because we might have entirely different requirements, like a huge focus on vSphere or UPI and so on. If we decide we want this feature sooner, before the corresponding OCP 4.8 release becomes GA, we can do that, absolutely, and the OCP folks will be super happy about it. If we want to delay it because of known issues, we can delay it as long as we want. But you can give it a try already now and report back to us; we would be very, very interested in that.

Yeah, what I'm going to show you folks who attend the single-node workshop today is the old-fashioned way to build a single-node cluster. I mean, in 4.8 we won't need the bootstrap node anymore, and that will be so... Selatin has one more question for Vadim in the chat. Joseph might have answered it, but: are some of the cluster operators still on Docker Hub?
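For the curious, a bootstrap-in-place single-node install in the 4.8 nightlies is driven by an `install-config.yaml` along these lines. Treat this as a rough, abridged sketch, not a tested config: the domain, cluster name, disk, and credential values are placeholders, and the exact field names should be checked against the installer documentation for the release you are using.

```yaml
apiVersion: v1
baseDomain: example.com          # placeholder domain
metadata:
  name: sno                      # hypothetical cluster name
compute:
- name: worker
  replicas: 0                    # no separate workers on single-node
controlPlane:
  name: master
  replicas: 1                    # a single control-plane node
platform:
  none: {}                       # bare-metal / UPI style install
bootstrapInPlace:
  installationDisk: /dev/sda     # disk the single machine reinstalls onto
pullSecret: '<your pull secret>'
sshKey: '<your ssh public key>'
```

The `replicas: 1` control plane plus the `bootstrapInPlace` stanza is what lets the one machine act as its own bootstrap and then boot into a master, as described above.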
He read someplace that after the Docker Hub rate limits, all OKD images were moved to quay.io. Partially correct. All the operator images are already on Quay, and we push our images there. There is, however, one component, the samples operator, that still fetches images from docker.io, because another Red Hat project, called Software Collections, publishes to Docker Hub as well. So what we'll do is connect with them and ask them to push to quay.io too, because OKD clusters are using those images and getting hit by rate limiting. Effectively, we'll be consuming just the quay.io images, or some other store where people are not rate limited, because that's a huge issue for our home labs, in fact, and it breaks our CI tests. So that's in progress. I don't think we have a tracking ticket for it, but it would be useful to have one.

Cool. And then Patrick is asking what the minimum specs are for a single-node cluster. Charro is going to be showing that in the single-node cluster session and will talk about it too, so if you want to add that...

Yeah, and so my goal for today, folks: I made the session slots very long, so if a session doesn't end up taking the whole slot, I'll be online for everybody, and we can stop the broadcast when the last session completes and things peter out, shall we say. So that's my goal. My other goal is for everyone who's listening, whether you're a participant or an attendee: if there's something missing in the docs... Jamie, or whomever, if somebody could throw the docs that are in Mike's thing into the chat again, then while we take this break before we get started again, take a look at that.
That's where I would love everybody... my fantasy island, this is where I live, on this fantasy island: if we could get a couple of folks who are not in the working group, or who haven't logged an issue or done a pull request against anything, to surface themselves today and look at that documentation. Even if it's a grammar mistake we've made, or, you know, an additional stub for another deployment target we haven't covered: if you could take the time to look at it while we take the break, and see if you could help us out with these documentation issues. We know we have them; we are not perfect. And we do tend to assume you're all psychic and know what we're talking about, so if we are not explicit enough or you need more detail, let us know. We will be taking these docs, cleaning them up, and moving them to a proper location in the official OKD repo once we've gotten them to a good point. So we'll still be referencing them, probably in a few blog posts on okd.io, for the next little while. But the goal is to get some of you to take a look, see what we're missing, make a pull request, put in an issue, fix our docs for us: help us help you. And we'd love to know more about what your use cases for OKD are, where you're deploying it, and what issues you're running into. Really, this is your team here. And yes, Joseph, thank you for the plug for the blog. We just got that added, so you'll see my one blog post about this event there. And if you have a deployment, or a tip or a trick, or something else you want to blog about, in that repo for okd.io there...
...there are instructions on how to add that. Also, it's not quite written up as instructions yet, but if you are using OKD in production, or at a .edu site, or in your home lab, and you would like to list your organization as a participant in OKD, we're going to be adding that to the notes. Maybe that's what I'll do this afternoon while you all are in sessions: add not just how to do a blog post, but how to add yourself to the little YAML file I'm creating there too. Because we would love to know who you are and where you're using it, and start growing the community.

And yes, one real quick question, a clarification for Vadim. I know there's been some chatter in the chat about multi-node versus single-node clusters. It is still possible to add workers to a single-node control plane, right? Yeah. So when we say going from single-node to multi-node, we're really talking about the control plane, which is what you need to grow if you want high availability; I would highly recommend your control plane being more than one node. But if you start with a single-node cluster and you want to make your home lab bigger, you can add worker nodes to it. And if we get bored and run out of time in the single-node workshop, we might actually try that. I haven't done it in a long, long time. So yeah, that's what we like to hear: doing stuff, blow it up.
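The add-a-worker flow described above can be outlined roughly as follows. This is a pseudocode-level command sketch, not a tested procedure: it assumes a live cluster reachable at the placeholder hostname `api.sno.example.com`, an `oc` session with cluster-admin rights, and a way to boot a fresh Fedora CoreOS machine.

```shell
# 1. Fetch the worker ignition config from the cluster's machine-config server
#    (newer releases may require an ignition Accept header on this request)
curl -k https://api.sno.example.com:22623/config/worker -o worker.ign

# 2. Boot a new Fedora CoreOS machine with worker.ign
#    (coreos-installer, PXE, or your hypervisor's ignition support)

# 3. As the node registers, approve its pending certificate signing requests
oc get csr
oc adm certificate approve <csr-name>   # typically twice: client, then serving CSR

# 4. Verify the worker joined the single-node control plane
oc get nodes
```

The control plane stays at one node throughout; only workers are added, which is why this grows capacity but not availability.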
Yeah. So with that, I'm going to let everybody who's on the main stage pause, and we will be back. I will hang out here for a little bit so people don't freak out and think we've dumped them, but I'm going to go grab a glass of water, and I hope you all take your bio breaks. We will be back in 15 minutes, and for session folks, five minutes before, you can join your session. Jamie and I will be in the back trying to help the session moderators make sure they're set up correctly. And Vadim and Charro, I think I've empowered you to join, maybe not Charro, but Vadim for sure, and Jamie and I can pop back and forth into the other sessions and help out as needed. If you're really stuck and you've lost yourselves, go over to the reception area in Hopin, and I will try to keep an eye on that. So good luck, have a bio break, and we will be back in 15 minutes. Thanks, guys.