So I'm going to let Joseph talk about some DNS and DHCP issues and best practices. So, Joseph, you want to try unmuting yourself and sharing your screen?

Yep. Hello, I'm Joseph Meyer, from Rohde & Schwarz. It's a German, Munich-based company. I've been working with OKD for more than three years now. I started with OKD 3.10 and moved over OKD 3.11 to OKD 4, which is where we are currently. OKD helped us a lot in getting in touch with Kubernetes and gaining the skills for it, because vanilla Kubernetes is not an easy thing; we used OKD to learn all that, and now we're at a stage where we're moving parts of our Kubernetes clusters to OpenShift, to run more production loads and get support from Red Hat.

Now I'll try to show you a little bit about what I thought were the hardest steps in the beginning when you start with user-provisioned infrastructure: DNS, DHCP, and an external load balancer. Can you see my screen share? I hope so. Yep. Thank you.

This is a diagram of my home lab, which I'm running here at home. I'm using VMware vSphere. I bought a license that's very suitable for home lab users: the VMware User Group (VMUG) Advantage membership. You pay 150 euros for a one-year license, which I think is pretty affordable for a home lab. Why do I use vSphere at home? Because I like to have an environment in my home lab that's similar to the one I use at my company. That's why I'm using VMware vSphere.

It's running on a Ryzen PC, a very capable one: it has 16 physical cores, 32 with multithreading enabled. You don't need that many cores, so don't be frightened by that, but I like to have the option to add more workers and to play with new things. One thing I tried at home is OpenShift Virtualization, which is based on KubeVirt. That requires a bit more horsepower than you normally have on your desktop or laptop. So we have one PC for VMware vSphere.
I have another computer running network-attached storage; I'm using TrueNAS CORE, the community edition. This year TrueNAS SCALE will go into general availability, and I like that, because it will have a small Kubernetes cluster running on it where you can deploy Helm charts. I like to have some components outside of my OKD cluster, because I'm constantly deleting and creating clusters to test new things, so I want the possibility to keep some components outside of the cluster.

At the top of the image you see my DNS/DHCP server and also the load balancer. These components are running on a Raspberry Pi. Maybe you ask yourself why I'm not running them on a helper VM in my vSphere environment. I do it this way because I'm also constantly tearing down my vSphere environment to test things, and I'm using the DNS and DHCP server for my whole home environment, not only for my home lab. I need something that's running all the time, and the Raspberry Pi is pretty fine for that. Finally, I have a DSL modem router that's connected to the internet.

The first thing you should know if you want to set up a DNS and DHCP server at home: make sure no other instance of these servers is running in your subnet. In my case I had to turn off the DHCP server and the DNS server running in my DSL router. I say "in between" because during the installation you for sure need internet access, and you need a DHCP and DNS server if things go wrong; but once the custom-built servers are running, you don't need the ones in the Fritz!Box anymore.

What do you have to achieve here? On the VMware vSphere server, a few VMs are created during the installation process. Just for your information, I use the instructions from the GitHub repository of OKD, located at github.com/openshift/okd. There are some guides there, including one for UPI on vSphere. One of these guides uses Terraform.
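To give an idea of what that Terraform approach looks like, here is a hypothetical sketch of a single node VM using the Terraform vSphere provider. Every name, resource reference, size, and the guestinfo ignition handoff below is illustrative; the actual configs in the OKD repository are more involved.

```hcl
# Hypothetical sketch of one OKD node VM via the Terraform vSphere provider.
# Data sources (pool, datastore, network) and the ignition file path are
# placeholders, not values from the talk.
resource "vsphere_virtual_machine" "bootstrap" {
  name             = "okd-bootstrap"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.ds.id
  num_cpus         = 4
  memory           = 16384
  guest_id         = "coreos64Guest"

  network_interface {
    network_id = data.vsphere_network.lan.id
  }

  disk {
    label = "disk0"
    size  = 120
  }

  # Fedora CoreOS picks up its ignition config from guestinfo properties.
  extra_config = {
    "guestinfo.ignition.config.data"          = base64encode(file("bootstrap.ign"))
    "guestinfo.ignition.config.data.encoding" = "base64"
  }
}
```

The MAC address that the DHCP server later matches on can also be fixed from Terraform, which is what makes the static-IP scheme described below work.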
Terraform is a tool that takes care of infrastructure; it uses a domain-specific language for that, and there is a Terraform provider available for vSphere. I have seven VMs: one bootstrap VM, three masters, and three workers. You don't normally need that many VMs, but this is my standard setup. I don't want to deal with limited CPU and memory; I want to go and have fun. That's why it's seven VMs.

The first step of the installation is that the bootstrap VM starts creating a fake control plane. This takes only a few minutes, I think; it depends on how fast your internet connection is. If you have a local registry in your home lab, things can be faster because of the improved network speeds you normally have there.

The second step: in addition to the bootstrap node, you have your master nodes. The master nodes constantly go through the load balancer running on the Raspberry Pi to get their ignition configuration files from the bootstrap node. When the bootstrap node reaches a later stage of its installation, it provides this ignition file via a local web server to all the masters that keep asking for it. Once they get the ignition files from the bootstrap VM, they provision themselves. They reboot at least once into a new version of the operating system, a new Fedora CoreOS version: initially you start from a VM template stored in vSphere, and from that OS version the VMs run and wait for the ignition file. They get it, fetch the OS version that's pinned for a certain OKD release, boot into that new OS version, and join the fake control plane running on the bootstrap node.
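Those ignition files don't appear out of nowhere: they are generated up front with `openshift-install create ignition-configs` from an install-config.yaml. A minimal, hypothetical sketch for a UPI install might look like this; all values are placeholders, and depending on the guide a vsphere platform stanza may be used instead of `none`:

```yaml
# Illustrative install-config.yaml for a UPI install; every value here
# (domain, cluster name, network type, secrets) is a placeholder.
apiVersion: v1
baseDomain: homelab.net
metadata:
  name: c1
compute:
- name: worker
  replicas: 0          # UPI workers join later via their ignition config
controlPlane:
  name: master
  replicas: 3
networking:
  networkType: OVNKubernetes   # one possible choice of network plugin
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: '<redacted>'
sshKey: '<your public ssh key>'
```

The resulting bootstrap.ign, master.ign, and worker.ign files are what the nodes fetch during the boot sequence described above.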
When the control plane is running, the bootstrap node stops serving the ignition file in the next step. The load balancer will see that and turn off the bootstrap backend; in this phase you could, in theory, delete your bootstrap VM, because you don't need it anymore.

The worker VMs are running the whole time, too, also trying to fetch an ignition file for the workers, by this point from the control plane running on our masters. They keep asking for it, and once the control plane is set up, a web server again serves the ignition file for the workers. The workers fetch their ignition files, load the current version of Fedora CoreOS, boot into it, and finish the installation. Afterwards you have a running OKD cluster.

To achieve that, you have to set up the load balancer and the DNS/DHCP server a little bit in advance. I created some documentation about this process. Don't get frightened, it's a lot of text; I'll only sweep over it quickly. I used lots of standard documentation you can find on the internet; there's nothing special about it.

Because I'm using the DHCP and DNS server not only for the home lab but for my whole home environment, I turned on dynamic DNS, so new devices automatically register themselves with the DNS server and I don't have to maintain a list there manually. Maybe you've seen in the description in my presentation that I have a /24-based subnet here. I have the IP of the router, which we'll see a few times, and the IP address of the Raspberry Pi running the DHCP server, DNS server, and load balancer. I have two domains: my homelab.net domain, where everything in my home is registered, and a subdomain c1 (c1 means cluster one), c1.homelab.net, where all my Kubernetes nodes live. I have a DHCP range, and I use static IPs for the most important nodes. Because I try lots of installation strategies, I like to have fixed IPs for the most important VMs, and for them I use static IPs. If I later create dynamic nodes through MachineSets, I can use DHCP for those, so it's a mixed scenario.

First you do the usual things. I use Raspberry Pi OS on my Raspberry Pi: I update the package list and give the Pi a static IP. Then I install the ISC DHCP server; that's what I use as the DHCP server. I set the interface it listens on and do the basic configuration in this file. The first section is for dynamic DNS; nothing special about it. This section is served by the DHCP server every time a new node requests an IP address, and your /etc/resolv.conf file is filled from parts of these options. Here we have the definition of our DHCP range, and here are the static IP sections, where I use the MAC address that Terraform configures in vSphere to serve fixed IP addresses to the VMs that ask for them. I do that for the bootstrap, master, and worker nodes. That's it for the DHCP server.

The next thing is setting up the DNS server. More files are involved here, because I use BIND for that. I started with dnsmasq, but I wasn't convinced by its features, so I threw it out very quickly and used BIND instead. You'll find lots of information about it on the internet, and there's nothing special about the configuration I use. We have an access control list where I allow every IP from my subnet. On the home lab DNS server I've also configured a forwarder: if a domain name is not known to the DNS server, it forwards the request to, I think, Google's DNS servers on the internet. Here I turn off a few security switches, because I had problems with them and didn't have the energy to find out how to make it really secure; I'll improve that next time, it's a side task I gave myself. Here I define my zones: I have a homelab.net zone, I have the zone where OKD runs its VMs, where I configure the records as described in the official documentation, and I have a reverse zone set up because it's best practice. You don't really need it for a home lab, but I wanted to try out how to set it up.

Here we have the zone file for my home lab; in real life there are lots of entries, because things other than OKD also use this DNS server. And this is the setup for my reverse lookup, the reverse zone file. Now the interesting part: here we have the zone file for c1 (cluster 1), c1.homelab.net, with lots of records. These are the records required by OKD to work. We have a wildcard here, and everything under it goes to the load balancer. This is the internal API, and here we have an external API record. We have the master and bootstrap node records, and the load balancer node. That's pretty much it.

The next section is the load balancer; it's the third and last component I've set up on my Raspberry Pi. HAProxy is the load balancer. I like it a lot because it's rather easy to set up, and it's fast. Don't get confused by this section, it's pretty much the default; you get a dashboard where you can see which backend nodes are available and responding. Here we have the load balancer for the API, and here a load balancer for the ignition configuration file server. Because, maybe you remember, the first step is that the bootstrap node serves the ignition files; afterwards the masters serve them too, and once they do, the bootstrap node stops serving them. That switchover is handled by the load balancer. Then we have the routes: port 80 for the HTTP load balancer, and the same for HTTPS. I've added all my nodes to the load balancer, because I like to move the OpenShift/OKD router around between the nodes to test things; that's why I have not only the workers in the list but also the masters.

If you reboot your node, you should check whether all system services are still running or whether errors are thrown; you can use /var/log/syslog to troubleshoot if something goes wrong. But my experience is that if you follow this guide, it's not as much work as it looks like, and normally it should work rather fast. In the end you have external components, load balancer, DNS server, DHCP server, in a setup I think is rather common in lots of companies. That's why I use this instead of the easier-to-set-up IPI installation method: because I want to try out things that I also have available at my company. And that's pretty much everything I can tell you about it. Thank you.
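For reference, the DHCP side of the setup described in the talk (dynamic DNS options, a DHCP range, and fixed addresses keyed on the MAC addresses Terraform sets in vSphere) can be sketched as an isc-dhcp-server configuration roughly like this. The subnet, IP addresses, and MAC addresses are placeholders, not the actual values from the talk.

```
# /etc/dhcp/dhcpd.conf -- illustrative sketch; all addresses are placeholders.
authoritative;

# Dynamic DNS, so new hosts register themselves in the homelab zone.
ddns-update-style standard;
ddns-domainname "homelab.net.";

option domain-name "homelab.net";
option domain-name-servers 192.168.1.2;   # the Raspberry Pi

subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.199;      # dynamic pool
  option routers 192.168.1.1;             # the DSL router
}

# Fixed addresses for the cluster VMs, matched on the MAC address
# configured by Terraform in vSphere.
host bootstrap { hardware ethernet 00:50:56:01:00:10; fixed-address 192.168.1.10; }
host master-0  { hardware ethernet 00:50:56:01:00:11; fixed-address 192.168.1.11; }
host master-1  { hardware ethernet 00:50:56:01:00:12; fixed-address 192.168.1.12; }
host master-2  { hardware ethernet 00:50:56:01:00:13; fixed-address 192.168.1.13; }
host worker-0  { hardware ethernet 00:50:56:01:00:21; fixed-address 192.168.1.21; }
```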
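The c1.homelab.net zone described in the talk, with the records OKD requires (external and internal API, the wildcard for application routes, and the node records), might look roughly like this in BIND. All IP addresses and the serial are placeholders.

```
; Illustrative BIND zone file for c1.homelab.net; IPs are placeholders.
$TTL 300
@       IN SOA ns1.homelab.net. hostmaster.homelab.net. (
            2021010101 ; serial
            3600       ; refresh
            900        ; retry
            604800     ; expire
            300 )      ; negative-caching TTL
        IN NS   ns1.homelab.net.

; External and internal API, both resolving to the load balancer.
api      IN A   192.168.1.2
api-int  IN A   192.168.1.2

; Wildcard for application routes, also via the load balancer.
*.apps   IN A   192.168.1.2

; Cluster nodes with their fixed addresses.
bootstrap IN A  192.168.1.10
master-0  IN A  192.168.1.11
master-1  IN A  192.168.1.12
master-2  IN A  192.168.1.13
worker-0  IN A  192.168.1.21
```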
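And the HAProxy part, with backends for the Kubernetes API (6443), the ignition/machine-config server (22623, served first by the bootstrap node and later by the masters), and the HTTP/HTTPS routes, can be sketched like this. Hostnames and IPs are again placeholders, and a real config would also carry the stats dashboard and global/tuning sections mentioned in the talk.

```
# Illustrative haproxy.cfg fragment for an OKD UPI setup; all placeholders.
defaults
    mode    tcp
    option  tcplog
    timeout connect 5s
    timeout client  1m
    timeout server  1m

# Kubernetes API: bootstrap first, then the masters take over.
frontend api
    bind *:6443
    default_backend api
backend api
    balance roundrobin
    server bootstrap 192.168.1.10:6443 check
    server master-0  192.168.1.11:6443 check
    server master-1  192.168.1.12:6443 check
    server master-2  192.168.1.13:6443 check

# Ignition / machine config server; health checks handle the switchover
# away from the bootstrap node automatically.
frontend machine-config
    bind *:22623
    default_backend machine-config
backend machine-config
    balance roundrobin
    server bootstrap 192.168.1.10:22623 check
    server master-0  192.168.1.11:22623 check
    server master-1  192.168.1.12:22623 check
    server master-2  192.168.1.13:22623 check

# Ingress routes; masters included so the router can move between nodes.
frontend ingress-http
    bind *:80
    default_backend ingress-http
backend ingress-http
    balance roundrobin
    server worker-0 192.168.1.21:80 check
    server master-0 192.168.1.11:80 check

frontend ingress-https
    bind *:443
    default_backend ingress-https
backend ingress-https
    balance roundrobin
    server worker-0 192.168.1.21:443 check
    server master-0 192.168.1.11:443 check
```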