All right, so let's start. Welcome, everyone. Welcome to Cloud Native Essentials, a 101 tutorial to start your cloud native journey. First off, a little bit about us. I'll start by introducing myself. My name is Eamon Bauman. I am a Solutions Architect at Red Hat, formerly worked with Rey at SUSE, and was an architect at Rancher for a period of time. So excited to be up here and help deliver this talk. And I'm Rey Lejano. I am a cloud native solutions architect at SUSE by way of Rancher Labs. I'm also a CNCF ambassador, and I was a Kubernetes release lead. And I'm a co-chair for Kubernetes SIG Docs and a subproject lead for Kubernetes SIG Security. So I do a few things in the community.

So when I first started, I had a hard time figuring out where to start in my cloud native journey. I'm sure everyone here has seen the Cloud Native Landscape. And it is mind-boggling. It is hard to determine where a good place to start is. Even within specific sections like container runtimes, there are a lot of projects for container runtimes. How do you know which one to choose? Same thing with scheduling and orchestration. Do we use Crossplane? Do we use Kubernetes? Do we use kind? There are so many options on what to choose. So where do we start? Where do we go from one project to another project?

I took inspiration from a sextant. I had one in my garage growing up; my dad was a merchant marine. A sextant takes two points of reference to find latitude. We took inspiration from that, and we are taking two points of reference to determine our cloud native journey. The first one is graduated projects within the CNCF. There's a filter in the Cloud Native Landscape to find graduated projects. But it is still hard; there are still multiple choices. So where do we go from there? There are 24 CNCF graduated projects, the first one being Kubernetes. The ones highlighted here are what we are going to get hands-on with in the hands-on portion of this tutorial. So we will talk for about half an hour or so, then we'll get hands-on. The first graduated project was Kubernetes, and the latest one is Cilium. These are the projects that we'll go through, and we will briefly talk about them in the next few slides here.

The second point of reference we need to determine our cloud native journey is the Cloud Native Trail Map. This was published in 2018 by the CNCF, to help guide organizations on their own cloud native journey. But it's at the enterprise or organization level; it's still difficult to apply on a personal level if you're learning about cloud native. We won't go through every single step, and we won't go through the exact order of the Cloud Native Trail Map, but we will go through most of it, starting off with containerization.

With containerization, of course, Docker popularized containers. Docker also donated containerd to the CNCF as well. But before we talk about containerization, I do want to bring up the Open Container Initiative, which came about around 2015. The Open Container Initiative, or the OCI, standardized a lot of things with containers: the container runtime spec, the container image spec, and lastly, in 2020, the distribution spec. That means if you have an OCI-compliant container image, it can run on any OCI-compliant container runtime.
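To make that concrete, here's a quick sketch of what OCI portability means in practice. The registry name is just a placeholder, and it assumes Podman and nerdctl happen to be installed alongside Docker:

```bash
# build an image with Docker; the result is an OCI-compliant image
docker build -t registry.example.com/demo/simple-app:0.1 .
docker push registry.example.com/demo/simple-app:0.1

# the same image runs under any OCI-compliant runtime, e.g. Podman...
podman run --rm -p 8080:8080 registry.example.com/demo/simple-app:0.1

# ...or under containerd, via the nerdctl CLI
nerdctl run --rm -p 8080:8080 registry.example.com/demo/simple-app:0.1
```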
So if you have created a container image with Podman or with Docker, you can run it on containerd, or you can run it on Podman or CRI-O. And thank you to Docker for donating many things to the Open Container Initiative, like its container format and its runtime. So why do we containerize? Because it solves lots of problems, and this is just a few of them. One is "it works on my machine, but it doesn't work on the server." It also helps with utilization of the server. So with containerization, we take the application source code on the left here, and we take the application dependencies on the right, and we build a layered container image, starting off with the base layer. This container image is read-only, and since it's read-only, it is immutable. So whenever we execute a container, it gives us that same execution state every time.

Next, we're going to go to container registry and runtime. Starting off with the runtime: whenever we run a container, we use a container runtime like containerd. Under the covers of Docker is actually containerd and runc. The runtime takes this immutable container image, which is a stack of read-only layers, and adds a read-write layer on top of it. Once we have our container image, we need a place to host container images. We do that with a registry. So we take the container image that we built on our local machine, or from a pipeline as well, and we push that container image to a container registry. From that container registry, we can then pull our container images and have running instances of those containers. For the container registry, Harbor is a CNCF graduated project that we will use during the hands-on portion of this tutorial.

So the next stop on our cloud native journey, after container registry and container runtime, is, of course, orchestration. With container orchestration, we need an orchestrator, because once you're running multiple containers, things get very complicated. If you're running very few containers, it's easy to know where they're running, what they're running, and what ports they expose. But we have complexities when running multiple containers across lots of different machines. So how do we solve this? How do we solve automation, orchestration, security, networking? We do that with the Kubernetes project, which of course is the first CNCF graduated project. Under the covers of Kubernetes, on each worker node in the cluster, is a node agent called the kubelet. The kubelet gets the container runtime running on that worker node to start containers on that worker node.

Next after orchestration on our cloud native journey, we're gonna go into networking, because the Kubernetes network model is not implemented until you have a networking plugin. The Kubernetes network model says that containers placed in a pod can talk to each other; they actually share the same network namespace. The pod itself gets a virtual IP address as well. And a pod on any worker node can communicate with any other pod on any other worker node. Services can route to pods as well; a service is a way to abstract a group of pods, giving it a DNS identity and a virtual IP. Some networking plugins also support network policies, so you can actually control ingress and egress, the traffic into and out of the pods.
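As a concrete example of that last point, here is a minimal NetworkPolicy sketch. The names and namespace are hypothetical, and it only takes effect if your networking plugin actually enforces policies:

```bash
# only pods labeled app=frontend may reach pods labeled app=nginx on port 80
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 80
EOF
```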
So for networking, we're going to take a look at Cilium and install Cilium. Cilium is the latest CNCF graduated project; that was just, I think, a month or two ago. Cilium has enhanced security functionality and observability features as well. After networking, once you have a CNI plugin installed in Kubernetes, that's actually when your cluster reaches a ready state.

So next we'll take a look at application definition. We're not taking a look at Kubernetes again; we're taking a look at Helm in this case. Applications are not as simple as running a single container. Applications have other requirements. You might need to scale your application, so you might create a resource like a deployment, which sets the number of identical replicas of a pod. You might have to expose your application through a service and an ingress, so you get a URL for your application. And if your application is, let's say, stateful and persistent, you might use a StatefulSet, which gives ordered, sticky identities to the pods, and bind the containers in the pod to persistent volumes within the cluster. So there's a lot of complexity with just deploying an application. The Helm project, which is also a CNCF graduated project, gives us the ability to group all these dependencies into what's called a Helm chart. So we'll use Helm in our tutorial as well, to deploy monitoring and other applications to our Kubernetes cluster, all using Helm charts.

Next, after application definition, we'll look at some day-two tools for our Kubernetes cluster, starting with observability. There are three pillars of observability: metrics, logs, and traces. Some people have added events as well, so some people refer to this as MELT: metrics, events, logs, and traces. But we'll first take a look at metrics. During our hands-on portion, we will deploy Prometheus. Prometheus is a monitoring toolkit that stores metrics as time-series data, and we'll run a PromQL query or two as well; Prometheus uses its own query language, PromQL, to actually extract that data.

So next, after metrics, we'll look into observability with logs. Logs from containers typically go to standard out or standard error. In Kubernetes, you typically get your logs by running kubectl logs with the namespace and the pod name. But there are some complexities with that as well: say your application is running in multiple pods, how do you get all those logs? You can use labels as well: you can run kubectl logs in the namespace with -l to filter pods with the label app=nginx, in this case. But the Kubernetes documentation actually recommends a logging backend, and a logging agent to send logs to that backend. So we're actually going to take a look at and use Fluentd. Fluentd is a CNCF graduated project as well, and it's used for unified data and log collection.

So lastly, with our cloud native tutorial here, we're gonna look at policy. And for policy, we're going to look at Open Policy Agent. But first off, the reason why we need policy is that you can't just have any kind of resource deployed in your cluster. You can't have cluster roles that can do everything.
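A quick sketch of those log commands, using the namespace and label we'll see later in the lab:

```bash
# logs from one pod, by name
kubectl logs -n my-namespace nginx

# logs from every pod matching a label selector
kubectl logs -n my-namespace -l app=nginx

# follow the logs as they arrive
kubectl logs -n my-namespace -l app=nginx -f
```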
You can't have pods in the default namespace, or pods with containers that allow privilege escalation, or pods with no labels. So how do we enforce policies and guardrails in our Kubernetes cluster? Otherwise, you'll have the Wild West in your Kubernetes cluster, and we don't want that. We want certain policies that we enforce in our Kubernetes cluster. So we'll use Open Policy Agent, specifically Gatekeeper, in our tutorial. OPA works as an admission controller, so you can validate or mutate requests. When a request to create a resource comes in, before it is actually persisted in the cluster, Open Policy Agent will intercept it, validate it against the policies that you have installed, and check to see if it meets those policies. If it does, then it will create that object.

So now it's time for the hands-on portion of this tutorial. It's all browser-based. HobbyFarm is an open source project that was created by Eamon. We are going to take a few minutes first to log in, at lab.cloud-native-essentials.com. I do want to make a note of a few things. The slides and the lab are uploaded on GitHub under the organization cloud-native-essentials, repo KubeCon 2023. If there are any issues with the lab, we do have a YouTube playlist of the lab itself as well, and we have the steps, like I mentioned, in the GitHub repo.

So I'm going to go into HobbyFarm, and we're going to walk you through it. If you have a computer, hopefully you'll go to lab.cloud-native-essentials.com. You want to register with your email and a password, and the access code is KubeCon23. So I'll keep this up for a little bit. Once you create your credentials, it's going to have you refresh and log in. So I'm going to log in here, and you should see a scenario here. If you don't, just hit refresh or check the access code: go to the upper right, click on the drop-down menu, and go to manage access codes. You can add that access code there, or make sure there are no typos. I'm going to give it a few minutes here.

For those who are ready at Start Scenario: what this scenario is going to do is spin up two virtual machines per person on AWS. One virtual machine will be for Harbor and Docker. The second virtual machine will be for containerd, Kubernetes, and all the other tooling, all the other CNCF graduated projects like Cilium, Prometheus, and OPA. Okay. So I'm going to click on Start Scenario. We're going to wait a few minutes here for our instances to come up, and we'll wait for the green check mark once the status is up and running. So it might take a few minutes, so bear with us. If it's spinning for you, just refresh the page and hit continue here. It takes a few minutes while the instances are coming up. Yeah. We had provisioned what we thought was enough capacity for this, but I think we're getting the hug of death; I think there are quite a few more people in the room than we had originally expected. So we appreciate your time and patience with that.

While we're waiting for those VMs to come up and get provisioned for everybody, a little background on what we're doing here. This is the HobbyFarm platform, originally built at SUSE; it's now a fully open source project, and has been for a while. And the idea behind it is that it's a cloud native learning platform.
So you can spin up virtual machines in a provider of your choice: AWS, and we've got providers for DigitalOcean, Hetzner, things like that. And inside HobbyFarm, we can create scenarios, courses, multi-day training events. If you're familiar with something like Katacoda, it's a similar concept: you've got content on the left, you can walk through the steps, you can execute commands in your virtual machines, and learn through all this content. So we've been working on that for a while. It's out on GitHub, if you want to check it out: github.com/hobbyfarm. We've got a few of the developers in the room, so great to have them here. This will just take a second. As you see, Rey's virtual machines are populating here. So we are kind of getting the hug of death, but once everybody's up and running, we'll be able to continue.

Now let's go over a high-level overview of the steps. Once we get into HobbyFarm, we can click right through it. In the first part, you'll get two virtual machines: one is called Harbor, one is called Kubernetes. We'll start off in the Harbor instance, where you will install Docker. Well, there are several ways to install all of these CNCF projects; for example, you can install Harbor on top of Kubernetes. In this case, we're actually going to use the Harbor installer. One of its requirements is Docker, since it uses Docker Compose, so we'll install Docker. We'll go through the basics of creating a container: we'll create a simple Golang container and make sure it runs. Then we'll install Harbor itself, after installing Docker Compose. With Harbor up, we'll push our newly created container image, and then pull it as well, to make sure we can pull from that registry.

The next few steps are on the Kubernetes virtual machine, where we will install containerd. We're going to use containerd as the container runtime that Kubernetes will use to run containers. With containerd, there are several prereqs, so we will load several kernel modules like overlay and br_netfilter. We'll also install runc along with containerd, too. Then once we get containerd up and running, we will install the Kubernetes prereqs as well. We won't use packages to install Kubernetes; we're going to install Kubernetes with kubeadm. So we will download kubeadm, kubectl, and the kubelet, and download the systemd unit files for Kubernetes as well. Once we get Kubernetes up and running, it's just a single-node cluster, and we'll check to see if the cluster is ready. There are more requirements for a Kubernetes cluster: if you've ever used kubeadm, you know it needs a CNI plugin installed to implement the Kubernetes network model. So we'll install Cilium as well; we'll install the Cilium CLI tool and use it to install Cilium. Once we have the cluster fully ready, we'll go through the Kubernetes basics of a pod, deployment, service, et cetera. Then we will go through installing the day-two tools to manage the cluster: we'll go through monitoring and install Prometheus using a Helm chart, so we'll install Helm. Then we will install OPA, OPA Gatekeeper, for that policy enforcement as well.

We have a backup of YouTube videos, just in case. Some people have it up and running. I'll try it. Okay, perfect. So we have it running. Thank you. Oh, I think conference Wi-Fi is also hurting us here.
I see it come up and down, but let me try Chrome. So if you have it up and running, feel free to continue as well. Since some folks are having trouble, I'm gonna start by just playing the first video and see if we can. So this is just about logging into the HobbyFarm scenario. So once you have your instances up — I'm glad some people here have their instances up and running. Let me see if mine have come up yet. Okay, the other one is up too. Okay, yeah, let's go. Thank you. We have VMs up.

So on the right here, hopefully you have your virtual machines up and running: you have a tab for your Harbor instance and a tab for your Kubernetes virtual machine. Notice you have a public IP, a private IP, and a hostname as well. The first several steps you're going to do are using the Harbor instance here. So let's make sure I can run commands. Let's go. Slow, slow. Most of this is going to be click-to-run. Like I mentioned, you do have the steps saved on GitHub, so you can also run these steps on your own machines as well.

In the first step here — you didn't install Docker yet, right? Yeah. Okay. Okay, so just to walk folks through, if you haven't done this yet: we're going to install Docker. We do a wget to fetch the Docker installer and run through it. Then, once the Docker installer is finished, you add your user to the docker group, so you can run Docker commands without doing sudo docker. Next, you activate the changes to the docker group with newgrp docker. And then a simple test is just docker version; otherwise it would error out and you'd have to do sudo docker version.

In the next step here, we'll create a very simple Golang application. To do that, we need to install Golang, so we'll install Golang. This will take a few minutes here. Throughout these steps, we'll do all the SHA-256 checks as well. That's just best practice to do, so we added those steps here. All right. That looks good. Then we'll extract Golang to /usr/local. The next step here is to add Go's path to your profile as well, so we'll click on this sed. Then we'll actually get into our application setup. We'll make a directory called simple-app. And this is just very, very simple — I actually wanted to make it even more simple — this runs a web server and uses the os package to tell you the hostname where this application runs. So let's test it using go run. This will take a minute. Once you see "server is starting," you can see the hostname is the hostname of this virtual machine. There is a trick here: you have to do Ctrl-C to cancel. All right, looks like it. Did that already?

Next, we will containerize this simple Golang application. We're gonna use a Dockerfile, and a Dockerfile is a text file describing how to build your container image. A Dockerfile always starts with a FROM statement. Here we're gonna use the base image golang:1.21, and we're gonna use the Alpine variant of it, so it's a little bit smaller: the regular golang base image is 800-something megabytes, and the Alpine version is about 200. With this Dockerfile, we're gonna set a LABEL, some metadata for this container image: project equals cloud-native-essentials. We set the working directory to /app, copy some files over, run go mod and go build to create our application, expose port 8080, and then start the container with the command to run our simple-app binary.
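Put together, the Dockerfile we just described looks roughly like this. It's a sketch: the exact file names and binary name in the lab may differ.

```bash
cat <<'EOF' > Dockerfile
# base image: the smaller Alpine variant of Go 1.21
FROM golang:1.21-alpine
# metadata label for this container image
LABEL project="cloud-native-essentials"
# working directory inside the image
WORKDIR /app
# copy the module file and source over
COPY go.mod main.go ./
# resolve dependencies and build the simple-app binary
RUN go mod download && go build -o simple-app .
# the web server listens on 8080
EXPOSE 8080
# start the application when the container runs
CMD ["/app/simple-app"]
EOF
```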
So we're gonna use docker build to create our image. docker build takes your Dockerfile, goes through it line by line, and creates your container image. Let's take a look to see if our image is available locally. We do that with docker image ls, and we see that simple-app image here with the tag 0.1 and its ID, and it was created about seven minutes ago. We're gonna test running a container from this container image. So we'll do a docker run and give it the name simple-app; otherwise the name would be a short hash, or rather Docker generates its own random name — the names from Docker are usually pretty funny. We expose port 8080 on the host and direct traffic to 8080 on the container. We'll run it detached as well, so we are not attached to that container, and once we stop it, it will remove that container — that's what that --rm option is — and we are using the simple-app container image. So we test it here as well, and see our container is up and running. So let's stop this container with docker stop simple-app.

Next, we are going to install Harbor, and this version, or this way of installing Harbor, uses Docker Compose. Like I said before, there are multiple ways to install any of these CNCF graduated projects; for this way of installing Harbor, we're going to use Docker Compose. So let's install Docker Compose. Before we actually proceed: our setup of Harbor is not going to use HTTPS, because there are many steps through OpenSSL to use HTTPS when you install Harbor. Now, in order to use Harbor as a container registry with HTTP only — Docker actually requires that every container registry it pulls from and pushes to be secure — we have to create a file under /etc/docker called daemon.json and add an insecure-registries entry to that file. So we are going to click on this sudo tee to create that /etc/docker/daemon.json, and we're going to add an entry for insecure-registries with our Harbor instance here. And notice that it should be harbor.<your public IP>.sslip.io. Once that's done, we have to reload and restart Docker. So let's do that with sudo systemctl daemon-reload and sudo systemctl restart docker.

So next we can actually install Harbor. Once we have Docker and Docker Compose up and running, and Docker set up to use HTTP only, we can install Harbor. We will do a wget for the Harbor archive on GitHub. Once that's done, we will download the .asc file to check that the archive is correct, and we'll obtain the public key for that .asc file. Then we will use the gpg command to verify our archive, and we should see a good signature from Harbor-sign here. So our archive is good, and now we can extract the installer. So let's extract the installer. Before we actually start up Harbor, there's a configuration file called harbor.yml, and Harbor gives us a template for it. We need to copy it over and create our own version of the harbor.yml configuration file. There are a few things we need to change in this harbor.yml as well. We need to change the default hostname to where our Harbor instance is gonna be, so we're using sed to change reg.mydomain.com into harbor.<your public IP>.sslip.io. And we also comment out the HTTPS portion with the last sed command here. So next we'll actually use install.sh to install Harbor and start it up with Docker Compose.
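Condensed, that harbor.yml preparation looks roughly like this. It's a sketch: PUBLIC_IP is an assumed environment variable holding your VM's public IP, and the sed range for the HTTPS block assumes the default template layout.

```bash
cd ~/harbor
cp harbor.yml.tmpl harbor.yml

# point Harbor's hostname at this VM's public IP via sslip.io
sed -i "s/reg.mydomain.com/harbor.${PUBLIC_IP}.sslip.io/" harbor.yml

# comment out the https section so Harbor serves plain HTTP
sed -i '/^https:/,/private_key:/ s/^/#/' harbor.yml

# install and start Harbor (uses Docker Compose under the hood)
sudo ./install.sh
```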
This will take a few minutes to pull the images. So now we're pulling the various Harbor images required to run Harbor, and we'll wait for that to complete here. It takes a few minutes; we'll do a docker ps just to check that we have the Harbor containers running. Next is to actually log into Harbor from the command line, by using docker login against the container registry. So we'll do docker login with our Harbor URL, which is harbor.<public IP>.sslip.io, and we're gonna log in with the default credentials, admin and Harbor12345. So: login succeeded, which is great. That means we can access our Harbor instance from this machine. It runs on port 80; you access it through port 80 for this instance.

Let me skip a step here. Yeah, so we will log into Harbor. So let's log into the UI of Harbor: there's a link, and you log into the UI of Harbor with the default credentials. We are going to create a project for our container image, so just click on New Project and enter cloud-native-essentials. We are going to change the access level to public. We don't strictly need to — if we didn't, we would just have to do a docker login from any machine that accesses this repository. So once we have our project created in Harbor, we can now push our container image into that Harbor instance and into that repository. But before we do that, we have to change the tag of our container image to also reflect the Harbor instance and the repository. So we'll do a docker tag: we take the container image that we created, simple-app with tag 0.1, and we give it the tag of our Harbor instance and the cloud-native-essentials repo. So we now do a docker push to push our container image into the Harbor instance.

All right, so check in your Harbor UI to see if that image is available. We go back to Harbor, go to cloud-native-essentials, and we see our simple-app container image there. Next, let's pull this container image from that Harbor instance, but first we want to remove the existing local container images for simple-app. So we'll do a docker image rm for both container images that we currently have. Then we'll test with a docker run, and this time we'll use the image from the Harbor instance and the cloud-native-essentials repository. So we do a docker run, give it the same name, simple-app, map port 8080 to 8080, and run it detached with --rm as well. In my case, I think the Go application is actually still running, so it may not start, but we still have the application running here. Then do a docker stop simple-app if your container was up and running, to stop that container.

All right, now that we've done the container basics — we've been able to build a container image, push it up to Harbor, and pull it back down — we'll get into some of the Kubernetes requirements. First, as you see here: cgroups v2, iptables, socat, and conntrack. We'll install those, then we'll walk through and get Kubernetes installed. To begin with, we're gonna start with cgroups v2 here, you see it on the left there. We're gonna run a stat -fc command, and you should see it return the cgroup 2 filesystem, cgroup2fs, there. If you see that, we're good. Next we'll forward IPv4 and let iptables see bridged traffic. Go ahead and click to run there.
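That click-to-run boils down to the standard Kubernetes container-runtime prerequisites, roughly:

```bash
# load the overlay and br_netfilter kernel modules on every boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# let iptables see bridged traffic and enable IPv4 forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```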
What we're doing here is creating a file, k8s.conf, in that modules-load.d directory, putting in overlay and br_netfilter. Scrolling down now a little bit, there's another click-to-run command, the modprobe command, where we load overlay and br_netfilter as kernel modules. Let's go ahead and run that. And then we have another command just below that to set up some sysctl parameters there. And finally, we can put those into place with sysctl --system. So that should spit out all the different parameters that we've changed there. Applying sysctl.conf, you'll see a bunch of output come out. And then we can run lsmod and grep for br_netfilter and overlay to make sure both of those kernel modules have been loaded. So your output should look something like that, where, probably in red or a different color, you'll have overlay and br_netfilter highlighted. Scroll down a little bit and we will double-check that our sysctl parameters have been changed. So we'll check with sysctl and pass in those parameters. You'll see they're all set to 1, so we're good to go there. Now we scroll down to the bottom, and we have to install two additional tools: socat and conntrack. socat helps redirect traffic within the Kubernetes cluster, and we use conntrack to track connection information between pods and services. So with an apt install socat conntrack -y, we can get those installed there.

Moving on to the next step: in order for us to install Kubernetes, we obviously have to have a container runtime for that to work. So we're gonna use containerd here and get that up and running. Step one has us download the actual containerd gzipped tarball there. You'll see that the download should be pretty quick, and we'll also pull the SHA-256 sum and then check it. With sha256sum -c, you should see an OK spit out there, comparing that checksum to what we downloaded. If you don't see OK, scroll back up; you can click and download that tarball again, just to make sure that you've got it downloaded and cryptographically verified against the hash. Next, extract the containerd archive into /usr/local. So we'll run that sudo tar command, which gives us containerd, the runc shim, and all that. You'll see the expected output from the tar extract. Since we're gonna use systemd to run containerd, we're gonna download the containerd.service unit file. So click to run that sudo wget on step number four. Once we've downloaded that, we automatically place it in the systemd unit directory. Then we can just run systemctl daemon-reload and finally enable --now to start the containerd container runtime.

With that up and running, we'll need to install runc and also do its SHA-256 verification. So you can follow the steps here to wget runc, get its signature, and then do the sudo install. Now that we've installed runc: containerd uses a configuration file called config.toml to specify daemon-level options. The config.toml file is located at /etc/containerd. We'll create that location using mkdir -p, and then containerd provides a command for us, containerd config default. So we'll execute that to get the default configuration options spit out in a format we can use, and then we'll pipe them through tee to create the file in /etc/containerd. What we'll need to do there in step 10 is set the systemd cgroup driver for runc.
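A condensed sketch of those config.toml steps:

```bash
# create the config directory and write out containerd's defaults
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# switch runc to the systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# restart containerd to pick up the change
sudo systemctl restart containerd
```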
So you see we have a sed command that changes that inline for us in that config.toml file, and finally we can use systemctl restart containerd to take those changes, apply them, and restart the container runtime. Once we've done that, we move on to our next step, and we're gonna need to install — I think I went just ahead, I'm good — we're gonna need to install our container networking interface. Rey talked a little bit earlier about how we're gonna use Cilium for this, and we won't be able to have any communication between pods and services, or the kubelet and anything else, before we install this container networking interface. The first thing we'll do is set a couple of environment variables: we're gonna install version 1.3.0 of the CNI plugins, we're using the amd64 architecture, and our destination is that /opt/cni/bin location. So we'll make that directory and then we'll click and download this.

Once we've downloaded the CNI plugins, we'll handle another Kubernetes prerequisite called crictl, cri-control. This is a CLI for CRI-compatible container runtimes, of which containerd is one. So we'll export a few more environment variables here, make that directory, and then download crictl. Finally, down at the bottom, you should see a crictl --version, and if you click to run that, you should get the output crictl version v1.28.0.

Now that we have the CNI plugins installed and crictl installed, we can do the Kubernetes installation. Once the Kubernetes installation is complete, the pods still won't be able to talk to each other, because we haven't yet installed Cilium; we've just laid the groundwork for doing that in the previous step. So we'll start here by checking the latest Kubernetes version with this curl command, which grabs the stable.txt file from dl.k8s.io. We see 1.28.3 is our latest version. So let's take that 1.28.3 and put it, along with our architecture, into these environment variables as the release version we specified there. We'll change directory into our download directory, and then we'll actually download a bunch of Kubernetes tools, kubeadm and the kubelet specifically. We will run sudo chmod +x, and that's going to add the executable bit to those files. Then we'll want to download the kubelet systemd unit file, so you can click to run that curl command there. We're going to use systemd to run the kubelet, so it will maintain the kubelet's lifecycle for us, get it operational, with the watchdog and everything that systemd brings. We'll make a directory for our kubelet drop-in config, and then we will put that content in there and run that other curl command. The expected output should download all that stuff along with the unit file, and then you should be able to run systemctl enable --now kubelet; the symlink is created, and now the kubelet is actually running.

So at this point we have the kubelet installed on this node. It's going to be in a crash-loop situation, because it doesn't have an API server to talk to. There's no CNI installed, nothing like that; none of the prerequisites have been met. But we do have a kubelet here that's ready to start containers on this host once it gets connectivity to the Kubernetes cluster. So let's install kubectl, and then in a few minutes we'll use kubeadm to actually set that cluster up. To start with, we'll install kubectl by downloading that binary. We'll download the checksum and then run sha256sum to make sure kubectl downloaded okay.
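For reference, that download-and-verify sequence follows the official kubectl install docs, roughly:

```bash
KUBE_VERSION=v1.28.3
curl -LO "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kubectl.sha256"

# verify the binary against its published checksum
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check

# install it, then confirm the client version without touching the API server
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
```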
Again, if you don't get an OK there, run the download one more time. Then we'll use sudo install to install kubectl, and we should finally be able to test kubectl. If we pass that --client argument, we're not going to reach out to the Kubernetes server, so we won't have to connect to an API that isn't working at this point. And by running kubectl version we see we've got 1.28.3, that latest version of Kubernetes. If we scroll down and we run kubectl cluster-info, we're going to get a bunch of messages here about not being able to connect to the Kubernetes API: we can't reach this host on this port. That is all expected. We do that to highlight the fact that, again, we have kubectl to interact with the Kubernetes API, but the other end of that connection — the kubeconfig file, the actual Kubernetes API itself — is not up and running yet. The next step is where we will get those things going.

So we click next, and on step 12 we finally create a cluster with kubeadm. The first thing we're going to do, since we installed kubeadm in the previous step, is execute a dry run to make sure everything we've done up to this point is okay. We execute kubeadm init, and we pass the --cri-socket argument — that's the CRI socket kubeadm is going to use to connect to our container runtime to actually start containers and configure the host. We pass the --dry-run argument just to make sure that everything is sane; we're not going to apply any of these changes. And then I appended && echo "dry run ok", so if that previous command succeeds, you should see "dry run ok" show up at the bottom. If you don't, take a look back at some of the previous steps — the error message that may or may not show up might have information for you — but hopefully you've clicked all the same boxes we have, so you should have "dry run ok" at the bottom. If the dry run is successful, then we basically run the exact same command on step number two there, but without the --dry-run argument attached. This is actively going to communicate with that CRI socket, start containers, and get the Kubernetes cluster up and running. We haven't yet installed the CNI — we have Cilium yet to go — so once this Kubernetes install completes, we will be able to interact with the API, but our node will remain in a NotReady status, and we will not yet have pod-to-pod communication. So don't fear if you're stepping a few steps ahead of us and you're like, oh no, this isn't working; we're gonna get there.

So after just a minute or two, we now see the kubeadm join command show up at the bottom, and that's what we could use to join other nodes if we had multiple nodes that we wanted to join to this cluster. You see it says to follow the instructions from the end of the kubeadm init command to copy the kubeconfig file into a specific location; we've automated that with the click-to-run. So you do that click-to-run command, and it's going to take our kubeconfig file, put it into the .kube directory in your home directory on your system, and kubectl automatically looks in that directory for that config file. Then in step three we can use kubectl cluster-info, and when you click that, we see the Kubernetes control plane is running at blah blah blah, CoreDNS is running at blah blah blah, and to further diagnose problems you can do blah blah blah. So that means kubectl has now been able to access the Kubernetes API.
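Condensed, the kubeadm sequence from this step looks roughly like this, assuming containerd's default socket path; the kubeconfig copy is what the lab's click-to-run automates:

```bash
# sanity-check everything first without changing the host
sudo kubeadm init --cri-socket unix:///run/containerd/containerd.sock --dry-run \
  && echo "dry run ok"

# the real thing: bootstrap a single-node control plane
sudo kubeadm init --cri-socket unix:///run/containerd/containerd.sock

# copy the admin kubeconfig where kubectl expects it
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

kubectl cluster-info
```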
We have details about where that API lives and where the CoreDNS endpoints are, so we know we're able to talk to part of Kubernetes. Now, if we scroll down, we want to click that taint step and untaint the control plane node there. You see the step kubectl taint nodes --all, and we're gonna remove the taint on the control plane node. Finally, if you click kubectl get nodes, you should see a single node showing up in our Kubernetes cluster. Plain vanilla Kubernetes: you've got the IP address and hostname showing up there. Our status, again, is still NotReady — we have to get the pod networking going — but you see the control-plane role there, the age of the node, and that we're running version 1.28.3.

If we click next, now we'll do our Cilium install and get ready to rock with actually communicating between pods. The first thing to do is install the latest Cilium CLI. So click to run at the top: it's gonna export a few different environment variables and download that Cilium CLI tool. Then we can click to run cilium install --version 1.14.2; that's gonna auto-detect the Kubernetes API, again using that kubeconfig file, and we're gonna install Cilium as the CNI in that cluster. The next thing to do is click on cilium status --wait. This command is gonna take a few minutes to complete, and what's happening in the background here is that the status --wait argument to the cilium tool is going to check through the Kubernetes cluster and ensure that the Cilium pods have been started, that they're correctly interfacing with the host, that they're configuring things like iptables rules, and that there's correct communication between pods. And you'll see here, after a few moments, the output shows up — with the little colored thing there, with the butterfly or whatever it is — Cilium's okay, the operator's okay, everything else is good to go. Now, if you scroll down just below that output on the left, there's a kubectl get nodes command. You run that and, boom, our node is now Ready. So we've transitioned from a NotReady state to a Ready state just by getting that container networking interface up and running. Cilium's installed, so therefore the node's good to go; it's ready to receive workloads. In fact, if we click on kubectl get pods -n kube-system, which is a click-to-run there, we see there's already a bunch of pods running in our cluster. We have the Cilium operator and the single Cilium pod from the DaemonSet there, we have CoreDNS for providing DNS services in the cluster, etcd is running for our storage, and the kube-apiserver and other various components that comprise the Kubernetes control plane.

Okay, now that we've got our Kubernetes cluster up and running, let's learn about some of the basics: creating a namespace, deploying some workloads, things like that. The first thing to do is create a Kubernetes namespace. A namespace isolates a group of resources within a Kubernetes cluster, so you can think about it as being a device for logical segmentation of resources: I can take pods, services, config maps, secrets, whatever, and I can group them together using this namespace construct. We'll use the click-to-run here, kubectl create namespace my-namespace, to create our namespace.
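That click-to-run, together with the pod we start in the next step, amounts to roughly:

```bash
kubectl create namespace my-namespace

# run a single nginx pod in that namespace, expecting traffic on port 80
kubectl run nginx --image=nginx --port=80 -n my-namespace
```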
Once we've got our namespace created, let's actually start a container inside that namespace. So we'll run kubectl run nginx, and we pass a couple of arguments there: the --image argument tells it what image we want to use, and by virtue of not passing a specific registry, we're going to use Docker Hub to download that nginx image. We also pass a --port argument that says Kubernetes should expect traffic to arrive at this pod on port 80, and finally that -n says, hey Kubernetes, deploy that pod into the namespace we just created. You should see an output that looks like pod/nginx created. And then we have a command just below it where we can actually get the IP address of that pod inside of our cluster and run a curl command to access the nginx web server running inside that pod. The output then, as you see — we've done a little truncation using the head command — is "Welcome to nginx." If you actually wanted to see what the IP address of that pod was, you could run a command like kubectl get pods -n my-namespace and then -o wide, that being the wide output format. "No resources found in..." — oh, I had a little typo; we'll change that there, and now you see the IP address of my single pod is 10.0.0.215. Yours may or may not be the same, but you see how we're getting to that curl command and actually reaching the pod. Finally, on step number five, let's delete the pod in my-namespace. We wanna get rid of that because we're gonna move on to the next Kubernetes concept, which is a deployment.

A deployment is a Kubernetes resource that provides declarative updates at a controlled rate for pods. So this time we're gonna create a Kubernetes manifest and use it to roll out our pod as part of this deployment, instead of creating the pod directly. First, let's make a manifests directory inside of our home dir on this workstation. Once we've done that, there's a click-to-run file here. Before you click to run — or you can do it now if you want — you can take a look at the structure of that file. This describes a Kubernetes resource. You'll see at the top the apiVersion: it says that we're using the apps API, which is a core API group from Kubernetes, at version v1. The kind of resource we're describing here is a Deployment, and everything below that section is specific to this resource. So we're creating the nginx deployment — you see the name there — and we're putting it in the namespace my-namespace. We've added some labels to it, to say, maybe we wanna look up all nginx-related apps: we can use querying methods, and actually Kubernetes resources themselves can use querying methods, based on those labels that we apply there. Underneath the spec section, we describe how we want our pods to be created. So we're gonna create two replicas of this nginx pod. We have a selector there, again with that label-selection ability that we have. And finally, in the template section, we describe what the actual pod manifest should be. So you're not only creating a deployment using this manifest, you're describing, inside the manifest, the template — the manifest — for the pod to actually use. So it's sort of one within the other kind of thing; the full manifest is sketched below. Now that we've created that manifest file, we can use kubectl apply, which is the command to read in manifests and deploy them. And if you click to run that, we see that the deployment nginx has been created.
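A sketch of that deployment.yaml, reconstructed from the description above; the lab's actual file may differ slightly:

```bash
mkdir -p ~/manifests
cat <<'EOF' > ~/manifests/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: my-namespace
  labels:
    app: nginx
spec:
  # keep two identical nginx pods running at all times
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  # the pod template embedded in the deployment
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
EOF
kubectl apply -f ~/manifests/deployment.yaml
```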
We'll scroll down a little bit and use kubectl get deployment, passing in -n my-namespace, and we see we have a single deployment called nginx: two of two ready, two up to date, two available. What that means is that our replicas count in the manifest was set to two — we want to have, at any one time, two pods running — and we have two of two running, so we're good to go. The deployment has done its job; it's reconciled; it's done that declarative-configuration thing we touched on. Now that we've done that, we scroll down and we can use get pods in my-namespace, and there we see our two pods, each with different IP addresses.

Instead of accessing the nginx web server running in each one of these pods directly, we're gonna use something called a service. A service defines an abstraction that groups a number of pods together. You can think about it, in a very, very simplistic way, as being a static IP address. Pods are mortal — they live and die — so we expect a pod to disappear at any one time and another pod to take its place, so we don't always wanna send traffic to a single pod if it's going to disappear. We can use a service instead, which targets pods based, again, on that labeling mechanism we talked about, in order to route traffic to those pods. So think about it as just a static reference, a static IP if you want. There are other types of services for groups of pods as well. We'll create a service manifest here, so you can click to run on that file that's there. Once you've created that service manifest, you can click to run kubectl apply with that service.yaml as input. With that nginx service created, you can use kubectl get service, and that shows that we have a NodePort service running. That means that on our Kubernetes node here, we are exposing a port — in this case 30000 — and we should be able to send traffic from our workstations to port 30000 and get an nginx pod to answer. In fact, there's a link, kubernetes.<ip address>. If you click on that: welcome to nginx. And this is going to load-balance — the service will automatically balance between the two different pods that are there. So if I decrease the replica count, if I increase the replica count, if one of the pods goes unavailable, we should see the traffic automatically switch over, each request going to a different pod as availability determines. So we'll close that, hit next.

All right, next we're going to install Helm. Helm is the package manager for Kubernetes, and we're going to install and use Helm in later steps as well. We'll just do a curl command to get Helm, then do helm version --client just to test and make sure Helm is installed correctly. Yes, we do have Helm 3.13.1. And let's go to the next step — that was a short one.

Next, we'll go into observability. There are three pillars of observability; we will touch on metrics first, and we are going to use Helm to install Prometheus. We're actually going to use what's called the kube-prometheus-stack, which is a community-maintained Helm chart that doesn't just install Prometheus: it's batteries-included. It installs the Prometheus Operator, and it actually gives you Prometheus rules as well, Alertmanager for alerting, and Grafana to visualize your metrics. It will also install several things that Prometheus requires to pull metrics, like kube-state-metrics and the node exporter. So before we actually install the kube-prometheus-stack, we need to add the Helm repository, the prometheus-community Helm repository, with helm repo add.
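The repo-add and install commands look roughly like this. It's a sketch; the lab's exact --set overrides may differ:

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# install the batteries-included stack with NodePort services so we
# can reach Prometheus and Grafana from outside the cluster
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace kube-prometheus-stack --create-namespace \
  --set prometheus.service.type=NodePort \
  --set grafana.service.type=NodePort
```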
Typically after you add a Helm repository, you want to do a helm repo update — and if you're going to update a Helm chart later, you always want to do a helm repo update first. Then we'll use helm install to install the kube-prometheus-stack. We will do a helm install of kube-prometheus-stack from the prometheus-community repository's kube-prometheus-stack chart. We're going to set a few overrides with the --set option. --set will override several options here, like creating NodePort services instead of ClusterIP, so we can access several of the services from our local machine. So let's do the helm install. This will take a few minutes here; I'll just walk through some of this helm install as well. We'll override, like I said, the NodePort type of service instead of ClusterIP. We'll also create the namespace, kube-prometheus-stack, where all the resources are going to be installed, and we'll do a namespace override for that namespace as well. Same thing for Grafana, and same thing for kube-state-metrics and the node exporter.

So once the kube-prometheus-stack is finished installing, we'll check to see how the pods are doing. It will take a minute or a few to actually get the pods up and running. So we will use kubectl --namespace kube-prometheus-stack get pods, and we'll take a look at the Prometheus pods with the -l label filter. So we have the pods running for Prometheus. Let's take a look at the services for the kube-prometheus-stack as well: we'll do a kubectl get services in that kube-prometheus-stack namespace. So it looks like our services are up and running. We have several NodePorts that we will use to access Prometheus. So let's click on the link to port 30090 — as you see from the output of get services, 30090 is the NodePort for Prometheus. So we should have Prometheus up and running and should have access to it. We do give you a simple PromQL expression to run as well, and this PromQL expression shows the total amount of CPU time spent over the last five minutes. So we can run that quickly here and execute it, and we should get output.

All right, going back to HobbyFarm. We can also take a look at Grafana as well. So let's click the link to Grafana, which is on port 30140. We do need to log into Grafana, with admin and the chart's default password, prom-operator. It's still loading Grafana — the pods are probably still starting here — so we'll give it a minute. All right, so let's log into Grafana. Grafana does come with a number of built-in dashboards. Of course, you can always create your own dashboard as well, and you can also export and migrate dashboards — they are just config maps. We'll go to the hamburger menu here, we'll go to Dashboards, let's go to Kubernetes and take a look at the kubelet dashboard. So here we can see that in this cluster we have one kubelet — of course, we have a single-node cluster — it's running 17 pods, and within those 17 pods it's running 30 containers. And we can see some metrics here with this kubelet dashboard.

So the last step is policy. We are going to install OPA Gatekeeper. Of course, Open Policy Agent is a CNCF graduated project, and with Gatekeeper, I find it easier to customize the admission webhook; we'll take a look and see why that is. So we are going to install Gatekeeper using Helm as well. We'll add the Open Policy Agent Helm chart repository with helm repo add.
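The Gatekeeper install mirrors the Prometheus one; the chart repo URL here comes from the Gatekeeper docs:

```bash
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm repo update

# install Gatekeeper into its own namespace
helm install gatekeeper gatekeeper/gatekeeper \
  --namespace gatekeeper-system --create-namespace
```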
And we'll do a helm repo update as well. Then we'll use helm install to install Gatekeeper, giving it the name gatekeeper and the namespace gatekeeper-system. So once Gatekeeper is up and running, a few things before we actually use it: it uses what's called the constraint framework. With OPA, you actually write your policies in a language called Rego. So with Rego we'll write our policy — in this case, we're going to have a required-labels policy: we're going to specify that certain resources in Kubernetes have to have a specific label. In the constraint framework, you have what's called a constraint template, which is the template for your policy, and a constraint, which applies that constraint template; the constraint is where you define what types of resources you want the template to apply to, and any other specifics for that constraint template you want to set as well.

So let's install a constraint template. This one is for required labels here; the Rego is at the bottom, and it says that we will require specific labels. Once that is installed, we'll do a few tests as well. We'll install the first one: it installs the constraint to enforce that policy on namespaces, and we want namespaces to have a label with the key "owner". So any new namespace must have a label with the key owner. Let's test this constraint out. We will attempt to create a namespace, but this time we will have a label, owner=you, and so with this, we expect the namespace to be created. Next, let's test this constraint again with a namespace without any labels: we'll try to create a namespace with no labels at all, just with a name. So now we get an error: Error from server (Forbidden). So OPA Gatekeeper has worked; it enforces our guardrails, our policy that says you must have a label on namespaces with the key owner.

All right, and that's it for our hands-on portion. We have quite a bit of time, so we're here for questions, and also to help any folks who are stuck on any of the steps. If you want to come up as well, feel free, and we'll answer any questions. I know there are mics around as well, but you can also feel free to come up to us. All right, thank you. All right, thank you for your time. We apologize for the issues early on, but the steps are on GitHub, and the YouTube playlist — I mean, the link — is on GitHub as well, so you can take a look to see how those scenarios run. This scenario is short-lived: it's only going to be available until probably 4:30 or 4 o'clock. But the steps are available on GitHub to run on any virtual machine outside of HobbyFarm. So these virtual machines only stay up until 4:30, but if you go to github.com/cloud-native-essentials, all the content, including the slides, is there. Yeah, all the slides and the HobbyFarm steps. The steps are in Markdown format; you can replicate them on whatever VM you'd like. It's Markdown, and you can actually see the actual HobbyFarm scenario as well, so you can see everything. And github.com/cloud-native-essentials — yeah, it's already there, in PowerPoint and PDF as well. I came in late. Can I still get into this scenario and start it?