Hello, everyone, and welcome to this session, Running Kubernetes on a Raspberry Pi. My name's Andrew Pruski. I'm a SQL Server DBA, Microsoft Data Platform MVP, and Certified Kubernetes Administrator. I originally worked on getting Azure SQL Edge running on a Kubernetes cluster hosted on Raspberry Pis, and I've blogged about it on the link there. If you want to check that out, you're more than welcome, but let's go ahead and start on how to configure our Raspberry Pis. The first thing we need to do is flash the OS. Once you get your hands on a Raspberry Pi, plug it into your network, don't power it on yet, grab your SD card, and grab an IMG file from the Ubuntu site. Once that's pulled down, you can flash your OS. Now, I'm using Rufus here, but Etcher would work as well. Select the IMG file, hit Start, let that complete, and once that's complete, you can plug your SD card into your Raspberry Pi and power it on. Now, I'm going to have this short screenshot here of my Raspberry Pi cluster. I went with a four-node cluster, one control plane and three worker nodes, but you can do all of this on just one Raspberry Pi. So let's have a look at, firstly, configuring our control plane node. So it's up and running, and there are a few steps we need to do to get this working. We need to set a static IP address. I don't like using the default user for Ubuntu, which is the ubuntu user, so we're going to disable that and create a custom user. Then I like to set up key-based authentication, because I don't like typing in my passwords. Set a hostname for my Raspberry Pi; I went with k8s-control-1. Enable the memory cgroup, which is required for installing Kubernetes, and then we can install our container runtime. Now, here I'm installing Docker, because it's a nice and easy apt-get install -y docker.io; enable it, and then grant my user access. Now, I realise that as of v1.20, Docker is deprecated as a container runtime.
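As a rough sketch, the control plane prep described above might look something like this on Ubuntu 20.04 for Raspberry Pi. The username is just an example, and the cgroup settings go on the kernel command line in /boot/firmware/cmdline.txt on this OS image:

```shell
# Create a custom user with sudo rights, then lock out the default ubuntu user
sudo adduser andrew                 # example username
sudo usermod -aG sudo andrew
sudo usermod --expiredate 1 ubuntu  # disable the default account

# Set the hostname (example name; use your own)
sudo hostnamectl set-hostname k8s-control-1

# Enable the memory cgroup, required by Kubernetes:
# append the following to the single line in /boot/firmware/cmdline.txt,
# then reboot:
#   cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1

# Install Docker as the container runtime, enable it, grant the user access
sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
sudo usermod -aG docker andrew
```

These commands need to run on the Pi itself, so treat them as a reference rather than something to paste blindly; a reboot is needed after both the cmdline.txt change and the group membership change.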
containerd or CRI-O are recommended now. And if you want to check out how to convert a cluster that's running Docker as a container runtime to, say, containerd, I've blogged about that on the link there. But for now, we're going to stay with Docker. So we have our OS configured, we have Docker installed, and we can now go ahead and install Kubernetes. The first thing we need to do is install some prerequisites: apt-transport-https and curl. Then we can add the Kubernetes GPG key, and then we can add Kubernetes to the sources list. Now, I've noted there that xenial isn't the code name for Ubuntu 20.04, which is the OS I'm using; that's focal. But if you try to use kubernetes-focal in your sources list, you will get the error there, because the Kubernetes apt repository only publishes a xenial package list. So to avoid that error, we're going to use xenial. Once the Kubernetes source is added, we can run sudo apt-get update, and then we're good to go and install the Kubernetes components. We're going to install the kubelet; kubeadm, which we're going to use to install the cluster; and kubectl, which we're going to use to manage the cluster. I've picked an older version there, 1.19.2, because when I built my cluster I wanted to run through upgrading my Kubernetes cluster on a rolling basis to 1.20 and then to 1.21. But you can definitely just go ahead and use the latest version. So we install those, and then we hold those packages to prevent them from being accidentally upgraded. So we have the Kubernetes components installed on the box. Let's go and actually install Kubernetes itself using kubeadm, and it's nice and simple. We can just run sudo kubeadm init and pipe it to tee with an output file, so we can grab the output. After a couple of minutes, we should get a message like this: our control plane has been initialized successfully, and we've got some messages here to say what we need to do after the install.
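A sketch of those install steps as they stood at the time of this session (the apt.kubernetes.io repository used here has since been replaced by pkgs.k8s.io, and the 1.19.2-00 version pins are examples):

```shell
# Prerequisites
sudo apt-get update
sudo apt-get install -y apt-transport-https curl

# Add the Kubernetes GPG key and the xenial package source
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install a pinned version of the components, then hold them
# so apt upgrades don't move them accidentally
sudo apt-get update
sudo apt-get install -y kubelet=1.19.2-00 kubeadm=1.19.2-00 kubectl=1.19.2-00
sudo apt-mark hold kubelet kubeadm kubectl

# Initialise the control plane, capturing the output for later
sudo kubeadm init | tee kubeadm-init.out
```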
So we need to create a .kube directory in our home directory so we can drop a config file there and connect to our cluster. We need to deploy a pod network, and then it's given us information about how we can join worker nodes to our cluster. But let's firstly configure kubectl. Exactly as in the message output from the install, we create a .kube directory, copy the admin.conf file into it as a config file, and change the permissions on it. Once we've done that, we can run kubectl get nodes, and we'll see our control plane node there. So we can connect to our cluster. The next thing to do is install a pod network, and I'm going to use Weave Net here. Nice and simple: we just run kubectl apply -f with that URL there, and we grab the Kubernetes version. Once we hit that, we'll see it configure and build all the required components to set up our pod network. So we can see a service account, cluster roles, cluster role bindings, and a DaemonSet there as well. Now if we run kubectl get nodes, we'll see our control plane node there with a status of Ready; we are good to go. We can dive in a little bit further by running kubectl get pods -n kube-system to list all the system pods in our kube-system namespace. We'll see the DNS pods, etcd, the controller manager, the API server, the proxy, the scheduler, and our Weave pods there as well. So we've got Kubernetes up and running. Let's go ahead and deploy a pod to it. The first thing we need to do is remove the taint from the control plane node; by default, control plane nodes are tainted so you can't run user pods on them. So we run this kubectl taint command with a minus sign at the end, and that will remove the taint. And now we can run a pod. I'm going to run a really simple one: just a kubectl run nginx from the image nginx. Give that a couple of seconds after it's run, and then we can run kubectl get pods --output wide.
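The steps above, sketched out as commands (the kubectl config lines are the ones kubeadm prints after init; the Weave Net URL is the one its docs published at the time):

```shell
# Configure kubectl, as per the kubeadm init output
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

# Deploy the Weave Net pod network, passing in the Kubernetes version
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# Remove the control plane taint so user pods can be scheduled on it
# (trailing minus sign removes the taint; on newer versions the key is
# node-role.kubernetes.io/control-plane)
kubectl taint nodes --all node-role.kubernetes.io/master-

# Run an nginx pod and grab its IP
kubectl run nginx --image=nginx
kubectl get pods --output wide
```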
The --output wide flag just gives us a little bit more information about the pods, and the reason I'm doing this is because I want that pod IP address. We can see there that nginx is up and running with a status of Running, and we can grab that IP address. As I'm on my control plane node, I can run curl against the IP address, and I get the default nginx landing page back, so I can confirm that I can connect to my nginx pod from within my control plane node. Okay, so we've got a Kubernetes cluster running on a Raspberry Pi, and we can deploy an application to it running in a pod. Once we're on the control plane node, we can connect to that pod and pull back the default page for nginx. But we also want to be able to connect to our applications from outside of our cluster. To do that, we're going to deploy a load-balanced service that will give us an external IP address. But as I'm running on a Raspberry Pi, I don't have a load balancer. So what I can do is install something called MetalLB, and MetalLB will create a load of components that will give me my external IP address. The first thing we do is deploy it: kubectl apply with that URL, and we'll see it create a whole bunch of resources that it needs on our cluster. So we get a namespace, service accounts, cluster roles, cluster role bindings, a DaemonSet, and a deployment. Once those are all deployed, we then need to configure MetalLB. What we do here is create a config file, and at the bottom of that config file we specify a range of IP addresses. These are the IP addresses that our load-balanced services will get as their external IPs. Hit apply, let that go, and then confirm the pods in the metallb-system namespace. If you have a look here, we'll see both pods have a status of Running, and now we can test creating a load-balanced service and see if it gets that external IP.
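As a sketch of the MetalLB setup described above, assuming the v0.9-era manifests that were current at the time (the version number and the address range are examples; pick a range that's free on your own network, and newer MetalLB releases configure pools with IPAddressPool resources instead of a ConfigMap):

```shell
# Deploy MetalLB (v0.9-era manifest layout)
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml

# Configure the pool of external IPs via a ConfigMap;
# the address range at the bottom is an example from a home network
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.100-192.168.1.110
EOF

# Confirm the controller and speaker pods are Running
kubectl get pods -n metallb-system
```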
So we can expose our pod with a type of LoadBalancer, a port, and a target port, and then we confirm that service with kubectl get services. If everything's gone well, we should see that service has an external IP. We can then jump out of our cluster, go somewhere that's outside of our cluster, run curl against that IP, and hopefully, if everything's working, we should see the default nginx page. So we have our Kubernetes cluster, we've deployed a pod, and by using MetalLB we can get an external IP address on a load-balanced service, which we can use to connect to applications running in our cluster from outside of our cluster. Okay, so that's pretty good. If we want to expand our cluster, we can now go ahead and start joining nodes to it. We need to configure the OS and install Kubernetes exactly the same way as we did on our control plane node. Then we need to add entries in the hosts file on each node for all the other nodes that are going to be in our cluster, so they can talk to each other via their hostnames and not just IP addresses. Then we can join the node to the cluster with kubeadm. Now, this was in the message when we initially created our cluster, so we just grab that: kubeadm join, the control plane node name and port, and then the token and the discovery token hash. Run that, and hopefully, if everything goes well, we'll get an output like this confirming that our node is in the cluster. Give it a couple of seconds, and if we run kubectl get nodes, we have our nodes in our cluster. And there's my Raspberry Pi cluster there: my control plane and my three worker nodes. So we could do a whole bunch of other cool stuff as well. I'm just pointing out here that we could buy a really cool case to put our Raspberry Pis in. That's actually the case for my NFS storage Raspberry Pi, which I use to provision persistent volumes into my Raspberry Pi cluster. And if you have a look at my completed setup, I kind of went a little bit overboard, if I'm honest.
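Sketching those last two steps as commands; the angle-bracket placeholders stand in for the external IP that MetalLB assigns and the token values printed by kubeadm init on your own cluster:

```shell
# Expose the nginx pod via a LoadBalancer service and check its external IP
kubectl expose pod nginx --type=LoadBalancer --port=80 --target-port=80
kubectl get services

# From a machine outside the cluster, hit the external IP
curl http://<EXTERNAL-IP>

# On each worker node, run the join command that kubeadm init printed,
# e.g. (hostname, token, and hash are placeholders):
sudo kubeadm join k8s-control-1:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# Back on the control plane, confirm the node has joined
kubectl get nodes
```

If the init output is long gone, a fresh join command can be generated on the control plane with kubeadm token create --print-join-command.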
I have my Raspberry Pi cluster and my NFS server. Behind that I have my Pi VPN server, I have a Pi-hole in there, and I have a Pi tablet that's just outputting some information about my Pi-hole. And I am very, very much banned from buying any more Raspberry Pis. Anyway, I hope that's been helpful. If you have any questions, please contact me. My name's Andrew Pruski, @dbafromthecold on Twitter. Thank you so much for watching.