Thanks everyone for joining this session on edge computing using K3s on Raspberry Pi. My name is Jeff Sparr. I'm a senior engineering manager at Lendova.

So let's start with: what is edge computing? It's when you're bringing compute and storage closer to the source of the data. These are typically sites that are not data centers, so it could be factory floors, vehicles, retail stores or restaurants, wind farms. There are often constraints around space, power, or cost. And what makes this project edge computing? We've got a small form factor: the Raspberry Pi, which is about the size of a wallet. We have low power consumption with the arm64 processor. And K3s has some abilities to bootstrap applications without an internet connection as well.

So why Raspberry Pi? Well, the latest one comes in three sizes: a 2 GB, 4 GB, and 8 GB option. My inflection point for getting into this was when the 8 GB option came out around mid-2020. For me that was: yep, that's enough to run a Kubernetes node with the applications I'm interested in, so this will be a fun home lab project. They all have the same processor, a quad-core arm64. They're small and quiet, they're inexpensive, and they have low power consumption. This was important for me because I've always had an interest in having a home lab, but I don't have a ton of space. And whenever I do the calculations on what it would cost to run a real server in my home for an entire year, that's usually enough to tell me I'm not going to do that project, and I move on to something else. So this was a good opportunity for me to get into this.

Why K3s? (That's spelled K3s, pronounced "keys," by the way.) Well, it's packaged as a single binary, which makes it pretty easy to install and also gives it a smaller memory footprint. It comes with things out of the box that I needed, like a local storage provider and the Traefik ingress controller. It has really good arm64 support.
It's a very active project with a large user base, and it's backed by Rancher, so it's probably not going anywhere.

For the parts list: I started with three Raspberry Pis and three SD cards. I had a use case where I needed more capacity, so here's a place where you can save some money if you don't need quite as big of an SD card. Three power supplies: if you look around the internet on this, most people recommend not using just a USB-C phone charger that you have lying around, because you're not guaranteed to get the consistent voltage that you need. I didn't want to deal with that, so I just bought the official power supplies. I bought one case and a couple of micro-HDMI to HDMI adapters. The total came in just under $400. I bought this in pieces, too; if I'd done it all at once I probably would have thought twice, because I didn't realize I'd spent $400 on this. So it's relatively inexpensive compared to, you know, buying three servers and doing anything with them.

Core project goals. For me, I wanted to get to the Kubernetes API as fast as possible; every layer below that was slowing me down. It's important to note that this was my goal: spend time on the lower layers if you haven't already. I wanted to be able to take nodes out, whether for an upgrade or to swap one out because of a hardware failure, without disrupting the applications I have running on there. I wanted it to be relatively inexpensive. I wanted to contribute back along the way if there was an opportunity. And I wanted to capture everything as code so this is reproducible, both for myself and for others who are interested in getting into this.

So step one was to create an image. I used Ubuntu; there's really good arm64 support with Ubuntu. There's a tutorial linked on how to image your SD card. Basically (I was using a Mac at the time to do this): list your volumes.
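As a sketch of the flashing sequence he's about to walk through, on macOS it looks roughly like this. The disk identifier and image filename below are examples, not values from the talk; double-check the device name before running dd, since writing to the wrong disk is destructive.

```shell
# List volumes to find which device is the SD card (macOS).
diskutil list

# Unmount the SD card (here assumed to be /dev/disk2 -- check yours!).
diskutil unmountDisk /dev/disk2

# Decompress the Ubuntu arm64 preinstalled-server image and write it
# straight to the raw device. Image name and device are placeholders.
xzcat ubuntu-20.04-preinstalled-server-arm64+raspi.img.xz \
  | sudo dd of=/dev/rdisk2 bs=4m
```

Using the raw device (/dev/rdisk2 rather than /dev/disk2) is a common macOS trick to speed up the write.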
Unmount the one that's the SD card, and do this little one-liner where you're unzipping the image and dd-ing it to that SD card.

It was important to me to have each Pi just join the network and get an IP address on boot, because I didn't want to have to hook each one up to a monitor and keyboard and configure networking. So I used netplan to do this. It's a pretty simple configuration: here's the name of your access point, here's the secret to join.

And then the OS bootstrapping is super minimal; I only have five Ansible tasks. We'll go through here, and we'll start running it while I walk through what the files are. Only one of these is actually required to run K3s. You'll see the first thing I'm doing is just setting the hostname. Then I'm adding my user and saying don't require a password for sudoing with my user. I'm putting my SSH key on there, and this is a pretty useful trick if you don't know about it already: you can just grab your public keys off of GitHub. And then this was the important one: enabling cgroups with this boot option. That's required in order to run K3s, or Kubernetes in general. I can see this is going through and running; it's already been through one node. So let's start the third node. And after it finishes the third node, the next thing we'll do is actually install K3s.

To install K3s I use this project called k3sup (pronounced "ketchup"). This is straight out of the README: it's a lightweight utility to get from zero to KUBECONFIG with K3s. All you need is SSH and the k3sup binary. This made things really simple; I'll walk through what it does. And I can see my Ansible playbook finished, so to save some time, let's go ahead and start this K3s install while I walk through what the install file does. It's a really simple bash script just wrapping k3sup. Here's my Kubernetes version, here are my three different nodes, and here's the user I want it to connect as. The first one is a k3sup install.
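That first k3sup install step might look roughly like this. The IP address, user, and version below are placeholders, not the actual values from his repo:

```shell
# Placeholder values; substitute your first node's IP and SSH user.
export K3S_VERSION="v1.21.2+k3s1"

# First node: bootstrap the cluster over SSH and fetch the kubeconfig.
# --cluster starts this server with embedded etcd so more servers can join.
k3sup install \
  --ip 192.168.1.101 \
  --user ubuntu \
  --k3s-version "$K3S_VERSION" \
  --cluster
```

Running it from your laptop leaves a kubeconfig file in the current directory, which is what the script later tells you to export.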
And the second node is a k3sup join, and the third node is the same thing, a k3sup join. At the end it reminds you to set your KUBECONFIG and tells you where it is. Over here I can see that's running; you can follow along. It's grabbing the binary and installing it to /usr/local/bin/k3s. It's symlinking some utilities you'll need to the k3s binary since, as I mentioned, it's a single binary. It's already done one node and is moving on to the second; I mentioned the second one is a join versus an install. It'll do the same for the third as well, setting up the unit file, and then it's going to start k3s at the end.

While that's running, I'll talk through some of the bootstrapping mechanisms you can use. Any manifest or Helm chart that sits in this folder on the server, /var/lib/rancher/k3s/server/manifests, will get applied as if it were a kubectl apply -f. This is all stuff that comes out of the box: CoreDNS, that local storage provider I mentioned, the metrics server, and the Traefik ingress controller. But you could use this, if you wanted, to add your own manifests for things you want to bootstrap. And if you do have constraints around not having an internet connection, there are some tricks you can do with crictl to manually unpack an image; you'd do that on each of the nodes. Here's a link on how to do that. That'll work, though it's probably a better option to have a private registry. This is a little bit hacky, but it might work for your use case.

Let's check back in on the install. It's on the third node, and it's just starting the k3s service. When that's done, we'll walk through what a node upgrade looks like.

For node upgrades, I wanted to be able to do this through the Kubernetes API if possible. I didn't want to have to SSH to the server and do some things, or rely on Ansible to do this. Luckily, this problem was already solved: Rancher has a controller called the system-upgrade-controller.
It's a controller, and it watches for plans. When it sees a plan that it needs to act on, it will go ahead and run a job, going through each node and doing the upgrade the way you described it.

It looks like we're almost done. That was the second node, not the third one, so we're on the third one now. I don't want to get too far ahead; it should start pretty soon. Okay, while we wait for that, we can talk about some of the other things I consider to be core components. Nope, it's done; we'll come back to those. So we're done. We've got a kubeconfig file that it dumped here, so let's export that, and then let's do a kubectl get nodes -o wide. We'll see we've got three nodes. They're all control-plane nodes; they all have that role. The version here is v1.21.2, which is what we specified in that install file. Here are their different IPs, here's proof that we're running Ubuntu, and, you know, the kernel version and container runtime version. That's exciting: that was pretty low effort to get a three-node Kubernetes cluster out the door. Some of you might notice the age of these nodes is more than just a couple of minutes. That's because I've already gone through this several times, but you can rerun the Ansible playbooks and rerun the install, and it'll actually go through and do it all again.

So, node upgrades. We talked through the system-upgrade-controller; let me show you what a plan looks like. Here we have an upgrade plan that includes a server, which is a control-plane node in K3s terms, as well as an agent, which is just a worker node. For my cluster, we only have servers; they're all control-plane nodes. But I included both in here in case that changes. So what we're going to do is just rev this version number. It was v1.21.2; let's change it to v1.21.3. We'll save that and apply the plan (this is a CRD of kind Plan), and it will show up as a job. Here's the job right here.
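For reference, a server plan for the system-upgrade-controller looks roughly like the upstream example below. The names, namespace, and concurrency here are illustrative rather than copied from his repo; the version field at the bottom is the one being revved in the demo.

```yaml
# Illustrative system-upgrade-controller Plan for K3s server nodes.
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1           # upgrade one node at a time
  cordon: true             # drain-friendly: cordon the node first
  nodeSelector:
    matchExpressions:
      # Only match control-plane (server) nodes.
      - {key: node-role.kubernetes.io/master, operator: Exists}
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  # Bumping this version is what triggers the controller to act.
  version: v1.21.3+k3s1
```

An agent plan looks the same, except it selects worker nodes and typically waits for the server plan to finish first.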
And as that job starts running, we should see it cycle through each of the Kubernetes nodes. Okay, so this is actually a good sign: you can see that my connection to the API server failed. This is something I'll talk about in one of my next slides; it's the only piece that's not HA. This is HA in the sense that there are three control-plane nodes, they're all running etcd, and that's clustered, so if one fails, then, you know, jobs will continue to run on other nodes. The piece that I still need to get working is DNS for the Kubernetes API, which currently just points to that first node. There's a project out there called kube-vip that will help with that: it can do things like ARP between nodes, so we can move that IP if that node goes down.

That error means that node's done already. That was the first node; you can see now it's on version v1.21.3. And we can look at the jobs: one job is complete, we've got another job running, and it will cycle through each of these nodes until they're all on v1.21.3. So that's pretty neat. All of our over-the-top Kubernetes management is done through the Kubernetes API. We'll come back and look at that again and see how it's progressed.

As for what I consider to be core components: I included these all in this manifests folder in the repo. These are all things that I think any Kubernetes cluster should have at a bare minimum. My original intent was to watch this folder with Argo CD, so we'd have a GitOps flow where I'd update the repo and that would automatically get applied to the cluster. I'm going to be coming back to that: Argo CD doesn't have an arm64 image yet, but it's pretty close, so once that's available, I'll pick up that piece of the project. For now, I'm going to kubectl apply these files. I wanted to avoid shortcuts.
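Until Argo CD can take over, that manual step is just a plain apply of the folder, something like this (the paths are examples, not from the repo):

```shell
# Point kubectl at the kubeconfig k3sup wrote out, then hand-apply
# the core-components folder. Both paths here are placeholders.
export KUBECONFIG=$HOME/kubeconfig
kubectl apply -f manifests/
```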
So I'm using real DNS, and I've got external-dns, which is a pretty useful controller that plumbs into my DNS provider. As I create, you know, Ingress resources and that sort of thing, it's going to also create a DNS record for them. I'm also using cert-manager to make sure I get real certs. It's so easy these days with cert-manager and Let's Encrypt to get real certs that there's kind of no reason not to go down that route. That lets you avoid clicking through browser warnings, even for your home lab stuff, which is actually pretty nice.

Let's check on our upgrade job. We can see that second node has now been upgraded; we've got version v1.21.3. You'll notice that also includes a container runtime update. I'm going to wait for the two newer ones; the old one is still 1.4.4.

So what's next for this project? I mentioned kube-vip. I need to finish that up in order to have, you know, API server failover and make this truly HA. That's already out and available; I just ran out of time, so I'll come back to that and update this repo once I have it working. Then Argo CD arm64 support: I started this project over six months ago, and now they're very close to having arm64 support built into their build process. That's coming in version 2.2, and they're on version 2.1 now. That'll make this a full, you know, GitOps style of management. I also want to spend some time looking at something like Tinkerbell to bootstrap over the network instead of imaging an SD card. That's a fairly manual process: taking an SD card, plugging it into your laptop, running that dd command to image it, then putting it back in your Raspberry Pi and hooking the Pi back up. If you were doing this at scale, you'd probably want something like a PXE boot and OS install, and Tinkerbell is a project out there to help manage some of that.

Let's go back and check on these upgrades.
Looks like it's still going. The third node did move to NotReady, so that's probably restarting. There we go: all three of them have been updated. We've got version v1.21.3, and the container runtime is on 1.4.8. So a pretty smooth process for upgrading your Kubernetes cluster.

I forgot to mention: you can also do that for the operating system, or any other arbitrary package, as well. I'm not going to run through that in the interest of time, but let's take a look at what one of those plans looks like. This one is for upgrading the OS. I have this arbitrary version number, since all I'm doing is an apt upgrade over here; the system-upgrade-controller just needs to see a version number change so it knows it needs to act on the plan. This was the best way I could think of to solve that for now. If I were to change that to 0.1.5, it would say, hey, I've got to apply this new plan, and it would run this upgrade.sh script, which does an apt update, installs this firmware updater for the Raspberry Pi (which is really useful to have as part of the script), then does an apt full-upgrade and reboots if required. So, same thing: OS updates managed through the Kubernetes API.

So thank you; I appreciate your time and you coming to this session. I'm including a GitHub link here for that repo, which should be pretty out-of-the-box: you can follow that README and you'll end up with a three-node cluster with all these over-the-top components sitting in that manifests folder as well. I've linked my profile; feel free to reach out. And I'm also hiring, so here's the link to the current open job description. Feel free to reach out to me directly, as well as to apply for that job through the portal. Thank you everyone. Let's move on to the live Q&A session. Have a great day.