Good morning or good afternoon, everyone, depending on which time zone you're coming from. Welcome to my presentation, Living on the Edge, an idea that came up about a year and a half ago with a customer who was looking for a way to connect branch offices to a centralized place. Back then they were trying to do it with OpenSec, and it didn't work very well. The customer ditched the idea, but I thought it had merit, so I did some development and playing around with it on my own. Here's the disclaimer; lawyers, you know. The idea is to build small devices that can be placed pretty much anywhere. This one happens to be a Raspberry Pi, but that's just a toy for development. The real idea is, for instance, a rugged server that you can place in a branch office and that only has to be plugged in. The local personnel do not need to know anything about computers; as long as they can plug it into a power outlet and give it some form of network connection, it works. I have connected this device at home, in my hotel room, everywhere, and the mechanism behind it works everywhere. It does not require me to know the IP address of the device, and it works from behind a firewall, pretty much anywhere, as long as the device can create an outbound network connection. Of course, if you put devices in places where they could potentially be attacked, there are always risk factors, so the idea is also to protect against, for instance, a listener on the Wi-Fi, given how weak the Wi-Fi protocols are. We want to create an environment where you can safely deploy software from your centralized environment to the edge device or devices, hopefully many of them, and run it there without needing any IT knowledge in the area. So where could we use something like that?
As I said: branch offices, for instance point-of-sale software, something in that direction. Equipment sites, if you want to run simple network functionality on, say, a wind turbine. Mobile sites, something that may have different connectivity depending on where it is. Set-top devices, if you were to build out something like home control. The idea is really that we only need power and some form of network connectivity: 5G, Ethernet, Wi-Fi, whatever is available in the area. The way this is built at the moment, it does not have redundancy; the idea is that if it breaks, it just gets replaced with another device. You could, of course, build a Kubernetes cluster with multiple machines and tolerate the failure of one. The downside is that you then have more complexity, and you potentially have no way to address that complexity where the device actually lives. The idea is to not trust the local network at all, or to trust it only for traffic that has nothing to do with the control of your Kubernetes environment. The way this is configured at the moment, the tunnel is absolute: all the control traffic goes through the tunnel and automatically gets where it needs to go. Nobody can log in locally, and nobody can hijack the connectivity, unless, of course, somebody finds a bug in OpenVPN that lets them in; but that is a risk you always take. The idea is also to have no login on the edge device: no SSH keys on it, no SSH permissions, so the edge device can only offer the service it is built for. Then we have optional local traffic; this would be, for instance, a port that allows connectivity to things like a point-of-sale register.
And then the admin tooling sits on the remote side of the environment. So what do we need? We only want power and some sort of networking, and we want to be able to ship this to the customer. The software for this is actually on a USB key, which is not even a bad approach for something that will be used in an actual production function instead of just as a demo, although in many cases it would probably make more sense to use a USB-attached drive, because USB keys do not have any kind of error correction. If this thing were to fail, we would replace it with a new one. If you want to update something on this cluster, you update the software you are running on it; it should be invisible to the user, and nothing should have to be done locally to make it happen. If you want to update the OS, the underlay of the Kubernetes cluster, the only way to do that is to replace the drive, but that is also something you can let a person do who has no IT experience. And then, of course, backups would also go through the tunnel to the central site. I've played with this for quite a while, and I found that a lot of the descriptions of how this is supposed to work are not quite what they claim to be, including for k0s, which is at the core of the whole thing. It works very well once you know how to do it, but if you follow the official deployment guide, there are a few things missing, which I'm going to talk about in a few minutes. So what can this run on? Any kind of single-board computer or server, basically anything you can ship somewhere that has a power cord you can plug in. For expeditions, it could be something like a rugged laptop or some sort of rugged computer. I just chose a Raspberry Pi because I had one lying around, because they are cheap, and because they are useful for testing. I will provide a guide on how I did the deployment.
So if you want to try this out on your own Raspberry Pi, there is really nothing you need other than a Raspberry Pi, a power plug, and ideally a keyboard for it. First: a lightweight Kubernetes distribution. There are a bunch of them out there; I'm using k0s, our lightweight Kubernetes distribution, and I'm familiar with it. I like the concept of being able to just download a single binary and build a Kubernetes cluster directly from there. There is a story I should probably tell here. I did something boneheaded when I was just starting to work on this: I built everything, and when I eventually got to the Kubernetes distribution, I got the error message that the binary could not be run on this platform. I looked, and finally found out that the platform was actually ARMv7 instead of ARM64. I thought, oh, that's going to cost me another two days to fix. But I just went by the write-up I had made, and it took me less than an hour to make it work again, counting from the moment I copied the image onto the stick. So it is really not difficult to do once you get around all the pitfalls. I'm basically just using Ubuntu 22.04, written with the Raspberry Pi Imager, which lets you do this, but you can use balenaEtcher or whatever imaging tool you prefer. Then OpenVPN: in this case I'm VPNing into our company network, because my central site is a virtual machine running in our corporate cloud, but you could simply put the whole thing onto a single machine sitting somewhere on the internet. And then I'm using Lens just to show that you can access the environment you build this way. So, the deployment. First of all, I used the Raspberry Pi Imager to build the image. It has built-in configuration. Wait a second, I may actually have it on here. You can just choose an OS: here, under general-purpose OS, you have Ubuntu, and you can basically just choose it here.
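The ARMv7-versus-ARM64 surprise I just mentioned is easy to check for up front. This is a small sketch, not part of my original build notes; it just maps `uname -m` output to the k0s download architecture name:

```shell
# Check which k0s binary variant this board needs before downloading.
# A 64-bit OS reports aarch64/arm64; a 32-bit userland reports armv7l,
# which needs the arm (v7) build instead of the arm64 one.
ARCH="$(uname -m)"
case "$ARCH" in
  aarch64|arm64) K0S_ARCH=arm64 ;;
  armv7l)        K0S_ARCH=arm ;;
  x86_64)        K0S_ARCH=amd64 ;;
  *) echo "unhandled architecture: $ARCH" >&2; exit 1 ;;
esac
echo "k0s build to fetch: $K0S_ARCH"
```

Running this before you download saves you from discovering the mismatch only when the binary refuses to start.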
So the installation is essentially painless. The only things I recommend setting up are the hostname, the Wi-Fi credentials, and login credentials; at least I needed those for the prototype. Theoretically you should eventually be able to build this without them, but it's not quite there yet. Then boot the Raspberry Pi from the image, and everything works. Next, configure OpenVPN. This is also pretty commonplace; just make sure that it actually starts after a reboot. The first couple of times I tried it, the tunnel did not open after the reboot, so there was a little bit of a fight there, but this should be fairly straightforward. Then you download and install the k0s binary; essentially you just go to get.k0s.sh. I'll dig out the exact steps and publish them. The k0s binary does essentially everything that allows you to build, deploy, and undeploy a cluster. And trust me, it took me about twenty attempts to get it to work: on the same image I stopped it, undeployed it, and deployed it again until I finally figured out what the problem was. The biggest problem I found was making the tunnel interface work with the deployment. Normally k0s chooses an existing network interface, either the Ethernet interface or the WLAN interface, and the tunnel is not taken into account. So what you have to do is create a configuration file that has the tunnel IP placed directly into it as the API address. If you do not do this, the API is only going to be available through the regular network interface, and that is obviously not what you want. The configuration file itself is pretty simple; let me see whether I can find it. So why doesn't it work? Well, the simplest explanation is that it is not actually plugged in. Let's see whether this will get us directly into our build.
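For the OpenVPN step, a minimal client configuration looks roughly like this. The server name, port, and certificate file names here are placeholders; in practice you get this file from your network administrator or generate it when you set up your own OpenVPN server:

```
# Minimal OpenVPN client config sketch (edge.conf).
# vpn.example.com, port, and the certificate/key names are placeholders.
client
dev tun
proto udp
remote vpn.example.com 1194
# Retry forever so the tunnel survives network moves and reboots.
resolv-retry infinite
persist-key
persist-tun
keepalive 10 60
ca ca.crt
cert edge.crt
key edge.key
```

On an Ubuntu image, dropping this into /etc/openvpn/client/ and enabling the corresponding systemd client unit is one way to make sure the tunnel comes back up after a reboot, which was exactly the part I had to fight with.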
There we are. See, this takes over. The funny thing is that the shell comes up before it actually lets me log in. So this is the specification that k0s uses as its configuration file, and the important piece is that API address entry. But I found, and this is something I will have to raise with the k0s team, that it theoretically puts all the network interfaces in there; there is an additional segment that lists all the other network interfaces. The downside, of course, is that there is no X.509 certificate for those addresses, so API access through them does not actually work. But since we do not want any other addresses to access our API anyway, this is the way to do it. You can basically even leave the extensions section out, even the network section; the file only needs the spec with the API, and that address is where the API is going to be reachable. It needs to be the address of the tunnel you are using. OK, let's continue. The next step is to configure OpenVPN. This is a standard OpenVPN file that you would typically get from a network administrator, or that you create when you build your own OpenVPN server. Also, after the reboot, verify the tunnel operation; it's important to make sure that it comes up again. Physically, I built this at home, and there the tunnel was built to always come up with the same internal tunnel IP address. When I moved the device to my hotel room, its address in the hotel network was obviously totally different, and obviously you can neither SSH into nor reach an API inside a hotel network from outside. The VPN tunnel essentially gets the device out of that network and into my network, so I can access it pretty much regardless of where it is. Then: download and install the k0s binary.
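The configuration file I'm showing can be sketched like this, assuming the OpenVPN tunnel interface received 10.8.0.12 (a placeholder address); pinning spec.api.address to the tunnel IP is the part that keeps the API off the local Ethernet and Wi-Fi interfaces:

```yaml
# Sketch of the k0s configuration; 10.8.0.12 stands in for whatever
# internal address your tunnel interface (tun0) is given.
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  api:
    address: 10.8.0.12
```

Everything else in the generated file can stay at its defaults or be left out entirely, as I mentioned; this spec with the API address is the piece that matters.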
This is configured with the k0s binary; this, for instance, is the way you configure it. You download the binary and install it in /usr/local/bin, and then you run k0s install controller --single. The --single flag is so you can run both the Kubernetes control plane and the Kubernetes worker on the same node, and the -c k0s.yaml option points at the configuration file so that your tunnel interface is used. Once you have a k0s controller running, it is also configured so that it will shut down and come up properly after a reboot. If you were to talk to it directly on the node, you would see that the Kubernetes API server is actually running. But of course this is not very useful if you have to log into the server to make it happen. So the next thing is to build a service, started after all the other services on the system, that extracts the Kubernetes configuration into a file and copies that file onto our central server. This just creates a kubeconfig from that Kubernetes cluster and then copies it over here. So now we are on the main server, and the kubeconfig is in kubeconfig-obsidian; obsidian is the hostname of this device. Maybe we should look at all namespaces, otherwise we will not see the pods. Of course, you see the same pods here. Once the drive is configured, you can just boot it up anywhere, and it will automatically deliver its kubeconfig from the running system to here. And then you can go one step further; I think I only have a few minutes left anyway. Here I have already loaded that kubeconfig into Lens, so you can see that you can actually access the cluster through Lens, and the Lens-provided monitoring runs as well. If you look at the cluster here, you can see the usage of CPU, memory, pods and so on, and the graphs that go with it.
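The sequence I just walked through can be sketched as a bootstrap script. This is a sketch under assumptions, not my exact build notes: 10.8.0.1 stands in for the central server's tunnel address, admin for the remote user, and /etc/k0s/k0s.yaml for wherever you placed the configuration file with the tunnel API address. I write it to a file here rather than executing it, since the real commands need root on the device:

```shell
# Sketch of the device-side bootstrap (bootstrap-k0s.sh).
# 10.8.0.1, the admin user, and the config path are placeholders.
cat > bootstrap-k0s.sh <<'EOF'
#!/bin/sh
set -eu
# Fetch the k0s binary (the installer drops it into /usr/local/bin).
curl -sSLf https://get.k0s.sh | sh
# Register a combined controller+worker, pinned to our config file.
k0s install controller --single -c /etc/k0s/k0s.yaml
k0s start
# Once the cluster is up, export the admin kubeconfig and push it
# through the tunnel to the central server.
k0s kubeconfig admin > "kubeconfig-$(hostname)"
scp "kubeconfig-$(hostname)" admin@10.8.0.1:~/
EOF
chmod +x bootstrap-k0s.sh
```

On the central server you would then point kubectl (or Lens) at the pushed file, for example KUBECONFIG=kubeconfig-obsidian kubectl get pods -A, which is exactly the all-namespaces view from the demo.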
And then you could create a workload in there; just deploy any kind of Kubernetes workload you want to play with. So this is where we are at the moment. What's still open? First of all, I want to get to the point where this can all be done in a way that we just clone these images, run a simple script on them to configure the network credentials and the hostname, and then deploy them without ever having to log into the system: no SSH, no nothing. The other thing would be, on the other side, to build a CI/CD environment that lets us deploy to an arbitrary number of Kubernetes servers that are somewhere out in the field. We do not know where, and we do not want to know where; we just need to know the tunnel interface IPs, which the devices will tell us whenever they boot up, wherever they may be in the world. So I hope this was a little bit informational, a little bit instructional, and a little bit of fun for you. Thank you very much. Any questions? [Audience question.] This is something I still need to do; it is not there yet, but I'm planning on putting it up on GitHub, and after the show, in the next couple of days, I'm also going to provide the step-by-step list I made for myself to build this environment. Anything else? Yes? [Audience question.] The idea is that you mass-deploy workloads. Let's say you have 50 branch offices and they all need the same point-of-sale software. You could, of course, go to each of those edge sites and do this manually, and this is how a lot of companies still work: the technician drives to the place, installs the software, and makes things happen. But first of all, the technician costs money, quite a lot of money, actually. The second thing is that the technician is only available at certain times.
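The "clone and customize" step I just described could look something like this. Everything here is hypothetical: the template format loosely mimics a cloud-init/netplan-style user-data file, and in real use the file would live on the cloned image's boot partition rather than being generated in place:

```shell
# Stamp a per-device hostname and Wi-Fi credentials into a cloned
# image's first-boot configuration. All names here are placeholders.
HOSTNAME_NEW="edge-24"
WIFI_SSID="branch-wifi"
WIFI_PSK="change-me"

# Stand-in template; on a real image this file already exists on the
# boot partition and only the substitution step below would run.
cat > user-data <<'EOF'
hostname: @HOSTNAME@
wifis:
  wlan0:
    access-points:
      "@SSID@":
        password: "@PSK@"
EOF

# Substitute the per-device values into the template.
sed -i \
  -e "s/@HOSTNAME@/$HOSTNAME_NEW/" \
  -e "s/@SSID@/$WIFI_SSID/" \
  -e "s/@PSK@/$WIFI_PSK/" \
  user-data
```

The point is that this is the only per-device step: run it once against the cloned drive, and the device never needs a local login.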
So if something goes wrong, you cannot put a fix in place until the technician has time to drive there. And the third thing is that it is also a fairly hefty security risk, an operational risk in general, if you have somebody working manually on these devices. If you can automate it to the point where you really do not need to touch anything, you ship the device to the site, they plug it in, they boot it up, it automatically falls in line, and you deploy that point-of-sale software. And if something happens, for instance a major bug is found in that software, you simply send the update through the same mechanism. The larger your environment is and the more edge devices you have, the more money and time this saves you. The second thing is that you can build something that is reasonably cheap, so you can simply ship a replacement instead of keeping an expensive server in stock. Say branch office 24 needs one; you are not going to keep one in every branch office, but if you do it right, you can build these things for a couple hundred bucks, and you can actually have a second one on hand in case something goes catastrophically wrong. It's basically like your Wi-Fi router at home: if something goes wrong and the router gets destroyed, what do you do? You buy a new one and put it in. This is the same concept: build something that is, in a way, disposable, so you really do not have to worry about what to do when it dies. When it dies, we just replace it. I'm not particularly fond of this throw-away-everything society, but devices like these are reliable, and if something should happen at some point, it is probably easier for you and much, much cheaper to throw it away and simply put another device in its place than to have somebody drive out there and try to fix it. So, yes?
A different deployment versus k0s? I have not looked at that; there are certainly different ways out there to do this. What made me use k0s: first of all, I know Jussi Nummelin, who is running the k0s development, so it was a little bit easier; I actually had to ask Jussi two questions during the build because I got stuck in two places. The second thing is, of course, that the idea here is a concept. Not everyone is supposed to build it the same way, but it should show that you can build something so simple that, to go with the Apple principle, it just works; that is the idea here. If you want to do this with a different Kubernetes distribution, absolutely. The idea was essentially to show that with a little bit of effort you can build a device that does not require any kind of local knowledge or local skill set. [Audience question about connecting to the device directly instead of through a VPN.] That's correct, but there's a downside. If you go somewhere like, let's say, here, I would not be able to reach this device from the outside. It is behind the firewall of our hosts from OpenInfra, and they will certainly not let me connect it directly to the internet so that I can see an IP address. You would have to have some sort of NAT arrangement so the device would be reachable from the internet somehow. The idea behind the VPN is that a VPN works from nearly anywhere, unless the operator of the firewall actively blocks it. That was the idea behind it, and this is also the same problem our customer had: they had some sort of random Wi-Fi routers in their locations, and with those, unless you configure each and every one, you really have no way to get in. The tunnel is the only way to get the IP address out there and let you in. You could still do it with any other kind of methodology.
Thank you.