I will talk about OpenShift plus Fedora plus Virtual Kubelet plus Raspberry Pi plus Podman. I'll ask if people are familiar with the components; if so, we'll just skip ahead. And I really hope the demo will work this time. Lesson learned: when doing demos that need the internet, better to request a room with windows, where you can at least tether some internet. Cool.

So, agenda: a short introduction, a bit of prehistory, an explanation of what the components mean in the context of this talk, a demo, and questions.

So, I'm MJ. You can try to pronounce my name, but good luck; I think Czech people might be luckier in this area. I've worked for Red Hat for three years, mainly on cloud managed services, Azure, and so on.

So, how did all this start? There was this event, Red Hat Tech Exchange, and it had a field hackathon. One of my colleagues came and said: hey, I have this top-secret customer, they want to hook a bunch of ARM workloads up to a Kubernetes cluster, to basically run unit tests on ARM hardware. Can we do it? Well, we had a hackathon, five hours, a few beers, and we managed to get it running in the hackiest way possible, with three lines of code, using the Virtual Kubelet project. And we won. After that, my conclusion was that I needed to formalize it a bit better, because winning something with three lines of code is fine, but it didn't feel right. And I always wanted a side project: a screen at home on the wall showing something, scheduled from a Kubernetes cluster, because I have one. So I ended up with this.

So, who knows what Virtual Kubelet is? Cool. Okay. To simplify: Virtual Kubelet is a framework which helps you masquerade as the kubelet itself. The code base basically lets you present yourself to the Kubernetes cluster as a node. And what happens under the hood is up to you; you can do basically whatever you want.
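To make the "presents itself as a node" part concrete, here is a minimal sketch. This is not the real virtual-kubelet API: the type and field names below are simplified stand-ins for the Kubernetes Node object, invented for illustration; the point is only that the advertised capacity can be hard-coded, since nothing real has to back it.

```go
package main

import "fmt"

// FakeNode is an illustrative stand-in for the Kubernetes Node object a
// virtual kubelet advertises. Real code would use k8s.io/api types.
type FakeNode struct {
	Name     string
	Labels   map[string]string
	Capacity map[string]string // e.g. cpu, memory, pods
}

// newEdgeNode builds a node whose capacity is simply hard-coded; the
// framework does not require any real resource accounting behind it.
func newEdgeNode(name string) FakeNode {
	return FakeNode{
		Name: name,
		Labels: map[string]string{
			"kubernetes.io/arch": "arm",
			"type":               "virtual-kubelet",
		},
		Capacity: map[string]string{"cpu": "4", "memory": "1Gi", "pods": "10"},
	}
}

func main() {
	n := newEdgeNode("raspberry")
	fmt.Println(n.Name, n.Capacity["cpu"]) // prints: raspberry 4
}
```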
So, as it's shown here, it has the basic functionality of the kubelet, but at the same time it does not have all the constraints a normal kubelet has. You don't need networking, you don't need resource constraints. It can be basically anything that can run a Golang binary, from your mobile phone to a Raspberry Pi or any other edge device.

Okay, going further. The main things Virtual Kubelet can do, what the framework basically hands over to you: you can create and delete pods, get status, get logs if you implement that, report capacities; all the bare minimum you would get from a normal kubelet. And you can bring your own network, you can hook up to existing networks, all those things. Cool.

And this is how the Virtual Kubelet interface looks. For anyone who can code, it should be straightforward. As long as you can implement these methods on your edge device, or any hardware really, then as soon as you create a pod on the Kubernetes cluster, the create-pod method will be called by the Virtual Kubelet framework, and you can basically blink lights on pod creation, any of those things.

Okay. Anybody who doesn't know what this guy is? Yeah. Okay. That's a Raspberry Pi, generation three. I don't use the Raspberry Pi 4 just because Fedora ARM does not yet support it, but I hope that will come.

Okay, Podman. After a few days here, we don't need to introduce this guy to you; we've had so many talks about it. I tried to use Docker, and it basically bricked my devices all the time; I was not able to do anything inside a container. I tried to use CRI-O; there were some package conflicts. Then I looked through our internal mailing list, where one guy was asking: hey, I want to run containers on my Raspberry Pi. A random reply appeared: hey, did you try Podman? dnf install, and it worked out of the box. Next.
And I use the varlink Podman API for communication, for now; I will be happy to get rid of it.

So, Fedora ARM: again, it's basically the Fedora version compiled for ARM. Raspberry Pi and Fedora are not the best friends; Debian works better, but I'm a Fedora fanboy. I know all the commands, I know how everything works, so if something doesn't work, I know where to look. So I went with that. And as the Kubernetes distribution I use OpenShift, because that's basically my bread and butter each and every day. I have clusters running, I can spin one up for my personal work; it's just easy.

Okay, so now the fun part starts. If you look at Virtual Kubelet, it can work in two modes. Pull, where you run the virtual kubelet itself on an edge device, somewhere in China or wherever, and it just goes to your cluster with enough credentials and pulls work, saying: hey, I'm node one, give me something, give me a workload. This is very good when your workloads are distributed and you don't have any service discovery: you don't know where the devices are, they just come up with random IP addresses and random network connections and fetch their work. The other one is push, which most of the providers use, where you run the virtual kubelet as a pod on the cluster itself and it goes out to the edge devices via SSH, an API, or any other protocol. In this case you have to know where to go, so you need some kind of service discovery, or hard-coded networking, or something like that.

So, what I did: I used mode one, pull. I'm running everything on my Raspberry Pi, because when I connect it to my Wi-Fi at home it gets a new IP address each and every time, and I just didn't bother to implement any service discovery or any of those things. So this guy here, if it's still running... yes. It's running the virtual kubelet binary; it's like 37 megs.
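The pull mode just described boils down to a polling loop. The following is an illustrative sketch, not the project's code: the Cluster interface and the function names are invented for the example, and a real implementation would list or watch pods via the API server using the node's kubeconfig.

```go
package main

import (
	"fmt"
	"time"
)

// Cluster abstracts "ask the API server for work". In real pull mode this
// would be a list/watch against the API server with the node's credentials.
type Cluster interface {
	PendingPods(node string) []string // pods scheduled to this node, not yet running
	MarkRunning(pod string)           // report status back to the cluster
}

// pollOnce is one iteration of the pull loop: fetch pending pods assigned
// to this node, start each one locally (e.g. via Podman), report back.
func pollOnce(c Cluster, node string, start func(pod string)) int {
	pods := c.PendingPods(node)
	for _, p := range pods {
		start(p)
		c.MarkRunning(p)
	}
	return len(pods)
}

// fakeCluster is an in-memory stand-in for the API server.
type fakeCluster struct{ pending []string }

func (f *fakeCluster) PendingPods(string) []string {
	return append([]string(nil), f.pending...) // copy, callers may mutate us
}

func (f *fakeCluster) MarkRunning(pod string) {
	for i, p := range f.pending {
		if p == pod {
			f.pending = append(f.pending[:i], f.pending[i+1:]...)
			return
		}
	}
}

func main() {
	c := &fakeCluster{pending: []string{"default/example"}}
	for { // the device keeps poking the cluster
		if pollOnce(c, "raspberry", func(p string) { fmt.Println("starting", p) }) == 0 {
			break
		}
		time.Sleep(10 * time.Millisecond)
	}
}
```

The appeal of this model is exactly what the talk says: the device only needs outbound connectivity and credentials, so random DHCP addresses don't matter.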
This guy is running Podman with varlink as the container runtime, and it hooks up to my OpenShift cluster in East US, somewhere in an Azure data center. And I have a screen connected, basically just to show stuff. Cool.

So, with no more waiting, let's see what we can do. Here I'm on the Raspberry Pi itself. So, let's show: that's Fedora 30; again, 31 doesn't work on this one. And I have Podman installed. Let's watch for containers and see that nothing is running here. And this is my OpenShift cluster. If I do oc get nodes, I see the default nodes connected to it, and the last one is the name of this Raspberry Pi. It has a kubeconfig, so each and every time I plug it into power, it just comes up, uses the kubeconfig, registers to the cluster, and appears as a node, as an agent. And everything here, like version and roles, is at the moment simply hard-coded in the back end; you can do whatever you want there.

So, I have this first container, which is BusyBox. It does not do much; it basically just sleeps for 30 seconds. So let's try that: pod example. Now it goes to the East US data center, which schedules the pod. The Raspberry Pi keeps poking the cluster, finds the pending pod that it is supposed to run, and schedules it just like a normal kubelet. And we see it basically sleeps for 30 seconds. So I still have to keep talking for seven seconds until it dies... and let's hope it does. It should have gone away. And we can see that the exit code was zero, because the pod died gracefully, and that got updated in our Kubernetes cluster. So: bare minimum functionality.

So, I have a different pod, which is more fun. I have implemented the minimum functionality where you can mount a host path and expose some stuff. I have a Chromium browser built for ARM; I built it as a container on the same Raspberry Pi.
For the browser pod, I'm mounting some volumes, just for hacking purposes, and I'm hooking into the X session of the logged-in user, which is here, so I can show something. And I'm passing parameters for what I want to open. So: create -f examples/browser. And I really hope that stuff shows up here now. It takes a while... and we have a dashboard. We can put anything here, basically; you can do any fancy stuff. For those who can't see it: it just popped up there. So if I now delete that stuff, it should go down. I hope. See: delete. And it just went away.

So, in theory, if you run this virtual kubelet on the node itself as a binary, it could use SSH, an API, or any other protocol. And if somebody were doing something like Fedora CoreOS, where edge nodes could update themselves, you could have an ecosystem where the node just needs credentials from some authority and could do basically anything your clusters can do. Cool.

So, what's next for this hacky project? I think one more weekend is needed. I have zero tests; it was all hacking, and most of the time was spent basically trying to figure out how the varlink API works; there are some edge cases which took me a while to debug. Move to Podman API v2, which would be a Docker-compatible API; that enables a better ecosystem, more libraries, and I think makes things easier. Reuse code from podman play kube and podman generate kube, because right now I have my own small converters; since that code is already implemented, I just want to bring it in, and that will enable so many features. Remote Podman management: I want to run this stuff on my cluster and just hook the API up to the nodes themselves, so I don't need to scp my virtual kubelet binary over each and every time I change something. Secrets and config maps; volumes are already supported, so we could do those minor things. Exec, attach, and logs. And a better example, because in the current example I have a smaller, 3.5-inch screen.
That screen doesn't work with Fedora; it needs driver porting, and when I did it once, the next upgrade basically broke it. And just ecosystem, and better bootstrapping for the node, so you can just bake the card on your laptop, plug it in, and you're done.

Cool. So, that's basically it. This is where you can find me. If you have any questions, if you want to try it, want to play with it: the repository is under the virtual-kubelet organization, slash podman. It has a bare-minimum README and all the code, so you can try it, pull it, check it out. And, yeah.

(Audience question.) Yeah, I don't know at this point in time. So, what's happening here: I honestly wouldn't say the full spec, because it's a bare minimum. I think it takes more when you start really running things; again, I would not be able to say for metrics or something like that. It depends on the implementation, on how your code does it, because all of that is in your control. I used to run this dashboard at home and I basically dropped the cluster for two or three days, and as long as Podman is running, it runs. You can implement the virtual kubelet code itself to cache some of the data and just spin up from the cache when it loses connectivity; as long as it has the data, it can do that. So, currently, what I'm doing: when Kubernetes asks for status, like, what's the state, the node has to return the data. So I just marshal the whole Kubernetes pod spec and put it into the annotations of the Podman pod. Then it reads it back, basically remarshals it into the data, and that's all. So it's very, very specific to the implementation itself, I would say.

Cool. And we have a question. (Why not the kubelet itself?) The kubelet is too heavy; it has dependencies like networking and storage stuff. It's just way, way heavier and more complex for a use case like this. And, as I said, I wanted to try this project.
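The annotation trick described above (serialize the Kubernetes pod spec, store it on the Podman pod, read it back when Kubernetes asks for status) can be sketched like this. The annotation key and the tiny PodSpec type here are hypothetical; the real code would marshal the full Kubernetes pod object.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PodSpec is a tiny stand-in for the Kubernetes pod spec that must be
// reconstructable later.
type PodSpec struct {
	Name  string `json:"name"`
	Image string `json:"image"`
}

// specAnnotation is a hypothetical key; the real provider's key may differ.
const specAnnotation = "virtual-kubelet/pod-spec"

// stash marshals the spec onto the Podman pod's annotations, so the
// provider itself can stay stateless and survive restarts.
func stash(annotations map[string]string, spec PodSpec) error {
	b, err := json.Marshal(spec)
	if err != nil {
		return err
	}
	annotations[specAnnotation] = string(b)
	return nil
}

// recoverSpec unmarshals the spec back when Kubernetes asks for status.
func recoverSpec(annotations map[string]string) (PodSpec, error) {
	var s PodSpec
	err := json.Unmarshal([]byte(annotations[specAnnotation]), &s)
	return s, err
}

func main() {
	ann := map[string]string{}
	stash(ann, PodSpec{Name: "example", Image: "busybox"})
	s, _ := recoverSpec(ann)
	fmt.Println(s.Name, s.Image) // prints: example busybox
}
```

The design choice here is that Podman's own store becomes the source of truth, which is why the dashboard kept running even with the cluster gone for days.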
Anyway, all the bits and pieces just fell into one place, into one bucket, and I thought: okay, let's do it.

(Audience question about Fedora 31.) I installed it, it didn't work, so I swapped the card back to Fedora 30 and continued with my weekend. There's basically a note on the Fedora ARM page saying: we don't support it until the minimum hardware is supported, like the video outputs and all this stuff. There are some mailing list threads flying around about that, I think. But again, as my first slide implied, I have no idea what I'm doing; I just needed to complete it, show it, and be done.

(Audience question about next steps.) Yeah. So now, after Scott's talk, I saw that there's a lot of code base I can reuse, so I would do that. I would start with running the virtual kubelet on the Kubernetes cluster itself, not on the edge device, basically making it a push model, and potentially have some shim layer for discovery. And the code itself is a mess; the more you see, the more you learn, and you look back at it like: no.

If nothing else... okay, one more. (Audience question about Fedora IoT.) No, not yet. That, again, is a learning from these two days: I should. It's one of those things where you Google "Fedora ARM", you get Fedora for ARM, and you go with it; you don't Google "Fedora IoT". I think we need to change that, or buy some ads on certain pages, to point people in the right direction. Cool. Thanks, everybody.