All right, all right. So, yay, I actually have this problem all the time. I teach, no, don't clap, we don't have time. I teach a class at Boston University, and I have this issue every week: every single one of the students is a Mac user. There are like two Windows users out of 30 kids. I should have known, but it's KubeCon, I mean, come on, Linux.

So my daughter drew this for me, and this is all the stuff that's been going around my head for the past year. This talk is about everything I've been working on over the past year. It's a lot of different stuff, maybe too much, but in some way it all comes together, and that's what I want to talk about, because it's all about Kubernetes, because everything we do is cloud native now. I've been working with Kubernetes since 2015. I knew absolutely nothing when I started, but I've worked my way up. I'm a principal software engineer in emerging tech at Red Hat, and I always say that I just glue things together: I take different tools, usually containerized, and I figure out new ways to use them or new projects to bring together. That's kind of my specialty.

I've been working a lot with the edge, and I've seen three main areas that really don't seem like solved problems. I work with some customers and a lot of solutions architects, and these three areas keep coming up: how do you update your system at the edge, how do you update your workloads running at the edge, and observability. I've been working on all of these separately, but I'm going to attempt to show you a bit about putting them all together.

We all know that open source has paved the road for edge computing, and that Kubernetes is the backbone of the internet. That's what I tell my kids when they ask me what I do. But Kubernetes at the edge, or edge workloads in general, have challenges that you don't have in traditional data centers, and we all know this: limited hardware, remote locations.
The things I already mentioned: how do you scale, how do you update? The main idea is that your edge operating system is best if it's mostly immutable and mostly read-only. If you think about how containers took over the world years ago, it's the same idea: everything you need for your workload is baked into the container image, and extending that to the edge works really well. At Red Hat we've been doing this for years with rpm-ostree. If you remember, CoreOS joined Red Hat some years ago, just about when we were updating OpenShift to OpenShift 4, and they had their Container Linux operating system; and Colin Walters, who's like my hero at Red Hat, he's amazing, had Project Atomic. They were both very similar ideas: both ostree-based, container native, optimized for running containers, immutable, and mostly read-only except for the couple of directories where you can save your state. This works best for the edge, and we used it as the host system for OpenShift 4, and it has been that way for many years.

But some exciting work, the stuff I'm most looking forward to working on this year and in the coming months, is really extending this concept of a containerized operating system to the edge. There are some projects you might want to check out, like Universal Blue, and there are a few companies doing this besides Red Hat, but if you're in the market for an operating system, might I suggest Red Hat Device Edge? We recently rolled it out, and it is just that: an rpm-ostree-based operating system best suited for edge management.
The thing I'm really looking forward to telling you is that when you treat an operating system image like a container image, it opens up all of the container and OCI tools for updating your operating system, such as Containerfiles or Dockerfiles. With some rpm-ostree magic, and there's a new project, bootc, that you might want to check out too, you can set up a Containerfile, start from your operating system image, make some updates, and build a new commit.

Where Kubernetes comes into this, I'm going to try to show this, I don't know if we'll have time, but nope, never mind. We're going to have to meet elsewhere, so I'm just going to tell you about it. I have two minutes left, thanks Benny. Colin has a demo where he has an auto-update systemd service on his edge host, and he has completely locked it down: he's removed sudo, cut off SSH, and there's just a simple edge workload running, a web server. What that leaves is a completely locked-down edge device that gets its updates from a container registry.

Later today I'll be talking about Sigstore and digital signing, and the demo was going to show a private Sigstore stack running on Kubernetes. You build your operating system image through whatever CI/CD pipeline you have, you digitally sign it with Sigstore, and you can set a policy on your edge device so it will only pull down images that were signed by the identity and the service account that you expect. The way rpm-ostree works, you pull down the image and it automatically updates your system and boots into the new operating system. It's just as if you're starting up a new container, it's super cool: a bootable container image with the kernel inside.

Workload updates are also a hairy subject at the edge. Everybody does it a different way, but one thing is for sure: it has to be automated through a central location.
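As a rough sketch of that Containerfile workflow, this is what an OS update build can look like. The image name and the package are placeholders, not real artifacts; any bootc-compatible base image works the same way:

```dockerfile
# Start from the current OS image, exactly like an application build.
# quay.io/example/edge-os is a hypothetical image name.
FROM quay.io/example/edge-os:latest

# Layer in a package and a config file for the edge workload
RUN dnf install -y nginx && dnf clean all
COPY edge-web.conf /etc/nginx/conf.d/edge-web.conf
```

You build and push this with podman like any other image, and the device can then rebase to it, for example with `bootc switch` or `rpm-ostree rebase`, and reboot into the new deployment.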
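The pull policy described above can be expressed in the containers-policy.json format that the container tools honor. This is a hedged sketch, with the repository, OIDC issuer, email, and key paths all made up: the device rejects everything except images from one repository carrying a Sigstore signature from the expected identity, verified against your private Fulcio and Rekor instances:

```json
{
  "default": [{ "type": "reject" }],
  "transports": {
    "docker": {
      "quay.io/example/edge-os": [
        {
          "type": "sigstoreSigned",
          "fulcio": {
            "caPath": "/etc/pki/fulcio_v1.crt.pem",
            "oidcIssuer": "https://oidc.example.com",
            "subjectEmail": "pipeline@example.com"
          },
          "rekorPublicKeyPath": "/etc/pki/rekor.pub",
          "signedIdentity": { "type": "matchRepository" }
        }
      ]
    }
  }
}
```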
So that's where Kubernetes definitely helps. You can take your automation platform of choice, I'm familiar with Ansible Automation Platform, and it's all through container images again. Where it gets difficult is when you start to introduce AI, and we are definitely all starting to introduce AI: inferencing at the edge, training your models at the data center. That's where you might want to step it up a notch and use Kubernetes-native workflows at the edge, and that's where these low-footprint Kubernetes distributions fit in. Red Hat is currently rolling out MicroShift, a lightweight, enterprise-ready Kubernetes distribution, super cool, and Ricardo from Lockheed Martin will be talking about it this week, so I definitely suggest you check that out. But there are others, and being able to seamlessly use your Kubernetes language from the edge to the data center, from your CI/CD pipeline, and roll it out at the edge is important. Sure, if your edge workload can run as a few Podman or Docker containers in a systemd unit file, great, that's probably best. But if you need more, then you might want to check out MicroShift.

I'm out of time, but the last thing is observability. The key to observability at the edge is the OpenTelemetry project and the OpenTelemetry Collector. The Collector enables a single point of connection from your edge workloads to a gateway on your central observability stack, and it works really well. That's what a lot of our customers and solution architects are working on: bringing solutions that are really centered on the OpenTelemetry Collector and the project.

I have a few more talks today, on Kepler at Observability Day and then at OpenShift Commons, so maybe you'll catch me there. Please ask me questions about anything you saw here and we can talk more. I apologize. Yep, thanks.
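The "few Podman containers in a systemd unit file" pattern mentioned above can be sketched with a Quadlet file, which recent Podman versions turn into a systemd service automatically. The image name, port, and file name here are hypothetical:

```ini
# /etc/containers/systemd/edge-web.container
[Unit]
Description=Edge web workload

[Container]
Image=quay.io/example/edge-web:latest
PublishPort=8080:80

[Install]
WantedBy=multi-user.target
```

After a `systemctl daemon-reload`, systemd generates and manages an `edge-web.service` for you, so the workload survives reboots with no hand-written unit file.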
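The Collector deployment described above, one local collector per edge node forwarding everything to a central gateway, can be sketched with a minimal Collector config. The gateway endpoint is a placeholder:

```yaml
# Edge-node OpenTelemetry Collector: receive OTLP locally,
# forward to the central gateway (hypothetical endpoint).
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlp:
    endpoint: gateway.observability.example.com:4317
    tls:
      insecure: false

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      exporters: [otlp]
```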