Thank you for joining me today at Kubernetes on the Edge Day. I'm Medrick, and I'm a PM at Microsoft. Today I'll be talking about Akri, an open-source project that my team and I have been working on over the past year, which tackles the problem of using non-Kubernetes devices with Kubernetes.

To get things started: have you ever looked at a device and said, "Gee, I wonder if I could run Kubernetes on it so it could help me run some of my workloads?" Speaking for myself, I ask this question at least twice a day, but as some of my colleagues have pointed out, just because you think you can put Kubernetes on something doesn't mean it's a good idea. Still, there was something in that question that put a sparkle in my team's eyes, and we went down the rabbit hole of looking at computing on the edge and the various devices and form factors that exist there. This led to the inception of Project Akri, and I'll walk you through our journey and thinking in this space.

As a bit of background, we started with the things we know. My team has been working on edge computing and IoT for a while now, and the one thing we can all agree on is that edge computing is difficult. Unlike the cloud, where amorphous resources can be easily spun up, torn down, allocated, deallocated, and reallocated, with uptime measured in as many decimal places as I want zeros in my salary, the edge is different. If the cloud is the modern day, then edge computing is the Wild West. The edge is a mixture of heterogeneous devices with different compute capabilities and connectivity profiles, which may or may not have peripherals and sensors attached. They might carry special compute accelerators, or they might be devices from computing's past. So the question becomes: what should and shouldn't run Kubernetes? To tackle this, we came up with two guiding principles for which devices should run Kubernetes.
First, we think the devices that should run Kubernetes are the ones used for general-purpose computing. For example, an unused on-prem server would be a great device for Kubernetes, but something like an IP camera running a small ARM chip, maybe not so much.

Our second belief is that a device should also be able to support Kubernetes. This one's a little more nuanced, but the key is that you can see support as two different things. The first is the physical limitations of the device. On-prem servers? You can probably run Kubernetes on them. Industrial PCs? Why not? But as we get to smaller and smaller devices, something like Murphy's law kicks in: even if you manage to get Kubernetes onto the device itself, there are so few resources left over that it becomes basically useless. The second is legacy or requirement constraints. You might have a device with plenty of compute that is practically or technically impossible to install Kubernetes on, because it's legacy hardware or there are technical restrictions that simply won't let you.

So that covers the devices that can run Kubernetes, but our focus is on the small or brownfield devices that don't meet these requirements. We think there is immense value in accessing and exposing these non-Kubernetes devices to a Kubernetes cluster. And this is where I'd like to introduce you to Akri. Akri is a Kubernetes-native project that exposes your edge devices as native resources to a cluster without having to install Kubernetes on them. All you have to do is provide a configuration that states what devices you want to find and what communication pattern they support, and Akri will handle discovering all of those devices and exposing them as Kubernetes resources in your cluster.
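To make that concrete, here is a minimal sketch of what such an Akri Configuration can look like, using ONVIF camera discovery as the example. Field names follow Akri's Configuration CRD as I understand it at the time of writing, and the broker image name is purely illustrative; check the Akri docs for the current schema.

```yaml
# Sketch of an Akri Configuration: discover ONVIF cameras on the network
# and deploy a broker pod for each one found. The broker image below is
# a hypothetical placeholder, not a real published image.
apiVersion: akri.sh/v0
kind: Configuration
metadata:
  name: akri-onvif
spec:
  discoveryHandler:
    name: onvif          # which discovery handler to use
    discoveryDetails: "" # handler-specific filters (e.g. IP/MAC allowlists)
  brokerSpec:
    brokerPodSpec:
      containers:
        - name: camera-broker
          image: my-registry/camera-broker:latest  # illustrative only
  capacity: 1            # how many nodes may use each device at once
```

Once this is applied, Akri's agent takes care of running the discovery handler, creating an Instance resource for each camera it finds, and scheduling the broker pod against it; when a camera disappears, the corresponding broker is cleaned up.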
Communication patterns here can be any way the device can communicate with the cluster. The most common examples are protocols such as ONVIF or OPC UA, but it can also be things like udev or USB discovery, or your own proprietary goodness. All the communication logic just needs to be wrapped up in a pod, and Akri will handle the rest for you. If you have a specific workload that you want to run against a specific device or class of devices, you can also specify this in the configuration, and it will automatically get deployed when the device comes online.

I'll give an example of where Akri might be useful: a smart store scenario. You, being the modern-day store owner, design a store with weight sensors that track the inventory of all the items, and you have a pod that can figure out how much inventory you have based on weight. If you were setting this up manually, you would have to configure each of these sensors individually yourself. With Akri, all you need to do is specify the protocol the sensors use and the magic inventorying pod that you have, and all your sensors will be discovered and brought online automatically. In general, going back to our principles: if you have any device that you don't want to, or can't, run Kubernetes on, but want to include in your overall solution, then Akri is the project for you.

This unfortunately brings us to the end of our lightning talk. But fret not: you can find out more at aka.ms/akri, and swing by the Akri channel on the Kubernetes Slack to come chat with us. We also have an Akri deep dive later today, so be sure to check that out too. Thank you for joining me, and I hope you have a great Kubernetes on the Edge Day.