All right, welcome everyone to our session about securely interacting with edge devices in Kubernetes. My name is Eugen, I'm a product manager at Microsoft, and my fellow dev Aditya could not be here today in person, but without further ado, let's get into it. I'll go over the problem of IoT devices at the edge, then briefly explain what Akri is and how it works, and then the problem with secret management at the edge. Then I'll talk about our solution proposal, show a quick demo of a proof of concept, and close with a couple more considerations about our solution.

As you may know, at the edge there's a heterogeneous ecosystem of devices. They all have different amounts of compute, so you may have servers and PCs as well as IoT devices that can act as controllers, sensors, and more. This IoT environment is constantly scaling up or down with devices, and the devices often depend on network availability as well. Since these IoT devices are too small, too old, and too locked down to run Kubernetes themselves, how can we manage and coordinate them from a cluster?

The solution for this is Akri, which stands for A Kubernetes Resource Interface. Akri is currently a Sandbox project, and it makes connections to IoT leaf devices via their protocols. Akri currently supports common protocols like OPC UA, ONVIF, and udev, but users can also use our template for writing custom protocol handlers. New devices are detected automatically, which makes scaling really easy, and devices that are taken offline or go down due to network availability will also automatically disappear. These devices are exposed as Kubernetes resources on your cluster, just like memory or CPUs, and the Akri brokers allow you to use the signal from these devices in your applications. Workloads can be assigned to specific devices or groups of devices, even if they're attached to other nodes.
This means you can get direct signal by running on the node closest to the device, which minimizes latency, and if you have multiple devices connected to the network, all the clusters and nodes can see the Akri devices. And if a node or cluster goes down, the others can continue to pick up the work for you. As for developers, Akri makes it easy for you to deliver containerized workloads meant for IoT devices. You don't need to code for each and every specific camera; you can write more generic code for each type of device or group of devices.

So here's a brief overview of how Akri works. There are five main components that you should know. The first is the Akri Configuration. This is a CRD which tells Akri what kind of device to look for, and you can also tell it to deploy a broker for those devices. Then you have the discovery handler, which uses its protocol to find the devices. It informs the agent, which runs on all the worker nodes, and the agent connects to the kubelet according to the Kubernetes device plugin framework to expose availability changes to the Kubernetes scheduler. Then the Akri Instance is a CRD created by the agent to track the availability and usage of each device. Finally, the controller, which runs on the master node in the cluster, deploys the brokers that connect to the devices and utilize them, and it can also handle any node disappearances by modifying the relevant Instances.

So this is the workflow of how Akri works. The cluster operator first applies the Akri Configuration. Then the akric, which is the short name for the Configuration CRD, is created and detected by the agent. The discovery handler specified in the Configuration is deployed, goes and finds those devices, and then tells the agent about the discovered devices. Then the agent creates an Instance CRD, akrii, for each discovered device.
The Akri controller detects the changes in the Instances and schedules broker pods for those devices. Then the broker is allocated, the Instance is updated with the reserved slot, and the broker pod begins to run and establishes a connection with the device.

So there are several challenges with credential management at the edge right now. Currently, credentials passed into Akri would be plain text, which is not very secure, since attackers could easily read or spoof them. Secrets might also get changed, and we need to be able to monitor the updates and pass in the updated credentials. Devices also have heterogeneous requirements for authentication and for storing secrets. For example, OPC UA uses certificates, whereas ONVIF may use connection strings like URLs.

So our proposal is to have the Akri agent retrieve the secrets or other data and pass it to the discovery handler as part of its discovery request. We wanted to have a native Kubernetes experience, and we plan to support both Secrets and ConfigMaps to enable the credential management. On the right you can see our example Configuration YAML; we have added a new field called discoveryProperties, which can refer to ConfigMap data, Secrets, or plain text. At the top you see our normal Configuration YAML with the spec and the discovery handler, and in the details section you can do filtering, like excluding certain IP or MAC addresses, et cetera. Then in the discoveryProperties section, where we'll be passing the secret data, it's in the form of a list of key-value pairs. These get initialized as part of the Configuration by the user, and these properties can apply to a single device or a group of devices, so that we can provision all the cameras in a particular area the same way. There's also an optional parameter, which defaults to false, so if the key doesn't exist in the Secret or ConfigMap, the Configuration deployment will fail.
But if optional is set to true and the key doesn't exist, the agent will just not add that entry to the list passed to the discovery handler, and the deployment will still succeed. So that's up to the user.

Now let's look at our new workflow; I've highlighted the changes here. Before everything, it's up to the cluster operator to set up the secrets and make sure they're all properly provisioned. Then the cluster operator applies the Configuration, and when the akric is created, the agent will pull down or update the necessary secrets, since it has the API server access to do that. Then the discovery handler is deployed, gets the credentials from the agent, and verifies that it can connect. The rest of the workflow is the same. As of now, the agent can't monitor changes in the secret data after the Configuration is deployed, so in the case that it is changed, currently the operator would need to manually redeploy the Configuration YAML.

So I'll be showing a quick proof-of-concept demo of how this all works. Let's look at the Configuration YAML. In the first half we have the secrets: we have the demo auth secret, which is a Kubernetes Secret, and we used stringData to pass in the username and password. We'll be using an ONVIF device today; for OPC UA you could pass in certs and keys, for example. Then you can use the endpoint reference for the ONVIF camera to get the device UID for the username and password lookup. On the right we have the actual Akri Configuration YAML, where we specify the protocol for the discovery handler. Again, we're using ONVIF, and you can filter out any cameras you don't want to discover through the IP address, MAC address, or URL strings. Then in the discoveryProperties we specify the username and password keys. At the end, we specify that we want our broker container to be deployed for the devices that are discovered.
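To make the demo setup concrete, here is a rough sketch of what the Secret and Configuration YAML described above might look like. This is a reconstruction, not the exact file from the talk: the secret name, key names, placeholder broker image, and the placement of discoveryProperties follow the proposal's conventions and may differ from the released schema.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-auth-secret
type: Opaque
stringData:                       # stringData lets us write values unencoded
  device1_username: admin
  device1_password: admin
---
apiVersion: akri.sh/v0
kind: Configuration
metadata:
  name: akri-onvif-demo
spec:
  discoveryHandler:
    name: onvif
    discoveryDetails: ""          # IP/MAC/URL exclusion filters would go here
    discoveryProperties:          # new field from the secret-management proposal
      - name: device1_username
        valueFrom:
          secretKeyRef:
            name: demo-auth-secret
            key: device1_username
            optional: false       # fail the deployment if the key is missing
      - name: device1_password
        valueFrom:
          secretKeyRef:
            name: demo-auth-secret
            key: device1_password
            optional: false
  capacity: 5
  brokerSpec:                     # deploy a broker pod per discovered camera
    brokerPodSpec:
      containers:
        - name: camera-broker
          image: example.com/camera-broker:latest   # placeholder image
```

The `optional: false` entries reproduce the fail-on-missing-key behavior described above; flipping them to `true` would let the deployment succeed with the missing entry silently omitted from the discovery request.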
So first I can verify that my camera is working properly by using the ONVIF Device Manager on my host. At first you can see that we can't watch the live stream because it requires authentication, but when I log in using my credentials, which are admin/admin in this case, I can see the live stream properly. I have a K3s cluster running on my device here. Usually you can deploy Akri really easily through Helm charts, but for this demo I'm running it locally, so each terminal is running the agent, discovery handler, and controller. I've already applied the Configuration and secrets YAML, so now if I do kubectl get akric, you can see the Configuration CRD and the capacity, which I set to five cameras. Then you can do kubectl get akrii, and you can see that the Instance has been created by the agent for my discovered device. When I do kubectl get secrets, you can see my demo auth secret, and here I'm just inserting a screenshot of the description so you can see the key-value pairs. From there, if I look at the pods, you can see that my broker pod is running for that device.

So let's look at the logs of the broker. Here we can see that it's properly receiving the frames from my camera: it's accessing the RTSP stream, and for the demo we've logged the source URL of the stream, which it's accessing with the username and password, as you can see in the URL. From here you could have an app, something like a web application, running on your cluster, and you can have that connect to the broker so that you can use the video stream in your application.

So there are a couple more considerations for the solution that I want to talk about. The first thing is, obviously, you want to use the best practices for Kubernetes Secrets.
Akri accesses the Secret objects via the Kubernetes resource API, and it relies on the cluster owner to encrypt Secret objects and arrange access permissions properly, to ensure that Secret objects are secured and to reduce the risk of accidental exposure. Depending on the cluster configuration, you may also store sensitive data in the cloud, and you may need to pull the secrets from a cloud-backed key management service like HashiCorp Vault, AWS, Azure Key Vault, or whatever it may be. In that case you can use the Kubernetes Secrets Store CSI Driver, which allows you to mount the existing key-management-service-backed secrets or certs as native Kubernetes Secrets, and then you can use them in our discoveryProperties just like before. This allows our solution to be vendor agnostic.

Some other things on our roadmap for secret management: improving the agent so that it can continually monitor the secret data, so that anytime the data source is changed, it will reissue the discovery request. And if the mandatory properties are ever deleted from the data source, the Akri agent should be able to monitor for that as well and revoke the discovery request. We're also looking at how we can organize our secrets a little better within the discoveryProperties section. Currently, in this demo, I showed mapping the ONVIF credentials to each device by key name, in the format of the device ID appended with the credential field, so deviceID_username or deviceID_password. Eventually we might want a list that contains an array of objects, where the device ID refers to the username and password keys, which point to the actual Kubernetes Secret information, or to certs and keys as well. Eventually we also want to modify the discovery handler and broker to have several stages of authentication, conditional on what we pass in.
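As a purely hypothetical sketch of how such staged credentials could be stored with the current key-name convention (the tier names and values here are invented for illustration, not part of the proposal):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: camera-tiered-creds
type: Opaque
stringData:
  # read-only tier: enough to subscribe to sensor/video data
  device1_read_username: viewer
  device1_read_password: viewer-pass
  # admin tier: required for privileged operations such as firmware updates
  device1_admin_username: admin
  device1_admin_password: admin-pass
```

A discovery handler could then be handed only the read tier by default, with the admin tier referenced in discoveryProperties only for workloads that actually need it.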
This would allow us to have separate sets of credentials, each with different access rules. For example, a sensor-read credential can only read sensor data, and a separate set of credentials would be needed to update firmware. So this gives many more stages of security.

And this is just one of many ways to do this. We showed a more Kubernetes-native fashion here, but there's a plethora of CNCF projects that could simplify this. For example, Dapr has a secret management feature, and it leverages a lot of the same native components and special interest group work. So one could try to spin up a separate pod from Dapr that manages secrets and then passes them into either the discovery handler or the agent. In general, we wanted to keep the solution as open as possible, so that individual operators can customize it as needed.

You can read more about our secret management proposal in the docs PR, which is linked here. You can also take a look at the open PR in our GitHub for the implementation, and we're always open to feedback and review, so feel free to take a look and comment on the PR. We're planning to have a new release this week, so keep an eye out for that; it will include lots of bug fixes and improvements to the udev discovery handler, among other things. You can also join our Slack channel to ask questions and stay up to date with Akri, and our community meetings take place on the first Tuesday of every month at this time. And with that, I'll open it up to questions. If anybody's got a question, let me bring in the mic.

Audience: Hi, would it be possible to also integrate something like industry cameras, which are usually used as USB cams?

Eugen: Sorry, what kind of cameras?

Audience: Industry cameras.

Eugen: USB cameras?

Audience: Yeah.

Eugen: Yes, yes. You can use our discovery properties in a similar way to pass in the authentication, and from there you can just use our udev discovery handler to connect to the USB camera.
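For the USB-camera case from the question, a udev-based Configuration might look roughly like the following. This is patterned on the Akri udev examples; the rule matches Video4Linux device nodes such as /dev/video0.

```yaml
apiVersion: akri.sh/v0
kind: Configuration
metadata:
  name: akri-udev-video
spec:
  discoveryHandler:
    name: udev
    # the udev rule below matches Video4Linux camera nodes (/dev/video0, ...)
    discoveryDetails: |+
      udevRules:
      - 'KERNEL=="video[0-9]*"'
  capacity: 1   # one workload per camera at a time
```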
Audience: Well, okay, thank you, that was a great talk.

Eugen: All right, thank you everyone. Thank you.