Hi, this session is presented by the Kubernetes IoT Edge working group. It's about using Kubernetes for edge applications. I'm Steve Wong, a tech lead of the working group. I work on Kubernetes and a few other open source projects as an employee of VMware. Hey, my name is Dan. I'm an engineer at Red Hat. I've been doing a lot of IoT and edge computing in the last couple of years, and I'm also a lead of this working group. We'll give contact information and a link at the end, but here's the agenda. We're going to start with a quick overview of how exactly we define edge applications for the purposes of this talk. Then we'll move on to an introduction to some techniques and open source tools that are really useful when using an event driven architecture at the edge. Hopefully you'll like this talk, and if you do, we'll wrap up with details on how you can become a member of the Kubernetes IoT Edge working group, where we host ongoing discussions on subjects like this talk. For the scope of this talk, when we say edge application, we're not confining ourselves to just the software that runs on individual edge devices. Sure, there's software there, but if you're interested in getting Kubernetes involved, we're going to assume that you've got multiple devices involved but are also running containerized software at some location, perhaps multiple locations: a high level gateway, regional tiers, or even a global tier. Our definition of an edge application is the big picture, where you've got software interacting from various locations simultaneously. So if this is what you're setting out to deploy and manage, you have a need for data and control plane communication from edge devices up to higher level tiers, and you might also want support for edge node to edge node interaction as well. A lot of people have fallen in love with Kubernetes lately, and it is a great tool for orchestrating containerized apps, but it's also extensible as a control plane. 
The fact is that when it comes to some edge use cases, the devices simply have too little resource to run software in a container or as a Kubernetes worker node. Yes, if you've got Pis with four gig or now eight gig of memory, these are quite capable and you could turn them into Kubernetes worker nodes. But there are a lot of far tinier devices, like Arduinos, where memory is measured in low single digits of megabytes. We're going to talk about a technique that can invite the little devices to the party while still supporting containerized software running in your higher level tiers simultaneously. So in a simple form, an event driven architecture looks like this: an event is a piece of information, often used to communicate facts like measurements, or commands like statements of intent. Functions and services consume events, and it's up to these applications to decide what to do with an event. Events can be ignored, forwarded, stored, or transformed. In a time series of events, whether the events are measurements or commands, a newer one might make an older one irrelevant if a buildup occurs somewhere in a queue. Analysis of events might emit new events based on a transformation. In a distributed system, an event stream is a communication fabric, and when the apps and event flows are viewed together, this is actually a programming model. Event driven architecture can be helpful because it encourages some good program development practices: loose coupling, independently maintainable components, and separation of concerns. A distribution layer can help organize things and maybe even offload things you might otherwise have to write yourself. An example here is a pub/sub broker eliminating the maintenance of a distribution list from the duties of an event publisher. Here's a metaphor: this is how a restaurant often works in the real physical world. There are various people involved shouting out combinations of desired state and notifications of state changes. 
You listen to what concerns you; you ignore the rest. So in this example, I've got a customer who shows up asking for a table for one for lunch, eventually getting assigned to a table, making an order, and the waiter passing along details on that order to the kitchen. Let's take a look at an alternative flow of what a restaurant would look like with a microservice implementation. Yes, I think you could get it to work, but at what long term cost? If the menu or table layout changes, what things have to get touched? Understand that at many edge locations, devices often have various uncoordinated life cycles, and business operations might change, maybe even seasonally. The loosely coupled nature of event driven architecture, with independently maintainable components and separation of concerns, can really be attractive at the edge. By the way, I want to shout out to Simon Aubury of ThoughtWorks for coming up with this nice restaurant metaphor for describing how event driven architecture might work at the edge. So, event driven architecture can originate on devices below what I'm calling the Kubernetes waterline. There are some very simple protocols, like MQTT, that can be implemented on things like Arduinos, particularly if you can afford to skip TLS. I know skipping TLS is risky, but maybe you already live with unencrypted traffic operating on localhost within an individual system, so this concept is nothing new. By skipping encryption there is risk, but if you're connecting isolated devices that are not connected to the internet, with some semblance of physical security, maybe this is an affordable trade off. The bottom line here is that there are solutions for very low end devices, and if you have larger devices there are solutions with bigger feature sets. At gateway and higher tiers you're likely to have plugins to support all the various open source tooling available for eventing, and you can probably afford to turn on full security. 
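To make the restaurant metaphor concrete, here is a minimal in-process pub/sub sketch in Python. This is not any particular broker's API; the `Broker` class, topic names, and handlers are all hypothetical, just illustrating the loose coupling the talk describes: publishers never maintain a distribution list, and each consumer listens only to what concerns it.

```python
# Minimal in-process pub/sub sketch of the restaurant metaphor.
# All names here are illustrative, not a real broker API.

from collections import defaultdict

class Broker:
    """A tiny pub/sub broker: publishers never track who is listening."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver only to handlers subscribed to this topic;
        # everyone else simply never sees the event.
        for handler in self._subscribers[topic]:
            handler(event)

broker = Broker()
kitchen_orders = []

# The kitchen only cares about orders; the host only cares about seating.
broker.subscribe("order-placed", lambda e: kitchen_orders.append(e))
broker.subscribe("guest-arrived", lambda e: print(f"host seats party of {e['party']}"))

broker.publish("guest-arrived", {"party": 1})
broker.publish("order-placed", {"table": 4, "item": "soup"})
```

If the menu changes, only the order publisher and the kitchen handler are touched; the host and the broker stay as they are, which is the maintainability win being claimed here.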
Here are a few words of advice that I've discovered out there with regard to designing an event driven architecture for the edge. When you persist events, view them as a replayable stream history. You don't want the event consumers tied to specific producers. View your events as a record of something that has happened, and so they can't be changed: you can't change history. Messages on common delivery platforms often have certain characteristics, and there are certain things you should and shouldn't do. These can be different across latency boundaries, so think in terms of failure domains and latency domains. There are things you can do within a boundary that you should avoid when crossing those boundaries: synchronous calls might be okay within a boundary, but they're really an anti pattern when you're crossing a big latency chasm. Thank you, Steve. So, one of the answers to event driven architectures in the Kubernetes world is Knative Eventing specifically. Knative Eventing is all about hooking up our Knative services with appropriate event sources. This diagram basically explains it in the most simple way: we have a source of our events, which has a sink, and that sink goes to the appropriate service. One more important detail here is CloudEvents. CloudEvents provides a structure for our events, basically adding all the metadata that we need to describe our events and to have some consistency within our serverless applications. And from that consistency comes accessibility, because then we can create APIs in a lot of different languages and port our solutions and our services or functions to different environments. So here we can see one example where a CloudEvent basically adds some metadata and the data to the picture. 
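For a rough picture of what that metadata looks like, here is a sketch of a CloudEvents v1.0 envelope in its JSON structured form. The attribute names (`specversion`, `id`, `source`, `type`, and so on) come from the CloudEvents specification; the values are made up for illustration.

```python
# A sketch of a CloudEvents v1.0 envelope in JSON structured form.
# Attribute names are from the CloudEvents spec; values are invented.

import json

event = {
    "specversion": "1.0",                       # required: spec version
    "id": "42",                                 # required: unique per source
    "source": "urn:example:home/sensor-1",      # required: event origin
    "type": "com.example.temperature.reading",  # required: event type
    "time": "2020-11-18T16:00:00Z",             # optional timestamp
    "datacontenttype": "application/json",
    "data": {"temperature": 74.12, "unit": "F"},
}

payload = json.dumps(event)     # what actually travels over the wire
decoded = json.loads(payload)   # what a consumer sees
print(decoded["type"], decoded["data"]["temperature"])
```

Because every event carries the same envelope regardless of transport, a consumer can route or filter on `type` and `source` without knowing anything about the producer, which is the portability point being made here.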
So, how this looks in practice is something that we'll demo at the end of this talk. Basically, what we have here is an edge location running in Steve's home, which contains an MQTT broker and small devices that are connected to that broker, sending their telemetry using MQTT. What we will do is use Camel K to connect to that broker, basically getting all these MQTT messages, converting them into CloudEvents, and pushing them to our sink. In this demo, the sink will be a simple event display service which will log these CloudEvents to the console. Camel K originates from the Apache Camel project, which provides a very rich framework for doing enterprise integrations. Camel K is basically a Knative adaptation of all the Camel components, and as we can see here, we can use any of the available hundreds of Camel connectors to connect to different external systems and turn them automatically into Knative Eventing sources. On the next slide, we can see how the Knative Eventing architecture evolves, because hooking one source to one service is easy enough, but not enough to support all the use cases. That's where we can bring in the concept of channels: by sending an event from the source to a channel, we can now have multiple services subscribing to the channel and receiving all these events. Channels can be backed by different persistence techniques: there are in-memory channels, and there are the often used Kafka channels, backed by a Kafka broker, which provide a really good solution for implementing IoT and MQTT solutions on serverless. Extending this concept even further, on the next slide we can see the concept of Knative Eventing brokers, which basically function in a similar way as channels. 
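In Knative manifests, that channel fan-out might look roughly like the following. This is a sketch rather than copy-paste configuration: the resource names are invented, and the exact API versions can differ across Knative releases.

```yaml
# Sketch: one channel, two possible subscribers. Names are illustrative.
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel          # swap for a KafkaChannel to get persistence
metadata:
  name: telemetry-channel
---
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: display-subscription
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: telemetry-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: camel-event-display
```

Adding a second consumer is just a second `Subscription` against the same channel; neither the source nor the existing subscriber has to change.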
The only thing is that instead of subscriptions, we now define different triggers for the broker that will push events to different services. The difference is that on the triggers we can add different kinds of filters, so that we can say we are interested in only a certain type of CloudEvents. And finally, taking all this into consideration, on the next slide we can see a little more evolved scenario where, with this kind of architecture, we can support multiple things. We can support the scenario that we will demo soon, where we have small embedded devices connecting over non-TLS to the local MQTT broker on the edge side, then a Camel MQTT source subscribing to the broker, converting those messages into CloudEvents, and sending them to the channel. But we can also imagine providing a new component, namely an MQTT broker source, that would act as an MQTT broker to external systems, and that would allow more powerful devices which can support TLS to connect directly to the cloud, to this source, which would also transform the MQTT messages coming from the devices into appropriate CloudEvents and send them to the channel. The channel could be backed by Kafka, providing all the persistence and reliability that we would need in such a solution, and then push these events to different services. The event display service, which is the most basic one we can imagine, can be used just for debugging purposes. More commonly, you would push this to some kind of InfluxDB or Prometheus backed by a Grafana dashboard, to have better observability and be able to create different dashboards where you can see all this data. 
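A broker-and-trigger version of the same idea might be sketched like this. Again the names are illustrative, and the `type` value is a hypothetical CloudEvents type, but the shape (one broker, per-consumer triggers with attribute filters) follows the Knative Eventing model described above.

```yaml
# Sketch: a broker plus a trigger that filters on the CloudEvents "type"
# attribute, so a service only receives the events it cares about.
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: temperature-trigger
spec:
  broker: default
  filter:
    attributes:
      type: com.example.temperature.reading   # only temperature events
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: camel-event-display
```

A second trigger with a different filter could feed, say, a time-series database, without the temperature consumer ever seeing those events.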
So this is a call for action on the next slide, and something that we can try to solve in the working group: making MQTT, which is the default IoT messaging protocol, a first class citizen in the Knative Eventing world. All the examples shown here are just the tip of the iceberg, showing how things could work, but there are a lot of things that usually need to be solved in these kinds of systems, like device security and sending commands back to devices. We could provide a lot of integration with existing IoT platforms like Eclipse Hono or the AWS and Azure offerings in this area, provide an easy way to run all of this using different CLIs and UIs, and provide some of these services out of the box, so that people can really quickly get started with IoT on a platform like this. And finally, we could extend a solution like this to multi-cloud or edge node environments using something like Skupper. If you're interested in that topic, I would suggest you take a look at the recording of another session by my colleague Ted, which is happening at the same time as this one, and which explains a little bit more of how serverless workloads can be pushed from the edge to different clouds and different serverless deployments. To wrap up, I would like to go back and do a simple demo of all this, so let me just quickly share my screen and try to do that. Here we are going back to the original idea: having one source and one sink and connecting them via CloudEvents. As I said, our service is a very simple one, I think the simplest possible Knative service we could have. It's named camel-event-display, and it uses the event display image. What this image will do is just receive events on its serving endpoint and log them to the console. 
And our source will be, let me do this like this so it's more visible. We're using a CamelSource. Knative Eventing comes with a lot of off-the-shelf sources, and the CamelSource is one of them. As explained earlier, once you have a CamelSource, you can use different Camel components to connect to different external systems. Right here we are using the Paho component, which will connect to a broker using the Paho MQTT client. This is just a template, because we don't want to give Steve's MQTT broker details to the public, but it will connect to Steve's MQTT broker running at his home using the appropriate username and password, and subscribe to one MQTT topic. In this case, the topic is aw-data-temperature1f, assuming it's a temperature sensor. In this CamelSource we can see that this is the definition of our source; in the lower part here, we see the definition of the sink. For the sink, we can see that we are directly calling the service, and the service will be the aforementioned camel-event-display service. So as not to disturb the demo gods, I have all this running in advance. What we can see in this other window is that we can now pick up the logs coming from the pod serving this event display. As we can see, these MQTT messages are now changed into appropriate CloudEvents. We can see some of the metadata, the headers of the CloudEvents, like the type, which shows that it's generated by the Camel component, the source it's coming from, the timestamp of the event, and then finally the data. The data right now is 74.12 degrees Fahrenheit, I assume. And if we go back to the definition of our source, what else you can see here is that I have commented out a different sink. Instead of going directly to the service, we could go to an in-memory channel or a Kafka channel or some kind of broker defined by the Knative infrastructure, and implement all those other architectures that we saw on the slides before. 
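Since the on-screen YAML isn't reproduced in this transcript, here is a rough reconstruction of what a CamelSource like the one in the demo might look like. The broker address and credentials are deliberately left as placeholders (as in the demo itself), and the exact CamelSource schema and API version are assumptions that may vary between Camel K releases; only the topic name comes from the talk.

```yaml
# Rough sketch of a CamelSource using the Camel Paho component to
# subscribe to an MQTT topic and sink the events into a Knative service.
# Broker URL and credentials are placeholders, not real values.
apiVersion: sources.knative.dev/v1alpha1
kind: CamelSource
metadata:
  name: mqtt-temperature-source
spec:
  source:
    flow:
      from:
        uri: paho:aw-data-temperature1f        # the MQTT topic from the demo
        parameters:
          brokerUrl: tcp://<broker-host>:1883  # placeholder
          userName: <username>                 # placeholder
          password: <password>                 # placeholder
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: camel-event-display
```

Swapping the `sink` reference from the service to a channel or broker is what the commented-out alternative sink in the demo would do.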
So this is an example of another service: basically reading the events coming from Steve's MQTT broker, pushing them to InfluxDB, and then having a Grafana dashboard connected to that InfluxDB, showing this temperature in real time. So if you find content like this useful, we want to invite you to become a member of the Kubernetes IoT Edge Working Group. We're not really writing code in Kubernetes itself, but we are focused on applying Kubernetes with open source tools to edge and IoT use cases. We have online Zoom meetings every two weeks at alternating times to accommodate members in different time zones, as shown here. One series is earmarked for North America, the other for Eastern Europe and China. We encourage a member-driven agenda, so once you join, you can nominate topics for presentations or discussions. We're also operating a group channel on the Kubernetes Slack. We can be contacted using our GitHub IDs; mine is the same as my Twitter handle. So these are our GitHub handles, and you can use them to reach out to us, or we're also available on the Kubernetes Slack. You see here on this slide the link on the Sched site to get a copy of this presentation deck. Thanks for joining us. We're going to hang around for a few minutes for Q&A, and at this point I'm going to turn it back over to the CNCF administrative staff.