Hi, my name is Halloween; you can call me Halloween. I'm an engineer from Huawei, currently working on open source projects around edge computing and participating in and contributing to related communities like Kubernetes, KubeEdge, Akraino, and OpenStack.

So the agenda today: we're going to get into a survey of solutions available now that relate to edge, IoT, and Kubernetes. Because this is a survey and we're covering many things we found, it won't be a deep dive into any particular one of them, but these slides have links so you can get detailed background on each. The links are curated; I took the time to look for good blog articles, and there's always a link to the main project. Everything here, by the rules of the CNCF, is open source, and it relates in some way to supporting edge and IoT applications using Kubernetes. Finally, at the very end, I'm going to describe the IoT and Edge Working Group and how you can get involved. If you like the content of this session, we have meetings every two weeks: one at a time appropriate for Asia and Eastern Europe, and another more comfortable for people in a United States time zone. But we'll get into that at the end.

A standard Kubernetes architecture was really designed to support a public cloud environment. It put in place a control plane that manages one or more nodes that actually run the containerized workloads, and it wasn't originally designed to support some of the things you find with IoT and edge. If you tried to use a central, cloud-hosted control plane with 100 edge nodes, or 1,000, or 10,000, Kubernetes could cover that many nodes in one public data center, but it would expect pretty low-latency network communications to those nodes, given that they'd all be in the big data center.
And the network communications would generally be expected to be quite reliable, maybe with dual network redundancy, whereas in the real world with IoT and edge, things aren't always like that. So we're going to cover some of the aspects unique to IoT and edge that disrupt the original design of Kubernetes and might call for alterations, compromises, or add-ons to get it to work.

One option you have when you land a Kubernetes-based infrastructure at the edge is to simply put a whole cluster at every edge location: a control plane along with one or more worker nodes. That might work for some people if they've got enough infrastructure to support it. Running whole clusters with a control plane at each location might entail a fair amount of hardware, as well as the physical space and the money to buy it. And if you're running 10,000 of these, you're faced with dealing with 10,000 control planes, unless you have a mechanism to federate them.

Option two is deploying a central control plane managing edge locations. That central control plane could be in a public cloud or in your own on-prem data center, but the idea is that you would install a Kubernetes control plane and have it connect to these edge nodes and manage resources there. There are various ways to do this; "central control plane" doesn't actually pin down all the details. What I'm getting at is: do you want the central control plane managing literal Kubernetes nodes at the edge locations? Or is it possible to use the Kubernetes control plane, with modifications, to control things at the edge that aren't actually Kubernetes nodes? We'll get into this, but there are variants out there in the open source world that do it both ways.
That could mean things like using a Kubernetes control plane to control edge nodes that simply have a Docker runtime and aren't actually Kubernetes nodes, or even edge devices that aren't running Docker at all, with the Kubernetes control plane enhanced to manage these things in a Kubernetes-like way.

The third option is something called Virtual Kubelet. Normally a Kubernetes node, the one that runs the workload, has something called a kubelet. But it's possible to essentially fork the kubelet design so that the Kubernetes control plane thinks it's talking to a kubelet, but it's really talking to something acting as a proxy. That proxy would take things like pod specs, manipulate them, and send them out to the leaf edge nodes, perhaps in a different form. It would take the Kubernetes paradigm of making a declarative statement of what you want, the desired state, and cause it to happen. It might even save these packets of desired state so that if your network connection is intermittent and you get disconnected, the edge side would continue attempting to achieve the last known desired state and engage in reconciliation as the connection gets re-established.

We'll move on to some variants of these different forms of using Kubernetes for IoT and edge. I'm going to cover several, and there are some differentiating factors. Some of them are pure Kubernetes, meaning pure upstream with no modifications. Some have forked components of Kubernetes, where some parts are standard and others have been modified in some way.
Some add specialized network support, which is often very desirable when you're doing IoT and edge, where you want to use non-HTTP protocols like MQTT and have that managed. They might even handle complex network issues like edge nodes connecting to other edge nodes without hairpins, where the traffic goes up to the public cloud and comes back down, which would be undesirable.

There are often added security features needed when you go to the edge, because you've got issues like a lack of physical security, where someone could steal an unattended edge node. If your security is based on certificate files installed there, you have to worry about a stolen device becoming a vector for getting access to your network, all the way back to your control plane in the public cloud.

Some add management capability for the hardware, because if the goal is managing edge locations without skilled administrators being there, you might want to do more than just manage containerized workloads. You might want to keep track of hardware health and firmware on these devices, and some of the things we'll describe are prepared to do this. Another common feature is dealing with the reduced resources commonly available at the edge.

We'll go into form one from the previous slide, what I'd call whole clusters with the control plane at the edge. There are a couple of links here to use case examples from retailers based in the US. One is Target, a retail chain common in North America; they wrote a blog article describing how they're running Kubernetes in retail stores. The second is Chick-fil-A, a fast food chain selling chicken sandwiches. They gave a very interesting presentation at KubeCon North America in December where they described three-physical-node tiny Kubernetes clusters running in these fast food establishments. I can't go into detail on these, but the link is very interesting.
In the Chick-fil-A case, the presentation got recorded as part of KubeCon North America; go take a look if that interests you.

The first solution I'm going to cover is Rancher K3s. He's not going to speak, but we've got a representative from Rancher here if anyone has questions at the end. Rancher K3s is an edge-optimized micro-distribution of Kubernetes. What they did was take Kubernetes and, if you see the black portion, remove some things that aren't typically going to be necessary. For example, Kubernetes in its current form has multiple cloud providers compiled into the code: it supports running in the Amazon cloud, running in the Google cloud. If you're running at an edge location, you're clearly not going to be on the Amazon or Google cloud, so that code is just unnecessary, and if you remove it, you make the binary smaller. Likewise, it includes a whole bunch of storage drivers, and it's very unlikely you're going to need all of those.

K3s adds features for simplified installation, TLS management, and automatic manifest and Helm chart handling. There are a couple of swaps. etcd was deemed to be resource-intensive, so K3s swaps it out by default for SQLite; I believe it's an option to go back to etcd if you want to and you're prepared to give it the resources. They also made some decisions for you. Standard Kubernetes supports any number of container runtimes as plug-ins, but in the interest of making it easier to deploy and manage, K3s by default swaps out Docker for containerd. It's also opinionated by default on your CNI, where it chooses Flannel; for DNS it chooses CoreDNS; and for the ingress solution, which normally you'd pick yourself, it bundles in Traefik. Once again, that simplifies things if there's one standardized thing built in. By the way, it supports nodes with less than 4 GB of memory, which is a big deal if you have a limited budget and limited space. Clearly, it's not expecting rack-mount servers.
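To give a sense of how lightweight the setup is, a minimal K3s bring-up of a server plus one edge agent looks roughly like this. This is a sketch based on the K3s install script documented on the project landing page; the hostname and token below are placeholders you'd replace with your own values:

```shell
# On the server (control plane) node: download and run the install script.
curl -sfL https://get.k3s.io | sh -

# The installer writes the join token for agents here.
sudo cat /var/lib/rancher/k3s/server/node-token

# On each edge worker (agent) node: point at the server and supply the token.
# "myserver" and "mynodetoken" are placeholders for illustration.
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -

# Back on the server: K3s bundles kubectl, so you can verify the nodes joined.
sudo k3s kubectl get nodes
```

The single-binary install plus bundled kubectl is a big part of why it fits small, unattended edge hardware.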
For resources, if you want to learn more about Rancher K3s, the landing page is good, and there are a couple of very recent blogs. K3s right now is, I think, considered alpha or beta and still moving toward a declared stable release, and that YouTube video is actually very good at giving you an overview.

Now we're going to move on to a different form of Kubernetes managing the edge, where you've got a central Kubernetes control plane in a cloud or a big data center managing things out at the edge. A use case example here is the German company Bosch, which has a very good blog on using Kubernetes in this mode. At this point, I'm going to turn it over to my colleague, who will list some open source projects related to this architecture.

First, let's look at Azure IoT Edge. What does it do? It moves cloud analytics and custom applications to devices, so that the application developer can focus on the business logic instead of data management. You can pack your application into a standard container, deploy those containers to any of your devices, and monitor them from the cloud. There are two solutions for combining Azure IoT Edge with Kubernetes. The first uses Virtual Kubelet: when users need to deploy applications on their own private edge devices, they use a kubectl command to send a pod specification to their central cloud (for example, a managed service such as AWS Fargate). Virtual Kubelet can then talk with Azure IoT Hub remotely, translate the pod specification into an IoT Edge deployment, and submit it to IoT Hub. IoT Hub can then orchestrate the applications to run on the specific devices. Here, the on-prem devices must have the IoT Edge runtime and Docker to run application containers, so the devices are not directly controlled by Kubernetes. If you want more details about Virtual Kubelet, you can visit this resource link. The other solution is running Azure IoT Edge on Kubernetes.
Likewise, when users need to deploy applications on their devices, this project will pull down the deployment from IoT Hub.

Do you mind if I step in and explain this a little? The first slide was the original form of Azure IoT Edge, which did not run Kubernetes at the edge locations. This new one, the preview, which just came out a few months ago, is a different form that actually runs Kubernetes at the edge locations. The original one is still there, so you have a choice; this is just the second version.

This project will pull down the deployment from IoT Hub and translate it into Kubernetes primitives, and the Kubernetes clusters at the edge will then orchestrate it to run on the specific devices. Here there is no hard requirement for Docker; you can just use Kubernetes instead, and this solution enables users to deploy Azure IoT Edge workloads to a Kubernetes cluster on their own premises without needing Virtual Kubelet. So obviously, the devices are controlled by the Kubernetes cluster at the edge. For more details about how to run Azure IoT Edge on Kubernetes, you can visit this link.

KubeEdge, which is the open source project I'm currently working on, is now a CNCF sandbox project. KubeEdge offers two kinds of reference architectures: not only can it remotely control the edge, but it can also deploy clusters at the edge. For central Kubernetes remotely controlling the edge, KubeEdge extends cloud-native containerized application orchestration capabilities to hosts at the edge. It means users can orchestrate applications and manage devices and edge nodes just like a traditional Kubernetes cluster in the cloud. KubeEdge has a cloud part and an edge part. The cloud part has the CloudHub component, the EdgeController, and the DeviceController. The edge part has a number of custom components, including a lightweight kubelet equivalent.
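To make the "just like a traditional cluster" point concrete, here is a minimal sketch of targeting a workload at a specific edge node through the cloud-side API server, using only standard Kubernetes primitives. The node name, app name, and image below are made up for illustration:

```yaml
# Applied with kubectl against the cloud-side control plane;
# KubeEdge delivers the desired state down to the edge node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-sensor-app            # hypothetical application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-sensor-app
  template:
    metadata:
      labels:
        app: edge-sensor-app
    spec:
      nodeSelector:
        kubernetes.io/hostname: edge-node-01   # made-up edge node name
      containers:
      - name: sensor-reader
        image: example.com/sensor-reader:1.0   # placeholder image
```

The point is that nothing edge-specific appears in the manifest itself; the edge delivery is handled by the cloud and edge components.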
It supports edge-side autonomy when the network is down, it supports multiple device protocols, and it needs only a small amount of memory to run on an edge node. For deploying a lightweight cluster at the edge, there's a new feature called EdgeSite in the latest release, KubeEdge 1.0. EdgeSite uses a lightweight datastore such as SQLite or a lightweight etcd and moves the cloud part down to the edge, so the controllers and the worker nodes can run at the same edge site; they don't need the original CloudHub and EdgeHub components. If you want more details about EdgeSite, you can go listen to my colleague Kelvin's session about KubeEdge this afternoon. If you're interested in KubeEdge, you can access these links, and you can scan the QR codes to follow our WeChat public account and join our technical exchange group. By the way, the KubeEdge team has developed a demo for this KubeCon; you can go to Huawei's booth to learn more about it. Basically, that's what I have.

Now we're going to shift. What we've covered so far is variants of Kubernetes itself, either pure upstream Kubernetes or specialized forms of it, but I'm going to move on to tools and applications that run with Kubernetes, either on it or alongside it, that are useful in the IoT and edge space. I won't contend I found every one of these, and I intentionally left off commercial products; in my slide deck, these are all open source projects. This comes under the charter of the IoT and Edge Working Group, because at our meetings we discuss not just Kubernetes itself, but also applications and tools that run with, on, or alongside Kubernetes.

The first one is a project called EnMasse, which is self-service messaging on Kubernetes with built-in authentication and authorization. It manages messaging tools that you might typically need to support inter-node communication when you're landing at the edge with large fanouts. It runs on Kubernetes.
EnMasse manages the communication, so the management tier, effectively an orchestration layer for your messaging, can run either in a public cloud or in an on-prem cloud. It manages different patterns like request/response, publish/subscribe, and events.

Eclipse Hono provides a remote service interface for managing large numbers of IoT devices, implementing a back end and interacting with them in a uniform way, regardless of the device communication protocol. So you might even have devices with a proprietary protocol, or something that uses MQTT. Hono is not the messaging protocol itself; it simply puts in place an interface so that you can manage these things in huge numbers. It's designed to support data ingestion, meaning telemetry data, as well as command and control, the control plane kinds of things going out to edge devices. And it has features supporting provisioning and security. This control plane can be deployed on Kubernetes itself if you're running Kubernetes in a public cloud. The links are here to learn more, including a video that goes into a full hour's worth of what Eclipse Hono is about, more than I could cover here.

Eclipse Ditto is an open-source project intended to implement the concept of digital twins. When you've got things like IoT devices out at the edge that aren't always able to connect back to the cloud, a digital twin gives you a stand-in for that device up in the public cloud that reflects the last known state of the device. If it has sensors, you can get the last known sensor reading. You can send the twin commands expressing a desired state, and if a connection is established at that moment, they get relayed down to the physical device at the edge; if it isn't, they'll be there on standby waiting for the connection to be re-established. That flow between the twin and the device is bidirectional. So it's intended to deal with scenarios where you have intermittently connected edge devices.
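As a rough sketch of what talking to a twin looks like, Ditto exposes twin state over an HTTP API. The host, credentials, thing ID, and feature name below are made-up placeholders, and you should check the Ditto documentation for the actual endpoints and authentication options:

```shell
# Create or update a digital twin for a device (thing ID is illustrative).
curl -u user:password -X PUT \
  -H "Content-Type: application/json" \
  -d '{"features": {"temperature": {"properties": {"value": 21.5}}}}' \
  https://ditto.example.com/api/2/things/my.namespace:sensor-1

# Read the twin's last known state, even if the device is offline right now.
curl -u user:password \
  https://ditto.example.com/api/2/things/my.namespace:sensor-1
```

The second call illustrates the key property: the read is served from the twin's last known state, so it succeeds regardless of whether the physical device is currently connected.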
The links are there, as well as a video that gives a deep dive into what Eclipse Ditto is about.

Eclipse hawkBit is a framework for rolling out software updates to edge devices, as well as controllers and gateways. Kubernetes itself publishes software in the form of Docker container images, but what do you do when you've got edge devices with packages of things that aren't containerized software? That could be executable binaries, perhaps firmware, whatever might be associated with the device. Eclipse hawkBit is a project intended to support that, scalable to millions of devices and terabytes of software on a global scale. So if you have a need for something like this, you might want to take a look at it rather than build your own from scratch. It can be deployed on Kubernetes at the control plane level via a Helm chart. Once again, I've got links there and an architecture diagram, and in the interest of time we can't do a deep dive, but I think the landing page and the video will help you learn more.

Eclipse ioFog. Now, this one I just added this morning; it was announced only four hours ago. I had to modify my deck this morning because I saw a tweet go out from somebody with the CNCF, so I haven't had time to drill very deep into what ioFog is about, but it seems pretty important. It's designed to install on any device, even minimal devices that couldn't support a Docker runtime, meaning edge devices, and it makes the Kubernetes control plane of any distribution edge-aware, allowing you to manage these devices at scale. I've put links to two blogs about this, as well as the landing page for the open source project. Beyond that, I'm not going to tell you more than I know, other than that this was just announced today and I'm trying to cover what's out there.

There are a couple of others that, in the interest of time, I don't have full pages on.
These two are EdgeX Foundry and Akraino, open source projects related to the edge. The reason I don't devote a full page to them is that, in their common form, they're not necessarily Kubernetes-centric. But I found evidence that some people are using them alongside Kubernetes, or managed by Kubernetes, even if Kubernetes isn't the mainstream way to operate these things today. So I thought I'd throw those links in there, because I'm trying to be fairly complete in researching what's out there, so that you don't go reinventing wheels when somebody is already looking at open source approaches to these IoT and edge problems.

So thank you; that's the end of the presentation. This deck is available at that link. I'll apologize for what you'll find at that link: because ioFog just came out, the deck there doesn't have that page, but I will upload a fresh PDF to the Sched site, the conference site, so we'll get there eventually.

Our group has regular meetings twice each month, and we've split them: one cycle of meetings is convenient for North and South American time zones, and the other is designed for Asia and Eastern Europe. The times of the next meetings are indicated there. To give you an idea of what we cover, at the last one we discussed running a container image repository at an edge location, which is of interest if you're running Dockerized workloads at your edge locations. We've got a YouTube channel with recordings of some past presentations, and there are also recorded meeting notes and agendas. You're welcome to join the group. You can't access the Zoom meetings or the meeting notes unless you join, but it's easy: it's a standard Google group, you give it an email address, and you're in. Join the group and you'll automatically get access to the documents.
We also operate a channel on Slack, and the group maintains a white paper that, in multi-page written form, attempts something along the lines of the survey I just did in this presentation, although it's been several months since that white paper was updated. Some other things you might be interested in: KubeCon Barcelona took place just a month ago, and the presentation of the IoT and Edge Working Group there got recorded and is published on the Sched site for KubeCon Barcelona. There was some good, deeper-dive coverage of security aspects of running at the edge, as well as KubeEdge and an open source solution for running MQTT on Kubernetes.

That said, let's move on to Q&A. I'm going to go back and just leave the link to the deck up, but if anyone's got questions, Luya and I can attempt to answer, and we have a Rancher representative here in the front row too if you have any questions about K3s. Yes, wait till you get the microphone.

Just a simple question: is there a well-accepted definition of edge?

I'm not going to purport to be the person to make that definition. My own reaction is that I've probably seen two or three of them, and I don't want to be the kingmaker. For the purposes of our group, we're declaring that while a lot of people think edge means sensors and control, we're open to it also being the remote branch office: scenarios like landing workloads in small retail stores or bank branch offices. In general, to me, edge would encompass any location that doesn't have permanent IT people managing what goes on there, so you've got a lot of locations and no highly trained administrators on site. You're trying to get things done without skilled staff being physically present; that, to me, is what edge means. There's often also an aspect where you can't assume communications to that location are always good, because the realities of the world are that things break.
That might be another thing you can use to define what's edge, versus just a different aisle of a big data center, which I don't think is edge. Okay, our time is up, but we could probably spend a little time out in the hallway if anybody has other questions or just wants to chat. Thank you for coming.