Hi, this is Nadja Mustafa from Sri Lanka, and I have Mr. Ishim Mohamed with me today. We hope today's topic will be interesting for you, as we'll be talking about edge computing, and to be specific, Kubernetes at the edge. We're very happy to be part of the speakers at FOSSASIA 2020.

First, let me introduce the concept of edge computing before diving into the topic. The edge discussed here refers to the geographical distribution of computing resources closer to the devices and farther away from the resource pools of the cloud data center. In the real world, edge computing cannot exist alone: it must be connected to some remote data center or cloud, and it lives in a distributed architecture. Take IoT as an example: in addition to sensors that collect data from the surrounding environment, edge devices will also receive control instructions from the cloud.

There are generally two modes for edge devices to connect to the cloud. One is direct connection, and the other is connection through a relay edge node, as shown in the figure. Whether an edge device can connect directly to the cloud or data center depends on whether it has a routable IP address, as mobile phones and tablets do. Most devices, however, communicate with an edge node via a near-field communication protocol, and the edge node in turn connects to the cloud or data center. The edge node is the convergence point for devices.

There are two types of edge nodes, namely the infrastructure edge and the device edge. Compared with the device edge, the infrastructure edge has stronger computing and storage capabilities and is usually connected to the data center through the backbone network, so it has greater network bandwidth and more reliable network connections; examples are CDN nodes and game servers. In addition to running containers, some infrastructure edge nodes even have enough resources to run full Kubernetes.
The next thing I want to introduce under the main topic is: what is Kubernetes? Kubernetes is a container orchestration platform which is suitable for large applications that need high availability. When we talk about the advantages of Kubernetes, the first things we can mention are agile application creation and deployment, the ability to do continuous deployment, and portability across operating systems and clouds. And we could say much more about the advantages of Kubernetes.

The next subject we are going to talk about is: why Kubernetes at the edge? Kubernetes has become the cloud-native standard and is capable of providing a consistent experience on any infrastructure. We often see the combination of containers and Kubernetes delivering a tenfold efficiency gain in DevOps, and recently more and more Kubernetes clusters are running outside the data center, at the edge. Therefore, if you want to deploy more complex applications at the edge, Kubernetes is an ideal choice.

The features that make it the best choice are these. The lightweight nature and portability of containers is very suitable for edge computing scenarios. Kubernetes has proven to be very scalable and is able to provide a consistent experience on top of any underlying infrastructure. It supports both clustered and standalone operations and maintenance modes. It offers workload abstractions such as Deployments and Jobs, along with rolling upgrades and rollbacks of applications. A strong cloud-native technology ecosystem has formed around Kubernetes, so for monitoring, logging, CI, storage, and networking you can find many ready-made tool chains. It supports heterogeneous hardware configurations. And finally, users can use familiar kubectl commands or Helm charts to push IoT applications from the cloud to the edge.
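To make that last point concrete, here is a minimal sketch of what pushing an IoT application to ARM edge nodes with familiar tooling can look like. The application name and image below are made-up placeholders; the Deployment manifest is built as a plain Python dict and printed as JSON, which kubectl accepts just like YAML.

```python
import json

def make_edge_deployment(name, image, replicas=1, arch="arm64"):
    """Build a minimal Kubernetes Deployment manifest as a plain dict.

    The nodeSelector uses the well-known label kubernetes.io/arch to pin
    the workload onto edge nodes with the given CPU architecture.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "nodeSelector": {"kubernetes.io/arch": arch},
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }

# "sensor-reader" and the image reference are hypothetical examples.
manifest = make_edge_deployment("sensor-reader", "example.io/sensor-reader:1.0")
# kubectl accepts JSON as well as YAML, e.g. `kubectl apply -f app.json`.
print(json.dumps(manifest, indent=2))
```

The same manifest could of course be written as a Helm chart template; the point is only that the cloud-side tooling stays unchanged.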
Also, edge nodes can be directly mapped to Kubernetes node resources, and the Kubernetes extension API can implement the abstraction of edge devices. So that is why Kubernetes at the edge. The other sub-topics, the problems in deploying Kubernetes at the edge and how we are going to deploy Kubernetes at the edge, will be covered by my colleague Ishim Mohamed.

Thank you so much, Nadja, for your great introduction about Kubernetes, the edge, and why Kubernetes can be used at the edge. So now let's have a look at the problems, or the potential problems, that we may face when we deploy Kubernetes at the edge, I mean Kubernetes as it is. As Nadja said previously, deploying Kubernetes at the edge would be really beneficial; however, Kubernetes is designed for cloud data centers. So to use Kubernetes capabilities at the edge, Kubernetes or its extensions need to solve some problems.

ARM's low power consumption and multi-core characteristics make ARM CPUs widely used in the IoT and edge field; however, most Kubernetes distributions don't support the ARM architecture. Many device edge nodes have limited resource specifications, and in particular their CPU processing power is weak, so full Kubernetes cannot be deployed on them. Kubernetes also relies on the list/watch mechanism, so it doesn't support offline operation, while going offline is quite normal for an edge node, for example on device sleep, restart, and so on. Kubernetes operations and maintenance are still too complicated compared to the subset of functions used in edge scenarios. And there are special network protocol and topology requirements at the edge, because device access protocols are often not TCP/IP protocols: they are protocols such as Modbus or OPC UA for the industrial Internet of Things, and Bluetooth or ZigBee for the consumer Internet of Things.
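To illustrate the list/watch problem mentioned above, here is a deliberately simplified toy model, not real Kubernetes client code: a client first lists the current state, then watches for changes from that resourceVersion onward, which assumes a live connection to the API server.

```python
# Toy model of Kubernetes' list/watch pattern, showing why a flaky edge
# link is a problem: while disconnected, the client learns nothing, and
# on reconnect it must catch up from the last resourceVersion it saw.
class ToyApiServer:
    def __init__(self):
        self.resource_version = 0
        self.events = []  # (resourceVersion, event) tuples

    def record(self, event):
        self.resource_version += 1
        self.events.append((self.resource_version, event))

    def list(self):
        # A LIST returns current state plus the version to watch from.
        return self.resource_version

    def watch(self, since):
        # A WATCH replays every event newer than `since`.
        return [e for rv, e in self.events if rv > since]

server = ToyApiServer()
server.record("pod-a ADDED")

# Edge node connects: list, then watch from that point.
rv = server.list()

# Link drops; changes keep happening in the cloud.
server.record("pod-a MODIFIED")
server.record("pod-b ADDED")

# On reconnect the node must replay everything it missed. While it was
# offline it had no way to learn about these events at all.
missed = server.watch(since=rv)
print(missed)  # ['pod-a MODIFIED', 'pod-b ADDED']
```

A standard kubelet additionally gets marked NotReady by the control plane once its heartbeats stop, which is why edge-focused projects have to add local autonomy on top of this pattern.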
Regarding how to use Kubernetes at the edge, a survey by the Kubernetes IoT Edge Working Group showed that 30% of users want to deploy a complete Kubernetes cluster at the edge, while another 30% of users want to deploy the Kubernetes management plane in the cloud and run only a Kubernetes agent at the edge node. So, having seen the potential problems of deploying Kubernetes as it is at the edge, let's see what solutions exist to overcome these limitations.

KubeEdge is one of those solutions: it is one of the first open intelligent edge platforms based on Kubernetes extensions to provide cloud-edge collaboration capabilities, and it is also CNCF's first formal project in the intelligent edge field. KubeEdge focuses on the issues you can see on the slide: cloud-edge collaboration, heterogeneous resources, large-scale deployment, light weight and low footprint, and a consistent device management and access experience.

This is how KubeEdge's architecture looks; it is clearly divided into three layers: the cloud, the edge, and the device layer. It is a complete open-source edge cloud platform from cloud to edge device, eliminating users' concerns about vendor lock-in. KubeEdge's edge process consists of the following five components. Edged is a newly developed lightweight kubelet that implements the lifecycle management of Kubernetes resource objects such as pods, volumes, nodes, etc. MetaManager is responsible for the persistence of metadata, which is the key to the autonomy of edge nodes. EdgeHub is a multiplexed message channel that provides reliable and efficient cloud-edge information synchronization. DeviceTwin is used to abstract physical devices and generate a mapping of device states into the cloud. And finally, EventBus subscribes to device data from an MQTT broker. KubeEdge's cloud process includes the following two components.
One of them is CloudHub, which is deployed in the cloud and receives the information that EdgeHub syncs up to the cloud. The other one is EdgeController, which is also deployed in the cloud and controls the synchronization of state, applications, and configuration between the Kubernetes API server and the edge nodes. Finally, the Kubernetes master runs in the cloud, so users can directly manage edge nodes, devices, and applications from the cloud through the kubectl command line. The usage habits are exactly the same as with native Kubernetes, and there's no need to adapt to a new technology. This architecture diagram I have taken from kubeedge.io.

KubeEdge has the following capabilities; most of them we have already mentioned. It supports offline mode, which is a very critical or crucial functionality at the edge. It supports node, cluster, application, and device management. It can run on low-resource hardware such as home devices. It is platform independent, which means it can run on any cloud and any OS. It also supports Modbus, MQTT, and other industry-specific protocols. And it is extensible.

Let's have a look at the KubeEdge deployment model. In KubeEdge the management plane is deployed centrally in the cloud, as we have already seen in the architecture diagram, and the edge nodes run only the Kubernetes agent, which doesn't need many resources. From the Kubernetes perspective, the edge nodes plus the cloud form one complete Kubernetes cluster. This deployment model can meet the deployment requirements of both device edge and infrastructure edge scenarios. So that is how KubeEdge is deployed.

Next, let's have a look at another option, which can be an alternative to KubeEdge and also solves the problem of deploying Kubernetes at the edge: K3s. K3s is an official Kubernetes distribution, again certified by CNCF.
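Before we dig into K3s, the edge-autonomy idea behind KubeEdge's MetaManager described above can be sketched roughly as follows. This is only an illustration of the concept under assumed simplifications, not KubeEdge's actual implementation (KubeEdge is written in Go and its real schema and message flow differ).

```python
import json
import sqlite3

class LocalMetaStore:
    """Persist desired state locally so an edge node can restart its
    workloads after a reboot, even while disconnected from the cloud."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value TEXT)"
        )

    def save(self, kind, name, spec):
        # Called whenever a config message arrives from the cloud.
        self.db.execute(
            "INSERT OR REPLACE INTO meta VALUES (?, ?)",
            (f"{kind}/{name}", json.dumps(spec)),
        )
        self.db.commit()

    def load_all(self, kind):
        # Called on startup to restore workloads without the API server.
        rows = self.db.execute(
            "SELECT key, value FROM meta WHERE key LIKE ?", (f"{kind}/%",)
        ).fetchall()
        return {k.split("/", 1)[1]: json.loads(v) for k, v in rows}

# While connected, every pod spec pushed from the cloud is cached...
store = LocalMetaStore()
store.save("pod", "sensor-reader", {"image": "example.io/sensor:1.0"})

# ...so after an offline reboot the edge agent can restore its pods from
# the local store instead of asking the (unreachable) API server.
print(store.load_all("pod"))
```

The pod name and image here are hypothetical; the takeaway is just that a small local database of synced metadata is what turns "connection lost" into "keep running with the last known desired state".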
When it comes to open-source timing, it came a bit later than KubeEdge. K3s has been designed specifically for research-and-development and operations-and-maintenance personnel who run Kubernetes in resource-constrained environments. The purpose of K3s is to run small Kubernetes clusters on edge nodes with CPUs such as x86 and ARM64. In fact, K3s is based on a specific version of Kubernetes, with the changes made directly in the code. K3s is divided into two parts, a server and an agent, as you can see on the screen. Basically, the server is the Kubernetes management plane components plus SQLite and a tunnel proxy, and the agent is the Kubernetes data plane plus a tunnel proxy; these two tunnel proxies communicate with each other.

In order to reduce the resources required to run Kubernetes, K3s made the following changes to the native Kubernetes code. If you have a look at the K3s code, they have simply removed some older and non-essential code: K3s doesn't include any non-default, alpha, or outdated Kubernetes features, and it also removes all non-default admission controllers, in-tree cloud providers, and storage plugins. K3s is packaged as a single integrated process in order to save memory. By using containerd to replace Docker, K3s significantly reduces the runtime footprint. It also uses SQLite instead of etcd as the management plane data store. And the developers of K3s added a very simple installer, so anybody can install K3s easily.

All components of K3s, including both server and agent, run at the edge; this is not like the previously demonstrated KubeEdge. Both of these components run at the edge, so there is no cloud-edge collaboration involved.
So you can totally decouple from the cloud with the help of K3s if you want to. However, to put K3s into production, there should be a cluster management solution on top of K3s that is responsible for cross-cluster application management, monitoring, logging, alarms, policies, and security. So if we have a look at K3s's capabilities: it has integrated single-process packaging, a reduced footprint as we mentioned, SQLite instead of etcd, and it comes with a simple installer.

This is how the deployment model of K3s looks. It runs a full Kubernetes cluster at the edge, which means that K3s is a decentralized deployment model and each edge site requires its own Kubernetes management plane. The problems with this deployment model are these. Installing a Kubernetes management plane at the edge will consume more resources, so this deployment model is only suitable for infrastructure edge scenarios with sufficient resources, not for device edge scenarios with fewer resources. The network between clusters needs to be connected. And in order to manage the edge Kubernetes clusters, a multi-cluster management component needs to be superimposed.

Further, on cloud-edge collaboration: KubeEdge enables offline autonomy of nodes through a message bus and local metadata storage. The control plane configuration and real-time status updates that users expect are synchronized to local storage through messages, so that the node will not lose its management metadata even if it is restarted while offline, and it keeps the ability to manage the devices and applications on the node. Unfortunately, K3s doesn't have this capability.

When it comes to IoT edge scenarios using Microsoft Azure, there are some other options besides K3s and KubeEdge as well, namely Azure IoT Edge and the Virtual Kubelet.
These are basically part of Azure edge computing and work well with any Kubernetes master, which means it can be Azure Kubernetes Service or any other managed Kubernetes service, and they can also be used to run Kubernetes at the edge, in Azure specifically. So those are the options when it comes to Kubernetes at the edge on Azure.

In summary, we have had an extensive look at both KubeEdge and K3s, and we also had very brief information about Azure IoT Edge and the Virtual Kubelet. It seems we can definitely bring Kubernetes capabilities to the edge, definitely with some trade-offs on the other hand: some of the default configurations and features that are there in full Kubernetes are not there on these two platforms, and so on. Both KubeEdge and K3s are very young and excellent open-source projects, so we believe that in the future they will progress together, learning from each other, and better address the needs of edge computing users. Thank you so much.