Hello, this is John von Wurst, I'm the Director of Cloud Solutions here at SUSE. Today we're going to do a presentation on Kubernetes on the Edge and a quick demo of an ARM-based implementation. Thorsten, why don't you do a real quick intro?

My name is Thorsten Kukuk. I'm a Distinguished Engineer at SUSE, the Senior Architect for SUSE Linux Enterprise Server and MicroOS, and I'm leading the Future Technology Team.

Why would you consider putting Kubernetes at the edge of your network? Well, edge computing tends to mean IoT-type workloads with special requirements. In general, it's less expensive to put things in the data center or cloud than it is to put them out at the edge. The three main reasons for edge computing are usually network related: latency, bandwidth limitations, and cost. Edge workloads also need to keep running when network connectivity is not guaranteed; you don't want your factory going down if you lose your internet connection.

Going forward, in the next five years 75% of all data will be generated outside of the data center, and in many cases that data needs to be pre-processed or filtered at the source rather than sent to a central location. As you connect these systems to the internet, you need to worry about security. There's a lot of malware out there now targeting IoT-type devices, and it's imperative that your connections are secure and managed the right way.

In addition, companies like Tesla have taught people to expect updated feature sets delivered over the network. The old method of installing firmware on your edge device for the life of the product is no longer acceptable. On top of that, managing thousands or tens of thousands of devices the old way is no longer feasible: you can't roll a truck or send a technician every time you need to update a device. You must leverage IT technologies to deploy and maintain these edge devices with complete security. Methodologies must be DevOps friendly to enable rapid development, and you must be able to deploy the same way regardless of location, whether at the edge, in the cloud, or in the data center. This is what containers deliver for you. You also need policy-based scheduling and management to enable you to roll out patches and updates over time rather than all at once.

At SUSE, we work with several automobile companies. One use case for Kubernetes at the edge we're working on is manufacturing equipment monitoring and maintenance. The requirement is real-time quality-control measurements to avoid downtime. This helps transition from waiting for things to break to proactively scheduling their maintenance. Our customers are running SUSE Linux for ARM on industrial Raspberry Pi 3Bs, and interestingly enough, a good portion of our edge activities revolve around the Raspberry Pi. Our customers have built their own monitoring and signaling software with a touchscreen as the worker interface, and they do all their communication over Wi-Fi.

The ARM system-on-a-chip architecture is prevalent throughout IoT and edge implementations. We work with a number of edge equipment manufacturers that base their products on ARM core microcontrollers with SUSE Linux for ARM running on the CPU. If you're considering a project, please contact the specific hardware vendor to make sure they have tested our operating system. This ecosystem changes rapidly and there's a lot of innovation in this space.

One of the components of our edge container strategy is openSUSE MicroOS.
This is an offshoot of our openSUSE Linux development process and is specifically tuned for containers. For edge architectures, a small footprint is critical, as most devices don't have full-blown server-class hardware. MicroOS is focused on performance at the edge and in container environments. We've taken the scalability and security lessons we've learned from enterprise-class Linux and applied them to this new container operating system.

Security is there by design. openSUSE MicroOS is an immutable operating system that cannot be altered during runtime. This provides the same experience every time a system is booted. Systems built on MicroOS are scalable because there's no configuring of individual instances at runtime. Additionally, any time an update fails, you can always roll back to the previous working instance. This greatly reduces the risk of maintaining your devices.

Because openSUSE MicroOS is an immutable operating system, the file system is read-only. Atomic updates mean that any changes are made in full or not at all; the system rolls back immediately if an update is not successful. openSUSE MicroOS uses the Btrfs file system, which gives us very efficient storage and snapshot capabilities. All of the configuration files stored in the /etc directory are included in the snapshot and are rolled back if the update didn't take. openSUSE MicroOS is flexible, with no new package formats and no size limitations on the partitions.
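The presentation doesn't show the commands behind this update-and-rollback workflow; on MicroOS it is driven by transactional-update, so a minimal sketch looks roughly like this (the package name is just an example):

```
# Install a package into a new Btrfs snapshot; the running,
# read-only system is not touched.
transactional-update pkg install vim

# The change only becomes active after booting into the new snapshot.
reboot

# If the new snapshot turns out to be broken, return to the
# last known-good snapshot.
transactional-update rollback
reboot
```

Because /etc is part of the snapshot, configuration changes travel with the rollback as well.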
Now I'm going to turn the presentation over to Thorsten to go through our demo.

I will now do the second part: explain the demo setup and then run the demo. If you speak with customers, the requirements for the edge are very often similar. You want an immutable OS, because these machines are very often somewhere outside and you need to send a technician to repair them if something goes wrong. So it should be very robust, reliable, and self-healing, and for this we use openSUSE MicroOS with a containerized workload. Networks are unreliable: if you think about trains, they have no network connection while in a tunnel, but the whole thing needs to continue to work. Since we will use Raspberry Pis for this demo and they don't have a real-time clock for correct time and date, we use an external real-time clock so that certificates and similar things always work. And the whole thing needs to be usable without an extra monitor; you don't want a big rack full of monitors for a few small Raspberry Pis, so you need an LCD display which shows the status and IP addresses.

Demos always provide a lot of buzzwords, and of course you also want to mention a lot of buzzwords so that everybody finds something they know in it. So we will have edge, we will have Kubernetes, we will have containers, we will have AI and machine learning, we will have ARM, in this case Raspberry Pi, and for cluster management we will use Salt.

The setup looks like this: in the middle we have an edge gateway which is connected to the Internet on one side and to the Kubernetes cluster on the other side. It has a local registry from which the cluster can pull the container images. The edge gateway can fill the local registry either via the Internet, if there is a connection, or from a USB stick. Then there is a load balancer, there is a Salt master, and there is kubicd, our daemon that coordinates the deployment of Kubernetes with the help of kubeadm on the cluster. It is also used to manage the host OS. The communication with the front-end is via gRPC, which allows encrypted communication with certificates and role-based access control.

So, we will use Raspberry Pis for the cluster and deploy Kubernetes on them. We don't need an external network, but we will have a correct clock on all Raspberry Pis so that the certificates from Kubernetes work. It's usable even without a monitor, and we will use OpenVINO for AI, which is based on OpenCV and the DLDT (Deep Learning Deployment Toolkit).

The hardware is a nice small tower of four Raspberry Pis, each with four gigabytes of RAM. We have an LCD display with four lines of 20 characters, which shows the status and IP addresses. So if you are in a different environment, you only need to look at it to see under which IP you can reach your cluster. And it has a real-time clock, so the Raspberry Pis always have the right time.

The first Raspberry Pi is our edge gateway. We will use an openSUSE MicroOS container-host image for it, which will be personalized during the first boot with the help of Ignition and Combustion. It has two network interfaces, an internal and an external one. The LCD display is connected to it and shows the IP addresses and status of the cluster. All services run as containers on this node, and Chrony is used as time server: if the network is online, we synchronize the time via the network; otherwise we use the local external real-time clock. And of course, the Salt master needs to run on this node so that we can orchestrate everything.

For the Kubernetes nodes, we will use openSUSE Kubic images. openSUSE Kubic is our certified Kubernetes distribution; as host OS, openSUSE MicroOS is used. The initial setup will also be done with Ignition. Ignition will set up the Salt minions and make sure that they connect to the Salt master, and it will use Chrony as a client.

For the containerized workload on the edge gateway, the most important part is the local registry. This allows us to deploy the cluster even if the network is flaky, and we can specify exact version numbers of the container images we have tested and provide them via a USB stick and similar things. Then there is the Repository Mirroring Tool, RMT. If you know SUSE Linux Enterprise Server, then you most likely already use RMT or its older version, SMT. This allows updating without an external network; it can also be filled via the network or a USB stick, and you can select the RPMs that should be there. So you test the RPMs in your data center with your QA, and if they really fulfill your requirements, you put them into the tool, and then all the nodes will update themselves from it. There is a name server, to be independent of the network, and a DHCP server, to automatically set up the rest of the cluster. Optionally, there is a web server so that you can provide the configuration files for Ignition via the network and not via a USB stick or something similar. And there is a Squid proxy so that you can allow access from the cluster nodes to the internet via access control lists and similar things.
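The talk doesn't show how these services are started; since they all run as containers on the gateway, a minimal sketch with podman might look like this (image names and paths are placeholders, not the exact ones used in the demo):

```
# Local container registry, backed by a directory on the host.
podman run -d --name registry -p 5000:5000 \
    -v /srv/registry:/var/lib/registry \
    docker.io/library/registry:2

# Name server with its configuration mounted from the host
# (the image name is purely illustrative).
podman run -d --name dns --net host \
    -v /etc/named:/etc/named \
    example.org/images/bind:latest

# Generate a systemd unit so the service comes back after a reboot.
podman generate systemd --new --name registry \
    > /etc/systemd/system/container-registry.service
systemctl enable container-registry.service
```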
The Raspberry Pi CPU is most of the time fast enough for simple desktop environments, but if you do AI and machine learning on it, this CPU is clearly too slow. So we will use the Intel Neural Compute Stick 2 to speed this up. And there you need to consider some things. One is that you need to teach Kubernetes about the special hardware resource so that it will schedule the right containers on these nodes, and not too many containers needing this resource: if you have only one stick, don't schedule two containers that need a stick on one node, but make sure there are enough nodes in the cluster which each have a stick and can then run the containers.

There is a Kubernetes device plugin available for this USB stick. Unfortunately, the driver is binary-only and only available for Intel architecture, not for ARM. So we hacked the Myriad plugin, which is open source, and separated it into two parts: there is a tool that boots the device when you plug it in, and there is a plugin part which does not boot the device but only accesses it. The reason is that, for some unknown reason, this USB stick cannot be booted from inside a container.

To show you the difference between using only the ARM CPU of a Raspberry Pi 4 and the Neural Compute Stick 2: CPU-only, the benchmark tool from OpenVINO reports about 7.5 frames per second; if you use the USB stick, you get 263 frames per second on average. That's a huge difference and worth the investment in special hardware. So at the end of the demo, I will show you a classification example: I will upload a picture to the cluster and get feedback on what is most likely to be seen in this picture.

In short, these are the steps I will present in the demo. First, install the edge gateway and load balancer on Raspberry Pi 1, and also install the container services and the Salt master on it. Install openSUSE Kubic on Raspberry Pis 2 to 4, initialize the Kubernetes master node, and add the worker nodes. I will show that it's really working with a hello-kubic test demo. Then I will install the Intel Neural Compute Stick 2 on Raspberry Pi 4 and run OpenVINO to classify an image.

And now let's start the fun part and do the demonstration. The first Raspberry Pi I want to install is the edge gateway and load balancer, because on this machine we will later run the name server, the DHCP server, and the local registry mirror, which we need to automatically set up the rest of the Raspberry Pi cluster and the workload. Now we only need to create an image for a USB stick which personalizes and configures the machine at the first boot. This includes installing the missing packages and setting up the network and the password so that we can log in over the network.

We use Ignition for this. The easiest way to get a configuration file is to write a YAML file. In this case, we lock the password so that you cannot log in via a password, but only with an SSH key over the network. We create some files for the network configuration and the hostname: eth0 is the internal cluster network interface, eth1 is the external network interface over which we can connect to the cluster. lcd-netmon is the utility which displays the current IP addresses of the network interfaces on the LCD display. And last, we create a configuration file for Chrony to use the local hardware clock and not the network, because we don't have a network yet, or the network is unreliable and not always there; so we don't want to trust the network but provide our own real-time clock. Since Ignition itself cannot directly parse YAML but needs a JSON file, we create that next.

Ignition is not able to install packages, and since MicroOS has a read-only root file system, it's also difficult to install them during the first boot without an additional reboot. That's why we use Combustion for this. We have a script, called 'script', and it contains the 'combustion: network' directive, so that Combustion will set up the network for us, and then we can install the missing packages we need for our gateway.
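The script itself isn't shown in the recording; a minimal Combustion script along the lines described might look like this (the package list is an assumption, not the demo's exact list):

```
#!/bin/bash
# combustion: network
# The magic comment above tells Combustion to bring the network
# up before running this script on the very first boot.
set -euxo pipefail

# Install the packages missing from the plain MicroOS image
# (these names are illustrative).
zypper --non-interactive install salt-master dnsmasq haproxy

# Leave a marker so we can see on the console that Combustion ran.
echo "Configured with Combustion" > /etc/issue.d/combustion
```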
Now we only need to insert the USB stick into our Raspberry Pi and boot it for the first time. After our edge gateway has booted, we will configure it and copy all the data needed by the services that run there onto the host. We log in for the first time, so we need to confirm the host key to make SSH happy. We copy the bind, DHCP, and HAProxy configurations to the host. Additionally, there is a registries.conf file which tells CRI-O, which we use for Kubernetes, where to pull images from. In this case we map the official kubic and opensuse namespaces from registry.opensuse.org to the local registry. This file will be copied to the Salt directory so that we can later distribute it to all Kubernetes nodes in the cluster.

Now let's log in. As you can see, root has no password, so we cannot log in with a password on the console, only via SSH key over the network.

Let's set up the local registry. At first we copy the default authentication and configuration files to a local directory so that we can adjust them for our needs. Then we need to create a password. In this case I will only use "admin", because this is quite simple, and from a security point of view it doesn't matter here because nobody except me has access; but in your environment you should of course use something more secure. So we adjust the configuration file with the password. Now, what is this file for? We have an ACL list: admin, if authenticated via the password, is allowed to do everything, which also means pushing new images to the registry. Everybody is allowed to see the catalog and to pull images from it without a password, so that everybody in the cluster can pull images, but nobody except admin can replace them with something different.

Now, we have a second network interface and no name server running yet, so we need to add the second IP to the /etc/hosts file. Let's start the container registry. Since no certificates are provided by me in this case, we generate self-signed certificates and start the containers. This will take a little bit longer, since we have to pull them from the network. podman ps shows us that they are running, and if you list the catalog, you see that you can access the registry on localhost and that there is no image in it yet.

There is a nice tool which scans the official registry.opensuse.org and uses some regular expressions to create a YAML file that contains only the newest version of each image in the official kubic and opensuse namespaces. We use skopeo to sync the images from that registry to the local registry. In this case we have network access, so we can pull them directly from the official registry and push them to the local one; for this we need to provide the admin account and the password "admin". If you don't want to have network access, or don't have network access in your edge area, you can also run the command on another host in your data center, copy all the images to a USB stick, go with the USB stick to your edge machine, and then use skopeo to copy the container images from the USB stick and push them to the local registry. Now you can see that skopeo is pulling one container image after the other and pushing it to the local registry. Now all images are copied to the local registry, and if you list the catalog, you will see them.
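As an illustration, a single image sync with skopeo, directly or via a USB stick, looks roughly like this (registry address, image name, and tag are placeholders):

```
# Direct sync: pull from the official registry, push to the local one.
skopeo copy \
    docker://registry.opensuse.org/kubic/kured:latest \
    docker://registry.local:5000/kubic/kured:latest \
    --dest-creds admin:admin --dest-tls-verify=false

# Offline variant: export to a USB stick in the data center ...
skopeo copy docker://registry.opensuse.org/kubic/kured:latest \
    dir:/mnt/usb/kured

# ... and import from the stick at the edge site.
skopeo copy dir:/mnt/usb/kured \
    docker://registry.local:5000/kubic/kured:latest \
    --dest-creds admin:admin --dest-tls-verify=false

# Verify: list the catalog of the local registry.
curl -ks https://registry.local:5000/v2/_catalog
```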
Now we can configure and run the name server. We need to tell the name server where to find the configuration file and start the container with the name server, and tell our local network configuration to use the local name server. As the next step, we configure our DHCP server: eth0 is the internal network interface where the DHCP server will listen, and we start the service. And the last step is to start the Salt master. The Salt master needs to run on the host OS so that all Raspberry Pi cluster nodes can automatically register with it, and we don't need to log in to the cluster nodes to do anything; we can do everything remotely via Salt.

For the nodes we will use openSUSE Kubic, which is openSUSE MicroOS with Kubernetes on top of it, the certified Kubernetes distribution. It will be configured in the same way as MicroOS, because it is MicroOS underneath: so we have the password and SSH configuration and the network configuration, and we configure the Salt minion. All Raspberry Pis boot, configure themselves, and register with the local Salt master. We need to accept them at our Salt master; this is manual for security reasons, so that an intruder cannot register their own Salt minion and listen to what we do or send their own commands.

Since we use a local registry with a self-signed certificate, we need to distribute the certificate to all hosts in the cluster. We do this via Salt, so that we don't need to copy it to every single machine; Salt will do it for us. We update the CA certificates; normally this should be done automatically by systemd, but to be sure, we do it additionally on our own. Then we copy the registries configuration file for CRI-O and restart CRI-O everywhere, so that it really uses our local registry and not the one on the network that it cannot reach.

We also need to initialize kubicd, the daemon we use to coordinate the kubeadm calls on the different nodes via Salt, so that we can bootstrap a Kubernetes cluster without having to log in on every machine and copy and paste the correct kubeadm commands. We need HAProxy, and now we initialize the node rpi4dev2 as the Kubernetes master. This will take some time; it's a single-master setup with Weave as the pod network and a containerized control plane. The Kubernetes Reboot Daemon, kured, is used by us to do controlled reboots of the whole cluster. We add the two remaining nodes and join them to the Kubernetes cluster. Now we deploy MetalLB as load balancer, and we have a fully working Kubernetes cluster.

Finally, to see that the cluster is really up and running, we deploy a hello-kubic example, and as you can see, Kubernetes is up and running. Some more information about the cluster: kubectl get nodes shows our nodes, all Ready. You can see the three hello-kubic pods and services and the full list of everything running on the cluster.

Since the ARM CPU is too slow for such AI tasks, I plugged an Intel Neural Compute Stick 2 into one of the Raspberry Pis; in this case I used node rpi4dev4, and you can see it's plugged in. Since we cannot boot this USB stick from inside the container, we need a small utility which does that for us outside of the container; there is a service which takes care of it. Now the stick is booted, as you can see here. Now we only need to teach Kubernetes that there is a special node with a hardware extension, and for this we need to tell it about the hardware. So the script patches the node information in Kubernetes and adds a resource, NCS2. The most important part is the resource definition here: it makes sure that there is exactly one container per stick, because it's not possible for two containers to use the same stick at the same time.
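The exact patch is not shown on screen; the standard Kubernetes way to advertise such an extended resource on a node and then claim it from a pod looks roughly like this (the resource name intel.com/ncs2, the node name, and the image are assumptions for illustration):

```
# Advertise one stick on the node, via the API server
# (kubectl proxy makes it reachable on localhost:8001).
kubectl proxy &
curl --header "Content-Type: application/json-patch+json" \
     --request PATCH \
     --data '[{"op": "add", "path": "/status/capacity/intel.com~1ncs2", "value": "1"}]' \
     http://localhost:8001/api/v1/nodes/rpi4dev4/status

# A pod that claims the stick. Because the node only advertises one
# intel.com/ncs2, the scheduler will never place two such containers
# on the same stick.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: openvino-classify
spec:
  containers:
  - name: openvino
    image: registry.local:5000/demo/openvino:latest   # placeholder image
    resources:
      limits:
        intel.com/ncs2: 1
EOF
```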
Now it's deployed. As you can see, there's an OpenVINO deployment running on node rpi4dev4. We have a script, classify-image, which sends an image to the cluster, and that's the output of the container.

Okay, this concludes our demo. The results show a 67% confidence factor that the image that was submitted is in fact a sports car. We're ready to take your questions now. This video will be available as a download, and there's a link here that shows where you can get all the files that were used to create this demo. Thanks for attending, and we look forward to your questions.