Hi, welcome to this session. My topic is building a multi-node cluster with K3s using ARM devices; in this case I am going to use Raspberry Pi devices.

A little bit about me: I am an operating systems professor and a cloud native enthusiast. I really love cloud native technologies, not just Kubernetes. I also do some DevOps and Java work; we are building some chatbots using Facebook and that kind of thing. I am also a Linkerd ambassador, and right now I am writing a book about edge computing using K3s.

Maybe you are asking why you would need to build a multi-node cluster using ARM devices. The reason is edge computing. Edge computing refers to where the data is processed: the goal is to process information near its source in order to reduce the latency of moving it between servers in the cloud and the client. If you process the data near the edge and then send it to the customer, you reduce that latency. Companies are starting to adopt edge computing to reduce latency, speed up processing, and improve the user experience of their products.

Edge computing is usually described in terms of layers where the information is processed, and there are four basic layers. In the tiny edge you find the sensors; the devices collecting data are called edge devices. These devices send their information to the far edge, and the far edge is where we are going to build our ARM cluster using K3s. This cluster processes the information, and if some of it needs to travel to the cloud, it passes through the near edge and finally reaches the cloud layer. For example, with the data you collect from sensors you may want to create a machine learning model, but the processing is a bit intensive, so you train it in the cloud; the edge cluster can then download the model and serve it to your clients or customers without much latency. In the cloud layer you find the different cloud providers, or maybe on-premise or private clouds. Those are the basics of edge computing.

Some example use cases for edge computing: smart farms; small smart gardens (maybe you want to build one in your house with an alert system); federated learning, where you pick up data from different sources and then create a machine learning model from all that information distributed across different locations; geolocation applications or geotracking like Waze, or something more custom for your business; and predictions and forecasting with machine learning, among other kinds of applications. As I mentioned, companies are starting to migrate to this philosophy of processing information near the edge to improve the user experience.

So, if you are already using containers for your system, the way to build a distributed system with this edge computing mindset could be Kubernetes.
Kubernetes can be your platform to create this distributed system using containers, and it gives you control over basically everything you need: it orchestrates your containers and manages your deployments, your network, your storage, and everything else required for your edge system. K3s, on the other hand, is a Kubernetes distribution oriented to work on the edge. It can help you connect things to the cloud, the IoT things, or process data near its source.

K3s ships as a single binary, or let's say two binaries, one for the server and one for the agent, containing the essential components to run a fully functional Kubernetes cluster. Some components were removed, not because they are missing but because an edge environment is resource constrained and you may not need every service a regular Kubernetes cluster runs; if you need one of them, you can integrate it yourself. That is why K3s is so small. It also has a cool feature: pluggable datastore backends. It is not just etcd; SQLite replaces etcd by default in K3s, and you can also use MySQL, even deploying the backend in the cloud. Say you use Cloud SQL on Google Cloud, connected to the cluster to store all the configuration instead of etcd: that can be useful for high availability in some way, but you need a solid network connection, because the internet link becomes your point of failure.

K3s also includes Traefik as the default ingress controller, but you can install NGINX ingress and others; just keep in mind, if you are using ARM devices, that whatever you pick has to support ARM. You also get a Helm controller by default inside the cluster, so instead of installing things with the Helm command line, K3s has its own Kubernetes object for charts. It uses Flannel for networking and containerd for containers. K3s is also a CNCF sandbox project.

Talking about ARM devices: they have a nice balance between processing power and energy consumption. Intel has a lot of power, but that increases energy consumption, which is the reason smartphones use ARM so much; ARM has that balance, and that is why it is often used for edge computing systems. In this case we are going to use Raspberry Pi devices with ARM processors. You can currently choose between different versions of the Raspberry Pi; we are going to use the 3B and the 4B. The 3B supports ARMv7, which means it runs 32-bit applications, and ARMv8, for 64-bit applications. We are going to use ARMv8 (64-bit) on the Raspberry Pi 4B. You can also run different distros on these boards: Raspberry Pi OS (Raspbian), Ubuntu, Alpine, and others. We are going to use Ubuntu because it is more standard, something you would find on a production cluster rather than Raspbian, and that can give you some advantage. But you can use whatever distro you feel comfortable with. Alpine is really minimal but needs a lot of extra work; Ubuntu needs a little, but not a lot.
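As a rough sketch of that external-datastore idea (the address and credentials here are placeholders, not from my setup), a K3s server can be pointed at a MySQL backend with the --datastore-endpoint flag:

    # Hypothetical example: run the K3s server against an external MySQL
    # datastore instead of the embedded SQLite (placeholder credentials).
    curl -sfL https://get.k3s.io | sh -s - server \
      --datastore-endpoint="mysql://k3s:secret@tcp(10.0.0.5:3306)/k3s"

If that MySQL instance lives in the cloud, remember the caveat above: the internet link becomes the cluster's point of failure.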
So what do you need to build your multi-node cluster? You need fast microSD cards; the microSD card is the hard disk of your Raspberry Pi, so look for the ones with the best read and write speeds. You need Raspberry Pi Imager or balenaEtcher to flash your distro onto the cards. Maybe you want to buy a cluster case for your Raspberries, plus a small switch to connect them to the network, and an internet connection.

It will look something like this. Let me show you how the cluster looks in real time; I have it connected here, and you can see my hand. It has some switches: the switch in the case, a switch to turn the Raspberry Pis on and off, and maybe an HDMI output to see what is happening on a board. It is a pretty cheap configuration for building your cluster.

Keep in mind that you have to build all your applications to support ARM. Don't worry if you are using containers: locally you can build multi-architecture images with Docker buildx, and the right tool depends on how you deploy your things (see the sketch after this part).

For the cluster configuration you can have a single-node cluster, which probably covers the majority of edge computing cases, or a multi-node cluster to process more intensive workloads near the edge. A multi-node K3s cluster uses a master; the worker nodes are called agents in K3s. Here you can see how it looks when your cluster is working on the far edge: the tiny edge has your edge devices with sensors that send information, and the edge cluster in the far edge interacts with the cloud layer and the different cloud providers.

The software we are going to install is K3s, MetalLB as the bare-metal load balancer, Longhorn, and maybe the NGINX ingress controller. The challenges of using this kind of technology: you have to learn how to communicate the sensors with the edge devices and which libraries and languages to use; the durability of the sensors and how they perform indoors or under drastic weather conditions; your software has to support ARM if you are using these devices; and how to reduce the cost of powering them.

So now let's create the cluster. This diagram represents the cluster you saw on my camera. I have a local private network with a switch; some IP addresses are just for reference. The master has the .11 address, .12 is agent one, and .13 is the last agent. That is our network, so let's proceed. The boards have Ubuntu installed, 21.04 or 20.04 or a later version, don't worry. The nice thing about installing Ubuntu is that it gives you ARMv8 support; sometimes Raspbian doesn't detect or support that, although the Raspbian images are continuously being updated, so you can try them. In this case we are using Raspberry Pi 4B boards, two with four gigabytes of RAM and one with eight gigabytes, plus fast 32-gigabyte microSD cards. To prepare each device you install Ubuntu, configure the network, and enable cgroups on the Raspberry Pi by passing some parameters to the kernel.
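As a hedged sketch of that cross-building step (the image name, registry, and Dockerfile location are placeholders), a multi-architecture build with Docker buildx looks roughly like this:

    # Hypothetical example: build and push one image tag that runs on
    # ARM (Raspberry Pi) and x86 nodes alike.
    docker buildx create --use
    docker buildx build \
      --platform linux/arm64,linux/arm/v7,linux/amd64 \
      -t myrepo/myapp:latest \
      --push .

The --push flag sends the multi-arch manifest to a registry, and each node then pulls the variant matching its own architecture.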
You just add a small snippet at the end of the single line inside the cmdline.txt file, and you configure the hosts. With that, all the nodes are configured.

So let's move to the practical part. We are going to use this small script to install the master node. It sets the IP address for the master, it leaves Traefik installed by default, and it disables the built-in ServiceLB because we are going to use MetalLB to provide load balancers. Right now I have everything prepared, so let's connect to the master node. Let's set this variable for the master and run the first line to create the master node installation. Just let me fix my chair. OK, now it is installing. One line to install everything; you can even use tools like k3sup to install K3s and bootstrap your whole cluster. Let me check something... OK, I think you can see everything here, and it looks good.

Now let's extract the token that will be used to connect the agent nodes. I have to add sudo to this line. I am going to copy this token, because it is needed to join my nodes. Now we can list the nodes we have so far. There is machine one; the master is called machine one, and the other nodes will be machine two and machine three.

Let's move to the first agent. Let's set this variable, and the master IP in another variable; this is the IP address of the master node. Now let's run this line. It joins the node: it starts the K3s agent and connects it to the master so the node appears in the Kubernetes node list. Only one line to install the whole thing. I haven't set up SSH keys for passwordless login here, just a simple installation using a regular password, but you can do it that way; it would be more secure. Let's see what happened: kubectl get nodes. Machine two is being processed, and when it finishes it will show Ready here. There, it's ready.

Let's move to the last agent node. Here is the master node part, as I showed; now we are in the agent part. I connected to the master node, extracted the token, and now I am connecting and creating the agents; I am at this step. So let's create this one: set the master IP, create the token variable, and run the last one-liner. Good, it is running. I have a pretty decent internet connection too, and I think everything is working.

Let's move back to the master node. By default, the master installation with the command line options I mentioned installs kubectl. Now we wait for the third node... it is ready and running, which is awesome.

Now let's move to the next part, installing MetalLB. We create the namespace, then a ConfigMap that holds the configuration for where the load balancer addresses come from. If you remember, we are using this network, but we reserve just this range of IP addresses, between .240 and .250, for the load balancers. So let's run these commands to install MetalLB, because with ServiceLB disabled, K3s on its own can't provide LoadBalancer services.
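Putting those steps together as a hedged sketch under my setup's assumptions (the 192.168.0.x addresses match the diagram, the token value is a placeholder, and on Ubuntu the file is /boot/firmware/cmdline.txt):

    # Enable cgroups: append to the existing single line in cmdline.txt.
    cgroup_memory=1 cgroup_enable=memory

    # On the master (assumed IP 192.168.0.11): install the K3s server,
    # keeping Traefik but disabling ServiceLB in favor of MetalLB.
    export MASTER_IP=192.168.0.11
    curl -sfL https://get.k3s.io | sh -s - server \
      --node-ip $MASTER_IP --disable servicelb

    # Extract the join token for the agents.
    sudo cat /var/lib/rancher/k3s/server/node-token

    # On each agent: join the cluster with the master IP and the token.
    export MASTER_IP=192.168.0.11
    export TOKEN=<token copied from the master>
    curl -sfL https://get.k3s.io | \
      K3S_URL=https://$MASTER_IP:6443 K3S_TOKEN=$TOKEN sh -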
MetalLB is a bare-metal load balancer service. Let's create the namespace, then the ConfigMap. This part is pretty important: if the ConfigMap is not in MetalLB's namespace, MetalLB is not going to work. And last, we install MetalLB itself, which reads that ConfigMap. Now we have to wait a little bit.

While we wait, let's continue with Longhorn. We are going to install Longhorn this way; if you need ReadWriteMany support, you have to install the nfs-common package on Ubuntu on all the nodes, and finally you can check the configuration. Afterwards we will test a simple deployment to show that everything works. Now let's install Longhorn here; just one line to install the thing, and it deploys Longhorn. I think this cluster is more than enough: two nodes with four gigabytes and one node with eight gigabytes is a lot of power here, but it is really cheap to implement this kind of cluster. Everything is working; we can see the different namespaces it creates. Longhorn is here, and it shows the different deployments that are running; later you can use the storage classes Longhorn installs. It is installing things, so you have to wait, and everything takes some time. This is just for demo purposes; we are only installing Longhorn, not actually using it in this demo.

Let's see... I think everything is working, so let's move to this step again. We are going to run this just to test the configuration. Let me show you: let's create a simple NGINX deployment; it is going to be provisional. Then we expose the deployment, and it creates a LoadBalancer with an address from the range we defined in MetalLB. We are going to access it on port 8000 (see the sketch after this part). So let's check: get the deployments, and the NGINX one is up. Let's see the services: there is a service of type LoadBalancer with this IP and this port. Let's open it here... I think it is this port. No, sorry, this IP address. Let me complete it here. And it's working. So there you have it: a bare-metal LoadBalancer service from MetalLB, running on K3s, and everything works. It is pretty easy to prepare your devices to run K3s.

Some ideas you can explore: you could connect an Arduino to your Raspberry Pi to read from sensors, which is something I am working on right now, or you could use ESP32 devices. Maybe you want to explore the LoRa wireless protocol; it is really nice because of its low power consumption, and you can send data over two to ten kilometers, so it could be really interesting to explore. It is something the IoT world is moving toward right now. And maybe you want to investigate different architectures for interacting with the cloud providers: which services are better deployed in the cloud, which ones near the edge on your edge cluster, and so on.

There are some resources you can explore by yourself: the official K3s page, Rancher, the official documentation, and the slides. If you want to look at my slides, they are served from the edge, from my Raspberry Pi; here is the link, you can copy it. I think that's it; thank you very much.
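For reference, here is a hedged sketch of those MetalLB and test steps. The subnet is an assumption based on the .11/.12/.13 node addresses, and this uses the older ConfigMap-based MetalLB configuration (newer MetalLB releases configure pools with IPAddressPool resources instead):

    # MetalLB layer-2 address pool (ConfigMap style, MetalLB <= v0.12).
    # The 192.168.0.x subnet is assumed from the node addresses above.
    kubectl create namespace metallb-system
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 192.168.0.240-192.168.0.250
    EOF

    # Quick test: an NGINX deployment exposed through a LoadBalancer
    # service on port 8000; MetalLB assigns an IP from the pool.
    kubectl create deployment nginx --image=nginx
    kubectl expose deployment nginx --type=LoadBalancer --port=8000 --target-port=80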
And now maybe we can move on to the questions. Thank you very much. And of course, you can follow me on social networks; here are my email and my Twitter, so follow me on Twitter. Let's move to the questions. Thank you very much for this event about edge computing and Kubernetes. Bye, people.