Hi, Joel. It was difficult to be here, but I am here. So let's talk a little bit about this topic. Let's say this is a kind of conclusion of my investigations while writing a book, where I experimented with a lot of technologies. I'm Sergio Mendez. I used to do some research at the university, and I also work at a company called Yalo as a DevOps engineer. So let's get into the topic. Why serverless? Well, I know this is a conference about Knative, but it has a specific use case for the edge. Knative has different components — maybe some of us have tried these features, maybe not — but Knative has two main features: Knative Eventing and Knative Serving. Serving is essentially like Lambda functions from AWS, functions as a service, but this time at the edge. Basically, we are going to create functions running at the edge. I am using a Raspberry Pi for this demo, and the function I use to insert data into a database is going to scale down to zero. In general, when you use serverless at the edge with Knative, you reduce the amount of power your devices use — less power consumption in general: less power, less CPU, less RAM. That's the deal, because at the edge the devices may not have enough power; maybe they are running on batteries, or a solar panel, or that kind of element to power the device. The challenges at the edge are things like power consumption; you have limited resources — RAM, storage, CPU — and there is a lack of standards for building distributed systems, because in edge computing you are distributing the data across different places, in the local environment or maybe the cloud. And because ARM processors don't use a lot of energy, they are good processors to use at the edge.
That's why ARM is becoming pretty popular at the edge and for these specific use cases involving hardware. Explaining a little bit of the evolution of the server: we used to use virtual machines on bare metal, installing VMware and things like that. Then things started to move to containers — Docker was a big explosion, and people started using containers. Then Kubernetes, to create distributed systems using containers and to manage and orchestrate those containers. And the next evolution of these technologies is to not have control of the servers at all — just deploy small functionalities, just a function. Though this time we are creating a kind of container as a service, more or less. What is serverless in general? Talking about solutions, we went from monolithic applications to orchestrating containers with Kubernetes — because Knative runs on Kubernetes, and let's say Kubernetes is the framework for creating distributed systems; that's the point of view of this presentation. And finally, the next abstraction on top of this distributed platform, Kubernetes, is to create functions as a service, where you still have some control of the bare metal but you are simplifying life for the developers. Over time, the services in the cloud went from infrastructure as a service to container as a service — like Cloud Run or something similar; as you know, Google has services based on Knative to deploy functions on demand, that type of service. And the next kind of evolution is an abstraction away from the hardware, where you start losing control of the hardware and things become as simple as functions as a service. So that's the evolution in general.
You are near the hardware, but you are starting to lose control of it; you add more abstraction to your solution in order to simplify life for the developer. That's the way to create functions on demand. Now, maybe this is a new topic for some of us: edge computing. It's a pretty simple, basic concept — you do the processing near the source of the data, near the edge. So there is hardware involved where you are processing the information. In this presentation, I have a remote control here that I am going to press, simulating that I am at the edge layer, processing information at the edge, and then I am going to show some graphs about the data I am collecting. So that's it in general. Edge computing has different layers. You can connect the information that is processed near the edge — near the source of the data, which in this case is a Raspberry Pi — with the cloud, though that may not apply in some specific use cases. In this case I am keeping everything at the edge, but the idea is that you can process all the information, transform it, and maybe publish the final result in a database in the cloud, to share reports across different countries, or whatever you want — or connect different pieces of hardware across the world through the cloud. So in general you can see four layers in edge computing. The tiny edge is where the sensors live — the sensors that capture the data. In this case I am using an infrared remote control; it represents my sensor living in the tiny edge. So when we talk about the tiny edge, we are talking about sensors. The far edge is the place where the data is processed.
So the sensors send the information to the far edge. In this case, that is represented by the Kubernetes cluster on my Raspberry Pi — this remote control is going to send information to my cluster. Then the near edge is basically a layer to send the data to the cloud. In this case the information will stay local, but if I wanted to share my reports in other places around the world, I could deploy something like a Grafana deployment on Google Cloud or whatever cloud provider you want. Here I am going to do things locally. The last layer is the cloud layer, but remember the cloud could be a public cloud or a private cloud — in this case let's say it's private, it's local. It depends on your use case. As examples of the different pieces you can find at the edge, there are your sensors and your edge devices. And with Kubernetes at the edge using K3s — I am going to talk about that — you can make things as simple as you want. So what is K3s? Maybe some of us don't know what K3s is. K3s is a Kubernetes distribution that has removed all the things you don't need, in order to reduce the processing power and memory consumption, and it packages all the basic functionality — everything you need — in one binary. It needs more or less 500 megabytes of memory, and the binary itself is really, really small. K3s works like any other regular Kubernetes distribution: you can configure your cluster with one node, or as a multi-node cluster with a master and workers, that kind of structure. In this case I am using a single-node configuration. Knative, in turn, is the tool — the piece of software — that creates the abstraction to run the functions on my hardware in the K3s cluster. And it gives us a kind of event-driven architecture, and the power to do other things.
I'm going to talk about that. Knative Serving is the feature that gives us the abstraction to run the functions on my hardware. Then Knative Eventing: when you are creating a distributed system — a distributed system could be how a social network, or something like Netflix, is built — there are no standards; people program in whatever language is useful for them. When you use Knative Eventing, Knative proposes using CloudEvents. CloudEvents gives you the structure for your functions — a kind of standard for how to call your functionality. So CloudEvents brings structure and order to your system. I think it's not always easy to understand, but in the end you are giving order to your system, so things are easier in the future. It's the kind of standard that distributed systems are lacking; that's the reason to use CloudEvents. What about Istio? A lot of people know Istio, but if you are running things at the edge, it's too much — it uses a lot of CPU and RAM, and I think it's not right to install it on a Raspberry Pi, for example, because you have to reduce resource usage. Knative supports different service meshes or proxies for this functionality; in this case I am using Contour, because Contour has support for ARM devices at this moment. I don't know if the others have ARM support, but I am using Contour in order to run my things on ARM. So when you use a really lightweight Kubernetes distribution, K3s, plus Knative, you have an easy way to create an edge computing system that processes information near the edge. That's the reason to combine Kubernetes and Knative without using a lot of resources.
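To make the CloudEvents idea concrete, here is a minimal sketch, using only the Python standard library, of how a device-side script could send an event to a function over HTTP in CloudEvents "binary" mode. The `ce-*` header names come from the CloudEvents HTTP protocol binding; the URL, event type, and payload shape are hypothetical, not the actual ones from the demo.

```python
import json
import urllib.request
import uuid

def cloudevent_headers(event_type, source):
    """Build the required CloudEvents v1.0 'binary mode' HTTP headers."""
    return {
        "ce-specversion": "1.0",
        "ce-type": event_type,          # what happened
        "ce-source": source,            # where it happened
        "ce-id": str(uuid.uuid4()),     # unique per event
        "Content-Type": "application/json",
    }

def send_event(url, event_type, source, data):
    """POST an event payload to a function; wakes it if scaled to zero."""
    req = urllib.request.Request(
        url,
        data=json.dumps(data).encode(),
        headers=cloudevent_headers(event_type, source),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Hypothetical usage:
# send_event("http://led-api.default.example.com",
#            "dev.example.button.pressed", "/sensors/ir-remote",
#            {"color": "green", "state": 1})
```

The point of the standard headers is exactly the "order" mentioned above: every function in the system can be called the same way, regardless of the language it is written in.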
So, my demonstration — let me move to the presentation. Okay, here it is. Here is the desk, and here is the Raspberry Pi. On this side is the master — well, it's a single-node cluster, this one, the last one. This other Raspberry Pi simulates the edge device that captures information and sends it to the master node, which is running on the far edge. So let's say this one is the far edge, and this one, with my remote controller, is the tiny edge. I am not using the cloud layer — everything is private. Then I connect my device running at the edge, capturing information. Let's see what is going to happen here. Okay, here are some LEDs — I think it looks better in the mirror here. I have three LEDs; I think you can see the three colors, okay? The yellow LED shows that my main program is running at the edge — I am not running it in a container, it's just a pure Python script. So the yellow LED shows that it's running. The green LED means I am pressing the green button on my remote controller — here it is. And if I press the red button, it's going to turn on the red LED. So let's restart the thing, okay? When I press a button, a function is created on demand, as you just saw. After two minutes, or a minute and a half, it's going to scale down to zero. Right now this thing is going to capture the information and show a graph in a dashboard here. And here is the proof that I am running a master: this is, let me see, kubectl get nodes, just to show information about the cluster. As you can see, it's using a Raspberry Pi kernel — it's running at the edge, that's the proof. Here I am running the script. Let's power this thing off and create it again.
It's running. Right now, let me see — the red LED is on. Where is the control? Oh, here it is. Okay, let me press my control. Oh, right now it's terminating the function because I am not using the functionality, so it's scaling down to zero. In that moment you are saving processing resources. So it's in an idle moment, and now let's say an event happens and I want to process the information. Okay, let's press the red button here, and the function is created on demand, yeah? And then it shows up in this graph in the dashboard. So let's play a little bit. Let me show this screen, maybe here. Okay, let's put the thing here — I think that's okay. So let's press — ah, you can see my finger here. The green value is going to change when I press green. Let's wait a few seconds. Ah, wait, I have to point at my device there. Okay, green. Okay, let's go to red. Let's power the thing off — let's say one is on, zero is off. Let's push the green again. Let's push the red again — if I press red several times, it's reflected in the dashboard several times. You can see there are a lot of reds, a lot of greens. And let's turn the thing off. And now, if we wait, it's going to scale this function down. Let's wait a bit there and continue with the code. What happens in the code? Well, the code is a pretty basic example. I think some people like to play with Kubernetes and create crazy things just as a hobby — but while doing things as a hobby, you can discover things that solve real problems. That's the cool stuff. Right now Raspberry Pis are expensive, but you can use other devices similar to a Raspberry Pi; there are other options. So here is some pretty basic code. Let me see — no, it's still running, but it's going to scale down to zero.
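As a rough sketch of the device-side logic just described — a button press toggles an LED and produces the payload that gets sent to the function — here is a hedged, standard-library-only Python version. The pin numbers and payload fields are hypothetical, not the demo's actual ones; on real hardware the commented lines would do the GPIO write (with a library such as RPi.GPIO) and the HTTP POST that wakes the scaled-to-zero function.

```python
# Sketch of the edge-device loop: an IR button press toggles an LED and
# builds the event payload that gets sent to the on-demand function.
# Pin numbers and payload fields are hypothetical.

LED_PINS = {"green": 17, "red": 27}  # hypothetical BCM pin numbers

def handle_button(color, led_state):
    """Toggle the LED for `color`; return (pin, new_state, payload)."""
    if color not in LED_PINS:
        raise ValueError(f"unknown button: {color}")
    new_state = 0 if led_state.get(color) else 1  # 1 = on, 0 = off
    led_state[color] = new_state
    payload = {"color": color, "state": new_state}
    # On real hardware:
    #   GPIO.output(LED_PINS[color], new_state)
    #   then POST `payload` to the function URL (Knative scales it up).
    return LED_PINS[color], new_state, payload
```

Keeping the hardware calls behind a single function like this also makes the logic testable off-device.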
So it's just connecting to my MySQL database, inserting data, and that's it. If we explore the dashboard, it's just a SQL query here — just selecting from a table. Everybody has used MySQL at least once, so it's pretty basic to create that query. And that's it — with that, you have all the power to do this. Right now it's terminating the function because I am not using it, and it's going to scale down to zero. So, zero pods there. Let me see — it's still deleting the thing. It's a really basic example. Here is a container that runs my metrics, because the metrics part is running there. Let's explore the dashboard a little bit — I think I have the whole architecture here. Let's explore that. Okay. So, a diagram to explain the thing: the edge device receives information from the infrared remote controller and turns the LED on or off; it runs that logic in Python. The single-node cluster receives that information through a request to an API — this one, created with Flask, a pretty basic Python example — which runs as a service in Kubernetes. Let me see, where is it? Okay, here it is. Then this function inserts the information into MySQL, which is also running at the edge. So my Raspberry Pi is running MySQL, Knative, and the functions — all the things. And everything runs locally; I am not using the internet, I didn't deploy anything in the cloud. If you can see, there is a switch here connecting all the Raspberry Pis — pure Ethernet connections. Let's continue. Okay, that's basically my architecture. This API was created on demand using Knative Serving, and all the information in MySQL is shown with Grafana. And this is how it looks right now — prettier. It's just a simple circuit.
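As a hedged sketch of the function side of that architecture — the Flask API validates the payload and inserts a row into MySQL — here is the core logic in standard-library-only Python. The table name, column names, and SQL are hypothetical stand-ins, not the demo's actual schema; a real version would run inside the Flask handler and execute `INSERT_SQL` through a MySQL driver.

```python
# Sketch of what the on-demand function does with an incoming event:
# validate the payload and produce the parameterized INSERT for MySQL.
# Table and column names are hypothetical, not the demo's real schema.

INSERT_SQL = "INSERT INTO led_events (color, state) VALUES (%s, %s)"
# The Grafana panel would then be roughly:
#   SELECT ts, state FROM led_events WHERE color = 'red' ORDER BY ts

def insert_params(payload):
    """Validate an event payload and return parameters for INSERT_SQL."""
    color = payload["color"]
    state = int(payload["state"])
    if state not in (0, 1):
        raise ValueError("state must be 0 (off) or 1 (on)")
    return (color, state)
```

Using parameterized SQL (placeholders plus a params tuple) rather than string formatting is what makes even a toy insert like this safe to expose behind an HTTP endpoint.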
Maybe you remember this from studying engineering at the university — a pretty simple example, turning LEDs on and off. So, these are the slides. I want to share what was difficult about doing this kind of presentation, and why: the internet connection — I don't know what the connection is like here — and that's the same challenge you have in edge computing systems. How do I manage my IP addresses? That's the reason I bought a switch that cost me maybe $10, more or less. I have both the wireless connection and the Ethernet connection, but I only use the Ethernet connection, with static addresses, because Kubernetes needs a static and reliable network configuration. The only thing that changes is my wireless connection — that can be any IP address, whatever. I tested this a lot, because I failed in a previous presentation where I didn't take these kinds of changes into consideration. But as you can see, Kubernetes — in the form of K3s in this presentation — can manage these kinds of challenges. So it's a little bit tricky to perform this kind of presentation, but it runs. And that's the way edge computing works: you have to connect the device and it has to run. That's the challenge. Here is the repo I am using for this demo. It's just the source code right now — I still have to finish the instructions, but they'll be ready tomorrow. And here is my personal information, if you want to contact me or share your experiments with me, just for fun. So, if you have questions, or you are curious about anything, you have the floor. Do we have any questions? Did you do any measurements to see how much power was saved during the demo? I think I can show something. Let me see — I think I have something here. Let me find it. It's not this part — okay, this command. Knative, like Kubernetes, has two modes.
You can do things imperatively or declaratively with the kn command line. In this example I am using the imperative way, but this command can output the YAML to create the function. It's pretty similar to the structure of a regular Kubernetes deployment, with other options like the amount of time before scaling down to zero, that kind of thing. So I think I can add that kind of YAML file to the repository too. In terms of electricity, like how many watts or amps do you save? I think that, for a Raspberry Pi specifically, you can get that information using some libraries that measure resources — CPU and that kind of thing. So you can take into consideration how many watts per hour your setup draws and, using the time, multiply to calculate the consumption. For example, from the device's data you have the amount of watts per hour, and you can measure the CPU and the memory and play with those numbers. So it sounds like the main purpose of this is just to make things easier for development? I imagine it's always going to be more expensive in terms of resources to run Kubernetes, as opposed to just writing a native app for the Raspberry Pi. I think Knative does make things easier at the edge. Talking about standards: you have to think about how to call this function, what it can do. And if you adopt CloudEvents, which is the way to do things in Knative, maybe the learning curve is higher, but in the end you have order, and it simplifies the way you call things. So in the end it's going to help you. I think it can simplify things.
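For reference, the declarative equivalent mentioned in the answer — the kind of YAML that `kn service create` can output — looks roughly like this. This is a hedged sketch, not the demo's actual manifest: the service name and image are hypothetical, and the two annotations shown are standard Knative autoscaling knobs that keep scale-to-zero enabled and control how quickly an idle revision is considered for scale-down.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: led-api                                  # hypothetical name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"   # allow scale to zero (the default)
        autoscaling.knative.dev/window: "60s"    # autoscaler averaging window
    spec:
      containers:
        - image: example/led-api:latest          # hypothetical ARM-compatible image
```

Applying this with `kubectl apply -f` gives the same result as the imperative `kn` command, but you can keep it in the repository and version it.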
I think that Knative, because it's functions as a service, is more developer-focused, instead of you doing and managing the infrastructure, because it's serverless. So in general it's going to help you. In the real world you have a team — maybe there is a DevOps engineer and you are the developer, so you are not going to touch the infrastructure; maybe you only use the kn command line and the YAML files, and that's it. So in the end it will simplify your life. Sometimes there are use cases where data scientists or other scientists want to implement an edge computing system, and in the end they ask a DevOps engineer or someone like that to do things for them. But I think it's a pretty basic structure compared with other kinds of tools. OpenFaaS, let's say, is different. But Knative gives you the event-driven part, which is pretty commonly used in edge computing — if you use a tool like OpenFaaS, you are missing the event-driven structure. So I think Knative gives you enough tools to create a system, and to simplify, or spend less time thinking about how to build it. Yeah, I have another question: have you considered using Knative Eventing too on the edge, like a broker, since you're event-driven? Well, Knative uses a broker behind the scenes. Knative Eventing is the other feature, for creating event-driven architectures. You can use Kafka — well, Kafka, I think, is too heavy for the edge. Knative has a broker that is integrated inside; I don't remember the name right now, but it's in-memory. Yeah, the in-memory one. So it's pretty lightweight, and you can use it at the edge.
But let's say this is just an experiment at the edge. If you want to create something bigger, maybe you can use another broker, like Kafka or NATS, or maybe you can create your brokers in the cloud, just so the service stays reliable, and complement things with the hardware. So you can do a lot of things — it depends on your use case. But of course, you can use brokers. Any more questions? All right, thank you, Professor. That was brilliant. And I wanted to remind everyone to please keep voting in the Sched app. Someone asked me earlier where they can access the slides — the speakers will be updating the slides in the Sched app itself, so there should be a PDF available to download, maybe today or tomorrow. Right, so let's move from edge to nodeless. Our next session is nodeless.