My name is Tapio Tallgren, and I'm doing this presentation together with my colleague Ferenc. Since you cannot see us, here are our pictures: Ferenc is the one in the left-hand picture with the green shirt, and I'm in the right-hand picture with the red shirt. In the left-hand picture you can also see some of the hardware we had in a demo last year at the Open Networking Summit in Belgium.

So what is Akraino MicroMEC, first of all? Akraino is an open infrastructure and application blueprint collection: Akraino itself consists of a number of different projects, and MicroMEC is one of those projects. On this page I have a few links to Akraino and also to LF Edge; Akraino is part of the LF Edge umbrella project. Akraino had its third release last summer, and we were proud to publish a blog post about MicroMEC in connection with it. There is also a longer presentation about the topic available on the Linux Foundation LF Edge website, if you're interested.

In the introduction to this presentation we talked about Junction and presented it as a use case for introducing the serverless platform into MicroMEC. Junction is one of the biggest hackathon events, at least in Europe: teams of people, usually students, write code over a weekend around a challenge in some application area; they stay up all night, write code, hang around with like-minded people, and at the end of course there is the award ceremony. Our challenge was related to smart cities, and in this picture you can see the miniature smart city we had built for the event. We had a remote connection to it, so there was a web camera through which you could see the city; we had miniature 5G base stations; there was a miniature Nokia campus, which you can see here; and there were LED lights that you could turn on and off over REST interfaces, as sketched below. So that was quite fun.
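Purely to make that concrete, here is an illustrative sketch of what such a call could look like. The host name, path, and payload format are invented for this example; they are not the actual API of the demo city.

```python
# Hypothetical sketch only: the endpoint and payload are invented,
# not the real REST API of the miniature smart city.
import requests

def set_led(led_id: int, on: bool) -> None:
    # The demo city exposed its LEDs over a simple REST interface;
    # a POST with the desired state toggles one LED.
    requests.post(
        f"http://smartcity.local/leds/{led_id}",
        json={"state": "on" if on else "off"},
        timeout=5,
    )

set_led(3, True)  # turn LED number 3 on
```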
The challenge, of course, was that participants had to be able to write code for the servers running in this smart city during a single weekend, so we had to figure out how to make it very easy to start writing code. That's why this serverless platform was useful.

So what is serverless? It's one of those terms that sounds a little fancy and perhaps mysterious, so here is a short description of what serverless actually is, and hopefully I can convince you that there is nothing magical about it. I'll start with a very old-fashioned picture of a bare-metal application: the application uses some libraries, there is an operating system, and the operating system is responsible for handling the hardware devices, such as the CPU and the NIC card in this picture.

If you move to a virtual machine, then instead of running your application on real hardware you run it on virtual hardware. You have your own operating system, which can be different from the host operating system, you can have your own libraries, and your application runs on top of that. There are benefits to this, and also some drawbacks. The obvious benefit is that you are completely isolated from the real hardware: if you run the virtual machine in a cloud, you have no idea what the real hardware is. You only see the virtual hardware, which you operate directly, and you are completely free to choose your own operating system.

With containers, the stack is split up differently. On the host side there is the physical hardware and the operating system, and the operating system creates the containers for you. The application developer is responsible for creating the application, obviously, but also for bundling the libraries and the specific library versions. The benefit is that the application has dependencies on particular libraries and library versions, and the versions you use can be different from the ones the host operating system provides; that gives you some freedom. And if you run the container in a cloud, you only worry about the application and the libraries.

With serverless, you only have the application, which sounds a bit fancy, but of course the truth is that the libraries, the operating system, and the hardware are still there somewhere; as the application developer, you simply don't have to worry about the libraries any more. You only write your application, which can be very simple and very short.

The benefits of serverless, or function-as-a-service, are that it can be very fast to develop, because you only write your function and don't worry about which operating system or libraries you are using. From a security point of view it's nice because the attack surface is small, and you are not responsible for updating the libraries; somebody else does the updates for you. It can also be small and fast, because many different serverless functions can use the same libraries, and those libraries can be mapped directly into operating-system memory, so you don't need to run separate copies of the libraries for each function. So it can be very fast and it can scale very well. On the other hand, not everything is possible with this kind of approach, and it does not improve performance: not only do you lose some optimizations that you might otherwise be able to do, there may also be some overhead involved.

To show very briefly what it looks like: the serverless, or function-as-a-service, infrastructure creates a REST interface for you, and what you implement is what happens when somebody calls that REST interface. In the bash example, when somebody calls the interface, they get the message "hello world" back as ASCII text. The Python example on the right-hand side is a little more complex: the caller of the REST interface is supposed to give a parameter, which can be "world", for example, and what he or she gets back is "hello" plus the original request that was sent. From this example you can also guess that there are some conventions you have to follow when writing serverless functions, and that there are language-specific templates that must be used.
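As a minimal sketch of that convention (assuming the classic OpenFaaS python template, where the request body arrives as a plain string), the Python example described above would look roughly like this:

```python
# handler.py — minimal sketch following the OpenFaaS python template
# convention: the watchdog passes the raw request body to handle() as a
# string, and the return value becomes the HTTP response body.
def handle(req):
    # Calling the function with the body "world" returns "Hello world".
    return "Hello " + req
```

The bash case works the same way conceptually: the classic watchdog passes the request body to the process on stdin, and whatever the process prints to stdout is returned as the response.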
Basically, what happens behind the curtains is that the infrastructure creates a container for you; it includes the code that creates the REST interface and connects the function called handle in the Python code to that interface. When a request comes in, the infrastructure handles it, parses it into a Python format, calls the Python function, and then returns the string back to the caller over the REST API. The serverless framework we are using is OpenFaaS, an open-source implementation of a function-as-a-service, also known as serverless, framework. From here I'm handing over to Ferenc, who will show you how MicroMEC works in a live demo, show how OpenFaaS works with an example, and talk a little about the next steps in the MicroMEC project. Thank you.

Hey everyone, thank you for watching our session, and thanks, Tapio, for the great intro. We have built a tiny MicroMEC cluster for this presentation. The cluster is formed of two Raspberry Pi 4 nodes running K3s, and we have also installed OpenFaaS on this cluster. First, I will demonstrate how the cluster boots up from a network server. After that, I will deploy and invoke a couple of functions in OpenFaaS. And last but not least, I'd like to say a few words about the future development of MicroMEC. I hope you will like it.

The timer starts when the nodes are powered up. The Raspberry Pis are configured to boot from a network server. In about 16 seconds, the network boot begins. The root file systems of the Raspberry Pis are mounted via iSCSI from the network server. It takes about 35 seconds until the nodes start replying to ping requests. After 60 seconds, the pods are starting in K3s. The full MicroMEC cluster boots up in about 1 minute 45 seconds.

Our MicroMEC demo cluster is up and running; let's see what we have here. Besides the standard K3s and OpenFaaS pods, we can see a pod called stream under the micromec namespace. One of the MicroMEC nodes is equipped with a Pi camera, and we have installed a streaming service on that node. We can check what the stream looks like via the browser. All right.

Now let's take a look at our OpenFaaS sandbox. As you can see, we haven't deployed any function yet, so let's go and get one from the default function store of OpenFaaS. We will deploy nodeinfo. In a few seconds we can see that the pod has started and has been assigned to one of the worker nodes, and the function also appears in our list here. If we click the name of the function, we can see that the status is ready, so it's ready to be executed. Let's do so. After less than half a second, the function has been executed on the worker node, which is a Linux machine with four cores and an ARM64 architecture: a Raspberry Pi 4 node. Brilliant.

All right, we have deployed a function via the web UI of OpenFaaS, so let's do the same now from the command line. I have created a small function that grabs a frame from the previously seen stream and decorates it a little bit. My function is called snapper, and I will deploy it now with the faas-cli command-line tool. That's it. The list shows that snapper is ready. Instead of invoking it here, this time I will open the URL of the function in a different browser tab. All right, here's the result: we have captured a frame from the stream, decorated it a little bit, and we can see the result just by executing the function, or by going to the function's URL.
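For the curious, a snapper-like function could look roughly like the sketch below, deployed with something like `faas-cli up -f snapper.yml` (file name assumed). This is a hedged reconstruction, not the actual demo code: the stream URL, the caption, and the choice of OpenCV are all assumptions.

```python
# handler.py — hedged sketch of a snapper-like function; the stream URL
# and the "decoration" are assumptions, not the real demo implementation.
import base64

import cv2

STREAM_URL = "http://stream.micromec:8080/stream.mjpg"  # hypothetical address

def handle(req):
    cap = cv2.VideoCapture(STREAM_URL)  # open the camera stream
    ok, frame = cap.read()              # grab a single frame
    cap.release()
    if not ok:
        return "could not capture a frame"
    # "Decorate" the frame with a simple caption.
    cv2.putText(frame, "MicroMEC snapper", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    ok, jpg = cv2.imencode(".jpg", frame)
    # Return the decorated frame base64-encoded so it survives a text response.
    return base64.b64encode(jpg.tobytes()).decode()
```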
With this, the OpenFaaS demonstration is over, and in the next slides I will say a few words about our ideas for the future development of MicroMEC.

Before talking about the future, let's do a quick recap of MicroMEC. In a network topology, MicroMEC nodes reside on the far edge or on the ultra-far edge. Physically, the MicroMEC nodes are installed on light posts, on buildings, or in moving vehicles. These nodes are connected to an IP network, and multiple nodes form a MicroMEC cluster. Each node may have access to different sensors, cameras, or other data sources. The sensors and other data sources are accessible via plugins that are implemented by MicroMEC or by hardware vendors. Plugins provide the flow of data and control messages between the actual hardware and the MicroMEC applications. The messaging is based on nodes. Messages are encrypted and signed to ensure security and high data integrity.

Depending on hardware adaptation and support, we distinguish between high-level and low-level plugins. A high-level plugin is used when the sensor vendor has abstracted access to the sensor by providing an API, such as HTTP REST or RTSP; high-level plugins only require container adaptation. An example of a high-level plugin is a weather station that already has an HTTP API. Low-level plugins are needed when the hardware is available via a kernel API or via some other proprietary interface; low-level plugins require both hardware and container adaptation. Examples are cameras that can be accessed via V4L2, or sensors accessible via Modbus.

Let's talk a few words about our future ideas for MicroMEC. We would like to implement more APIs for different sensors and cameras, and we would like to support AI and other workloads. For a long time, we have been looking for a permanent home for a public MicroMEC lab, which could be used for collaborative development and validation purposes. Currently, MicroMEC has support for Raspberry Pi 3s and 4s, but in the future we would like to enable MicroMEC on multiple different boards, such as the NVIDIA Jetson Nano. Last but not least, in connection with our public lab efforts, we would like to provide an OpenFaaS cloud for MicroMEC developers; this cloud would allow cross-compilation of functions for different hardware architectures.

If you liked our presentation and would like to know more about MicroMEC or about the team, please contact us via our GitHub project or look up our contact information on the Akraino wiki pages. Thank you very much.

Tapio, we have got a question from Walter regarding MEC-011. Yes, that's correct: we refer to the ETSI MEC specification, and it will be used for service discovery. It's currently still work in progress. And the hardware requirements for MicroMEC, a question from Macrez: we currently support Raspberry Pi 3s and 4s.