So, hello everyone, I'm Charalampos, or Babis, from Nubificus, and today we'll give you a short presentation about our project on providing a serverless framework based on unikernels.

First, a small introduction to serverless computing. As you may already know, it's an event-driven execution model which allows microservices, bundled in containers, to be deployed as stateless functions, either on the cloud or at the edge. As I said, most microservices are bundled in containers, which allows for easier management and orchestration of the workloads, and we also know that containers solve the problem of software delivery: they are very easy to manage, and they do that without adding much performance overhead or memory footprint. But, as we know, containers are based on operating-system-level virtualization, which means that all the workloads, all the containers, share the same kernel, and this is a very important issue, especially for isolation. To address that issue, what usually happens now is that we deploy the containers inside virtual machines, and this creates some side effects, because VMs do not have the same instantiation time as containers, and they increase the footprint of the workload. So we see the emergence of lightweight virtual machines and lightweight virtual machine monitors, like QEMU microVM or Firecracker, which provide a sandboxed execution environment for containers while minimizing boot times and memory footprint.

So our goal is to build a serverless framework for the cloud and the edge, and we base our solution on unikernels. What we want to do is minimize the instantiation time, and we do that with a very lightweight hypervisor that we built.
We minimize the attack surface by bundling the functions inside unikernels, and we provide orchestration compatibility with existing popular container runtimes.

So, first, how should a lightweight hypervisor look in the serverless case? The key points for such a hypervisor are: it has to provide cold boot times that are as fast as possible; it has to provide strong isolation, for example using hardware extensions as KVM does; an idea could also be to minimize the mode switches on VM exits so we can have faster I/O; and we can also minimize the ABI in order to strengthen it and present a smaller attack surface. All of that also has to be suitable for edge devices, which are coming into service at this time.

Let's take as an example a generic virtual machine I/O request, for example a network packet. We have the application that makes the network request. The packet, as we know, travels from the application to kernel space inside the guest, through the guest network stack, and then the guest kernel traps. This creates a VM exit, which wakes up KVM on the host to handle the trap. KVM doesn't know what to do with this trap, so it hands control to the virtual machine monitor in user space, which then forwards the packet to the host kernel in order to send the network request.

So we propose Hedge, which is a hypervisor for the edge, and what we mainly do is try to minimize the mode switches on the host between the kernel and user space. Instead of the packet travelling from the guest to the host kernel, then from the host kernel to user space and back to kernel space, we simply move the virtual machine monitor inside the kernel. And of course we have to do that in a very minimal way, to keep it as simple and as small as possible.
So, to do that, we also use unikernels as guests, which are able to provide stronger isolation. In case you are not familiar with unikernels: unikernels are specialized, single-address-space images which are constructed using library operating systems. A library operating system is a type of operating system architecture that turns every component of an operating system into a library: for example, a driver, the user space management, and so on. So what happens with unikernels is that the application is linked against these libraries, and this forms an image that consists only of the application code, its configuration, any runtime the application might need, any libraries, and of course the operating system components that glue all these things together.

With that design, unikernels are able to have a very, very small memory footprint, they can achieve very fast boot times, even the same as containers or faster, and they can provide strong security at two levels: first, they benefit from the isolation provided by the hypervisor, and second, they have a very minimal attack surface.

On the other hand, as you might already know, unikernels provide a completely different execution environment, which makes porting existing applications and libraries very difficult. Also, there is no support for hardware acceleration, and since they are not containers, but very lightweight VMs with only one function, they also create some challenges regarding their orchestration. In the case of hardware acceleration, the challenging thing is that the existing frameworks are very big and not easy to port, and it's also very difficult to access hardware acceleration devices like GPUs and FPGAs, which require, for example, a driver to be ported inside the unikernel.
So what we do is decouple the function code from its hardware-specific implementation with vAccel. vAccel, as you might have seen yesterday, is a simple library that consists of a static or user-defined API that interacts with the application; glue code; the plugins that we see, for example, here at the bottom, which are just glue code for the hardware implementation; and the main component, vAccelRT, which is a multiplexer for the requests that come from the application, mapping them to the respective hardware implementation. So it's more like a remote execution API for VMs.

In the case of orchestration, as we know, most of the frameworks that already exist are tailored for containers, and what we need to do is integrate unikernels into that model. So we bundle the unikernel binary with all its dependencies in a container image, and later we unbundle the unikernel binary from the container image and spawn it. We also built from scratch urunc, a Kata Containers-based runtime which is able to spawn unikernels. And last, we also need the interface that will interact with the serverless gateway in order to invoke the function or get metrics from it. For example, if any of you is familiar with OpenFaaS, there is a component called the watchdog, and what we do is port this functionality inside the unikernel, so this snippet will then call the function that is bundled inside the unikernel.

In this diagram, we see what we want to achieve: we want to allow the user to deploy their functions either in a simple container, as a sandboxed container, or even as a unikernel, from a unified framework like Kata Containers using Kubernetes.

So, to sum up: we built a lightweight serverless framework and based it on unikernels, stripping down the virtual machine monitor and moving it inside the Linux kernel. We use unikernels to bundle the functions that the user wants to deploy.
We provide hardware acceleration; as far as we know, there is no other hardware acceleration support for unikernels right now. And we also enable the deployment of unikernels on the edge. Both our projects, Hedge and urunc, which are the main components, are works in progress. vAccel is more stable, and you can easily try it out whenever you want. I also need to say that this project is partially funded by SERRANO and 5G-COMPLETE, which are Horizon 2020 projects. Thank you for your time, and we'll be very happy to answer any questions you might have.

Q: How stripped down is the Linux kernel?
A: As far as you like, but it's still very much inside: same memory manager and so on.

Q: Do you still have the separation of user space and kernel space in that?
A: In unikernels? Yeah, that's how you do it now.

Q: That would be a nice unikernel if you could make it, actually.
A: This is an open issue right now, how we can have Linux compatibility in unikernels, for example. I think there are some projects that are trying to achieve that. And then it's more like, let's say, a bit more overhead because of the mode switch between user space and kernel space. Is it really necessary if you have only one application? Do you really need that? That's the notion.

Q: Do you have any benchmarks of real-world applications, how they compare between unikernels and containers?
A: Yes, there have been quite a lot of papers, for example on Unikraft and Solo5; and Rumprun, Unikraft, OSv, these are the most popular, let's say, frameworks for unikernels. And they are able to achieve much faster network I/O and much faster boot times, instantiation times, compared to containers. I don't have any numbers right now, but I can point you to these papers, if you want.

Q: [question about hardware access] Sorry? Yes, that would be an option, but if we have pass-through, then we limit how many guests we can have, right?
Like, we cannot share the same resources easily with pass-through. But that's also a case, and then you don't even get live migration: you are dependent on it. Yes, exactly. Thank you again for your time.