Firstly, I'd like to say thank you for allowing me to present today. My name is Alexander — I'm Alex — and today I'll be talking about just-in-time Python on FaaS, functions-as-a-service platforms, with Unikraft. First, a little bit about me: I'm a PhD student at Lancaster University, just over the Channel in the UK, and co-founder and maintainer of an open-source project, Unikraft. I'm occasionally on Twitter, naturally on GitHub, and professionally on LinkedIn. The premise of today's talk — and of how Python is involved in everything here, and how functions as a service is developing in the 21st century — is higher cluster utilisation with decreased operational expenditure. So to break this down: we want to spend less to do more. We want to run our microservices in the cloud, we want that to be cheap, and we want to be able to run more of those same services on less hardware — fewer physical resources, fewer virtual resources. This is a pretty common theme; I think we can all agree on it, and you can see a lot of things in the developer space moving in that direction. So that's the premise of today's talk, and I'm going to go into one strategy for achieving this in the cloud. Let's talk about the bloat problem that applications have in the cloud. The problem is that you have your application that you want to put into the cloud, or run as an edge service, and it's comprised of your Python app — but it's also comprised of the Python runtime, the container runtime if you're running things with containers, and it could also just be the process manager of the operating system. You can't quite see it on the slide, but it does say "operating system" underneath.
So you actually have a very big stack. And if you've built an application and you're using libraries, you usually have a good idea of what's in those libraries, or at least of the functionality that's going to be exercised in your application — which means there are actually quite a lot of things in the typical deployment that aren't being used at all. For example, you might never actually use SSL in your application. Not that you shouldn't — you probably should — but you get the premise: you might not be doing any encryption-type function within the application you're building. And it applies to any other type of operation: all these other standard libraries that you might be aware of, or might want to use, would still be available in most deployment scenarios. They're typically also very much available within the operating system: the shared libraries are almost always there, and the kernel almost always has functionality, part of the system, that you just can't really remove. So when you look at your actual application, it's doing what you want — great, fantastic — but there are libraries sitting there that are going unused. And this is part of the premise: the problem with the cloud is that you want to get more use out of the hardware and the virtual resources you're running on, but you're packaging in things you don't need. So today I'm going to talk about one way to approach this problem and remove these things. And this is where I get to introduce you to unikernels. This is something that I've been researching for a while, something that I work on pretty much every single day. It's an amazing technology that I really believe in, and I'm very honoured today to be able to introduce you to the topic.
Maybe you've heard of it before, but if you haven't, I'm very honoured to be able to tell you about it. The unikernel model is a way of looking at the full stack of your system. Think about your application at the top. This application has third-party libraries that you're aware of — in Python, you're importing things, so those are those libraries — but there are also libraries within the operating system itself. These are shared objects, like libssl or anything else. Then you have the monolithic kernel underneath, which facilitates the runtime of everything else, including your application, but also maybe other applications, other runtimes, etc. That finally sits on top of a platform — a hypervisor, if it's deployed in the cloud; it could be Xen, it could be KVM — and that in turn sits on top of hardware. So we look at this whole model and we break it down, because when we break it down and introduce the idea of a library operating system, we can start to pick and choose exactly what we want to run. And this is what a unikernel is. You take all the different components that you do need — and not the ones you don't — and through a process of compiling, linking, etc., through this build process, you get a unikernel. A unikernel is a bespoke image — in the context of the cloud, a virtual machine — and you can also deploy it at the edge. You can put it on a Raspberry Pi, for example, and that Raspberry Pi would only ever do that one thing. It has only the application-specific libraries and the kernel-specific libraries; it's targeted directly at that platform, and it has only the code necessary to run on that particular piece of hardware.
By contrast, most kernels ship as architecture-specific binary images, but they usually carry additional code to allow for different platform runtimes. Okay, so this is the model that Unikraft offers. Let me go through some of the key characteristics of unikernels. They're a form of compile-time specialisation: you can think of it like a DevOps pipeline, where you have your application, you're bundling it with pip and pulling in all the dependencies, and finally you build it — and the output is the final kernel image. You're not taking that image, shipping it, and putting it on top of something else; it is the final artefact, in the sense that it is going to be running as a virtual machine. They're very lightweight — I'll get into comparisons of just how lightweight a little later. They have the unique property of a single shared address space. In most monolithic kernels, user space and kernel space are two different spaces, which results in privilege checks — do I have permission to read this file, for example? That's a check, because they're part of two different memory address spaces. In a unikernel, the two are one and the same, because you know your application wants to read that file, or wants to perform that particular operation; it's allowed to do it because everything has been bundled together. So there are no syscalls — a syscall being open, socket, close, etc. These are usually a barrier, a check that costs, I think, around 300 CPU cycles, of "can this user-space program do this? yes or no" — it returns an error or it doesn't, and then continues. Because that syscall is now a function call, it's more like four CPU cycles: it just hops to the functionality it needs to go to.
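You can get a crude user-space feel for how expensive kernel round-trips are, compared with staying inside a single address space, with a quick shell experiment. This only contrasts spawning external processes against a shell builtin, so it's an analogy for the syscall-versus-function-call gap described above, not a measurement of the 300-versus-4-cycle numbers:

```shell
# Analogy only: each /bin/true iteration forks a process and crosses the
# user/kernel boundary many times; the ':' builtin stays in user space,
# much like a unikernel turning a syscall into a plain function call.
start=$(date +%s%N)
for _ in $(seq 1 200); do /bin/true; done
extern_ns=$(( $(date +%s%N) - start ))

start=$(date +%s%N)
for _ in $(seq 1 200); do :; done
builtin_ns=$(( $(date +%s%N) - start ))

echo "external commands: ${extern_ns} ns, shell builtin: ${builtin_ns} ns"
```

On any Linux box the external-command loop comes out orders of magnitude slower, which is the same shape of overhead — boundary crossings — that unikernels eliminate.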
A unikernel has no other functionality that isn't necessary: there are no daemons running in the background, no system libraries, and no SSH, for example — a big attack vector for a lot of virtual machines is that SSH is just left open. So those are some of the characteristics of unikernels: naturally platform- and hardware-specific. Unikraft, which I'd like to talk to you about today, is a library operating system and a unikernel development kit. It's open source — you can find it at unikraft.org, and we're also on GitHub, so please do check it out. We have a lot of CLI tooling written in Python, so hopefully it fits the theme of what this conference is all about. Also, so you know, we are a Linux Foundation project and a Xen incubator project, so we have quite a good standing in the community as well. Speaking of community: Unikraft has a footing in a lot of academic institutions, so we put out publications once in a while that do a lot of measurements and explore certain properties of the library-operating-system model. Some of these are with university partners, and in them you can see the methodology and why we're approaching library operating systems the way we are. And I think for every paper we've published with experiments in it, those experiments are also open source — you can check them out on GitHub, run the same experiments, and see how they were performed and why, when we do comparisons, we get the same, different, or better performance, security, et cetera. We have quite a big, active community of contributors; on a day-to-day basis we see new contributions, and sometimes people add amazing new features out of the blue. It's really amazing to watch, and we've been steadily growing over time. So if you could star us on GitHub, that would make this chart better, please. Yeah, okay.
We also have a very good Discord. The community is very active and there are a lot of conceptual channels, so if you're into operating-system stuff, there are a lot of students on there, and professors with a lot of experience are mentoring as well. We talk about how best to implement, for example, a particular internal library — there's a lot going on at the moment around SMP, for example: what are good abstractions for SMP that are platform-independent? It's really amazing to watch, but that's our Discord channel. Okay, the next few sets of slides basically show how Unikraft compares to other existing technologies. Unikraft has provided better performance compared to other unikernel projects — several others exist that we're aware of, and they have their own unique ways of approaching the library-operating-system problem. But this graphic also shows a comparison against Linux, both bare metal and as a guest. So this is bare-metal Linux, just a user space — in this case it's Nginx we're running on Linux — and we perform a throughput and latency test on Nginx. Then we run it inside a virtual machine, and we also try different platform implementations; Firecracker is on there too. When it comes to memory consumption and storage, because you're only using the minimum amount of resources actually necessary to run your application, we find we can use far fewer resources to run the same application compared to Docker and compared to Linux as a micro-VM. And we really tried to squeeze it — we tried to use the minimum amount of resources possible, so we really deleted things inside Linux, for example, to make the experiment as fair as possible.
And because it's such a well-defined, well-compiled source, and the resulting artefact is very minimal and very lean, we get quite good memory consumption. Here we do a comparison you can check yourself: if you pull the official Docker image for Python — the same version as well — the official one is about 335 megabytes. Then, trying to be fair, I pulled the Alpine version of the latest, which is 27 megabytes. But when you compile the Python interpreter against Unikraft, the final image — including the file system that contains the same Hello World program and all of the standard libraries that ship with Python — comes to 5.2 megabytes. Part of our team really loves diving into making things performant, and this is the latest result they were able to get. This was Nginx: they were able to show P99.99 latency, which is absolutely crazy — I've never seen it before — comparing Unikraft with Linux. I don't think the lines are showing up, but the difference between Linux and Unikraft is something like between 2 and 5 milliseconds. Comparing virtual machine monitors — QEMU, Solo5, which is historically a unikernel virtual machine monitor, and recently Firecracker as well — and measuring the boot time of the guest image: with Unikraft we've seen as low as 3.1 milliseconds booting the virtual machine image via these virtual machine monitors. If you tune your unikernel in a specific way, you can really exploit, for example, not having to initialise so much memory — you can do it statically if you know you're only going to use so much memory, which means you don't have to do things preemptively. And because there's no systemd, for example, there's no initialisation of additional services: it boots straight to main.
Okay, so because we're a library operating system, and because there are a lot of libraries within it, there are a lot of optimisations you can tweak throughout. You can choose different settings within different libraries to exploit, for example, the use case of your application. A couple of default high-level optimisation techniques you can run on Unikraft are link-time optimisation and dead-code elimination — and you can even combine them, and you can see we can reduce the image size even further. Which means that if you're transporting the image across a data centre, it's much less traffic on the wire. Because you're not bundling a lot of additional services — whether in a Docker container or a virtual machine — and because it's a unikernel and you don't have any of this other stuff, you have a much smaller, reduced attack surface. There's no shell, there's no concept of a user — no GID, UID, etc. — there are no background processes, and there are no additional ports you wouldn't be aware of. Because there's none of this, the attack surface is much smaller. And because you're dealing directly with the kernel, with your application sitting on top and the two tightly coupled and bundled together, you can exploit a lot of additional security properties — for example, address space layout randomisation. Arguably as well, if you do run things as a virtual machine, you're exploiting the lowest level of virtualisation, which is generally deemed the most secure boundary. Usually when you see deployments in the cloud, containers are deployed on top of a virtual machine anyway, so the virtual machine still represents the unit of security within infrastructure-as-a-service providers.
As a project we also have a whole bunch of other security features being worked on at the moment. This is a listed example — it's actually a little out of date; I just took a screenshot from our documentation — and several of these have since made their way upstream. We're constantly seeing new ways to increase the security of the project. So, what does Unikraft support? Python was actually one of the very first applications we got running on Unikraft, but we support a whole bunch of other languages, libraries, and applications. And this list is not exhaustive, because we're binary-compatible: if you have built something on Linux that is a binary, you can just load it into Unikraft and it will intercept the syscall instructions and turn them into calls into the kernel — that's how it talks to the kernel. We also support a number of different libcs. We have a built-in libc called nolibc, which is the minimal amount of libc we found necessary to get a kernel running. Out of the box we support Xen and KVM, as well as Arm — these are the built-ins: if you clone the repo, this is what you get by default, this is what you can configure. As a larger ecosystem, we support newlib, and musl is coming: we should have musl as a libc in, I think, September — so in about two months. That basically opens a lot of doors for us, because it's a much simpler and nicer implementation, at least in our opinion. And then there's ongoing support being built at the moment for Hyper-V and RISC-V. You can see here some screenshots of Unikraft booting and running on Hyper-V and on VMware. And then, out of the blue, an open-source contributor came along and said, hey, I got RISC-V working — and we were just like, awesome, okay, cool. So that's that.
And you can run it, of course, on a Raspberry Pi and others — you build against an abstract interface, so you can target any hardware platform. With regard to integration: you might think, okay, it's a kernel — how do I work with a kernel? This is outside my comfort zone, et cetera. But we have tools and services that make things much easier, so it doesn't seem so scary. I know that when I started, I was like, oh my God, what am I doing with a kernel? So first of all we have VS Code integration. You can already find it on the VS Code marketplace. You load it up, and when you're building a project, the explorer sidebar shows all the different third-party libraries you might want to add to your unikernel. So if you're importing something from pip and it complains that something is missing — where you'd normally do apt-get install whatever-dev for the headers — this is the equivalent, but through VS Code. I'll skip through this due to time. Our main tool is called kraft. It's written in Python, and not only is it a CLI tool, it's also an API, so you can import it and build unikernels programmatically. It lets you manage multiple libraries from different sources — git repos, et cetera — and manage their versions. You define your unikernel through a specification file, a YAML file, that just says: look, I want to target x86 on KVM, and these are the libraries I need — most likely you'll need newlib and lwip, the standard libraries for libc and networking. And then you just call configure and do menuconfig. In fact, I can quickly show you with a short demo.
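A minimal specification file along the lines just described might look like the following. The field names are reconstructed from the talk's description (a target architecture and platform plus the required libraries), so treat the exact schema — and the CLI flags in the comments — as assumptions and check the Unikraft repo's README for the authoritative version:

```shell
# Sketch of a kraft specification file; the schema is an assumption
# based on the talk -- consult the Unikraft/kraft README for the real one.
cat > kraft.yaml <<'EOF'
specification: '0.4'
unikraft:
  version: stable
targets:
  - architecture: x86_64
    platform: kvm
libraries:
  newlib:
    version: stable   # libc
  lwip:
    version: stable   # networking stack
EOF

# With the kraft CLI installed (pip3 install from the kraft repo), the
# workflow described in the talk is roughly:
#   kraft configure    # resolve libraries for the chosen target
#   kraft menuconfig   # optional: the interactive terminal UI
#   kraft build        # compile and link the final unikernel image
#   kraft run -k <image> ...   # boot it, with the flags shown in the demo
```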
Here I have a Python application. There's the kraft file that you see here, and then I have a file system that I've made through virtualenv. If I tree fs0, you'll see you have activate, et cetera, you have the pip binary; here I have my hello_world.py program, and then all the standard things that come in through pip. Then I can do kraft menuconfig — this is once kraft is installed — and it opens up something like this. If you've ever built a kernel, it looks a little bit similar: a very simple terminal user interface. Here I can just pick the libraries I want, and down here I can even go in and customise the Python 3 build itself, so I can choose whichever extensions I would like. And then it's just a case of kraft build. Or maybe I should show you what the kraft file looks like first: here I just specify config options, things I would like. If you follow the tutorial — it's all in the repo, in the README — you'll see it's just a couple of installs, a few CLI calls, and then this is all there already; I've just done a fresh clone. You do kraft build — or maybe I should do it the fast way; you never saw that, that's because I hit Ctrl-C — and then it just zips through and compiles all the stuff. These are all the libraries I needed to build Python 3 as a unikernel, and out should pop a binary — a binary you should be able to boot as a virtual machine. So here it's linking — linking the extensions. kraft run, as you can guess, is a script that's part of the install: if you do pip3 install on the kraft repo, it gets installed as well, because it's part of the setup tools. Then I'm just pointing to the kernel with -k, and I'm passing in a file system — if you've ever done docker run with -v or -e, it's the same — and I'm giving it 128 megs of RAM.
And then I'm saying hello_world.py is my first positional argument. So — well, that was the boot. It boots the unikernel, and this is hello world. I can then just change the hello world, do the same thing, and you'll see it there. Yay, okay. So this is just the installation of the dependencies you need to get everything running — it's part of the tutorial in the documentation — and there's a pip3 install pointing at the repo where kraft lives. I'll skip through this. We also have Kubernetes integration, which is where functions as a service comes in. I'd like to talk about another one of these problems: the number of abstractions you see in a typical deployment. This is a typical function-as-a-service deployment with OpenFaaS on Kubernetes, through faas-netes. You have two system pods running on top of containerd, then you have a faas namespace also running on containerd, and then you have faasd, a binary that invokes the different functions you'd like to run. This is how a request comes in and jumps through all these different services. And the problem is that we have all these levels of virtualisation: you're sitting on the hypervisor, you've got the OS on top of that, and the OS isn't just managing containerd — it's also managing sshd, etc. containerd, faasd, whatever, are managing things on top of this, and then inside your service you might be managing even more stuff. So there are lots of levels of virtualisation, lots of abstraction, lots of slowness — the performance might not be so good, because you're going all the way up the stack. So what if you could run something as a virtual machine directly on top of the hypervisor?
You'd be using a lot less resource — the closer you are to the metal, the better quality of service you can provide. I'll skip this if it lets me. Yeah, okay, cool. Prometheus — I wanted to show that, as a kernel, we have integration with Prometheus, so you can see the runtime behaviour of your services; this is built in as a library as part of the deployment. And then debuggability. As a kernel, you might think, okay, wow, how do you debug a kernel? Well, you can do printf inside the kernel, just like in any other programming environment. There are some nice debugging libraries you can enable: in the menuconfig I showed you before, you just go to the debug section, enable all the debugging options, and then you start to see lots of verbose messaging as you boot. You can even enable unit testing, which we have inside the kernel, and you can also do profiling: we have a number of tools that let you see the runtime of your application on top of the runtime of the kernel, so you can see which system calls are being run there as well. Yeah, I think that's about it. This is the Unikraft project. Thank you so much for listening — I'll take any questions. I was told to say that the mic is over here if you have any questions.

Q: Thank you, that was a really great talk and a really great tool. My question is — I missed the first part of the talk — but is there actually a difference between user space and kernel space, as in a usual OS?

A: No, there's no separation.

Q: So this means everything runs in kernel space, right?

A: That's right, yes.

Q: Okay, so how do you leverage things like ASLR? Are you just basically putting things randomly in memory, as is usual with ASLR?

A: With ASLR, it's done at compile time, where we understand the application — in this case, that would be the Python 3 interpreter plus the rest of the kernel — and that's where it occurs.
Q: So is there some actual ASLR going on when you boot the application? Or, once it's compiled, is every deployment laid out the same?

A: That's a really good question — this is currently ongoing work, so it's best to have a conversation offline about it.

Q: Cool, thanks.

Q: Hi. I've seen you mention other unikernels — I wondered if you've compared it to Gramine, which used to be Graphene?

A: Oh, GraalVM? Gramine — no, we haven't.

Q: And have you ever considered using, for security purposes, SGX enclaves?

A: Yes — we have a Google Summer of Code student working on enclaves right now.

Q: Cool, yeah.

Q: Hello, great talk, very interesting. In the demo you showed that you were editing the Python file and running it without recompiling —

A: Yes.

Q: In which case, when do you need to recompile?

A: So you don't have to recompile the kernel for that. What I did there was use 9pfs. 9pfs is a network protocol that talks to the host, so the kernel reads a mounted volume — the file system — through that protocol. There are other ways to make that file system available to the kernel, for example through an initramfs: I could package the file system you saw, with the hello_world.py program, as an initramfs file and boot with that. And there are other ways still — you could package the initramfs with the kernel into, say, a QCOW2 image, or, actually, we have ongoing work to make this available as an OCI archive, so it's compatible with the rest of the Kubernetes and container space. We have a tool that makes this possible, where it looks like a container, it feels like a container, but actually it's a unikernel. We'll be releasing more of these tools, open source, in September.

Q: All right, thanks.

A: All right, thank you. Thank you so much. Thank you.