Welcome, everyone. The talk will be in English, so I'll speak English as well. We're here for a presentation about unikernel security analysis. Harry here wrote his master thesis about the subject, so he definitely knows what he's talking about. I have no idea what a unikernel is, but he promised me that he will start with the basics and then dive deeper in the course of the hour. I'm hoping to learn something new, which is always good. Thank you, everyone, and please welcome Harry.

Hello, everyone, and thank you for being interested in this talk. I see the room is full. This will be mainly about my master thesis at X41 and RWTH Aachen. Who am I? I'm studying electrical engineering and computer engineering at RWTH Aachen. I'm working at X41 as a working student, mainly doing pen testing and security research. I've also been a member of the Chaos Computer Club Aachen for around four years now; I've been an official member since the GPN four years ago, so I'm connected to this event. What is my motivation for a master thesis about unikernel security? When you open, for example, the Unikraft website, they state that Unikraft is a fast, secure, and open-source unikernel development kit. So security is a major claim here, directly behind fast. NanoVMs even goes a step further: they say that their unikernels have all the same security protections you would find in a Linux system, and more. So they claim to be even more secure than Linux. My master thesis was about unikernels, and I realized that there are multiple papers on unikernel performance, because everyone who writes a unikernel also does benchmarks and so on. But there are only very few papers on unikernel security independently analyzing the security claims made by the unikernel teams. None of the existing papers on the topic analyzes a unikernel written in a memory-safe language, and none of them provides an overview of the security features of the most popular unikernels.
So this is what I contribute with my thesis. What will I talk about today? First, I will give an introduction to unikernels, and an introduction to operating system security. I guess most of you already know about the security side, but I also want the people who haven't written an exploit yet to be able to follow the talk. Afterwards, I will present my results from analyzing RustyHermit, which is a unikernel developed at my university; this was the main part of my thesis. Then I will go into the analysis of other popular unikernel systems, discuss my findings, and draw a conclusion. So let's start with an introduction to unikernels. Many virtual machines are only used for a single purpose: often a VM is spun up just to be a web server, or a virtual machine is just a database server. So we have a lot of overhead, and a lot of stuff in the VMs that we don't need to fulfill the purpose the VM was created for. Also, cloud computing has been an emerging field for years now, so we need efficient and scalable systems to quickly spin up new instances when more users want to access a resource, and so on. So what are unikernels, and how can they solve this problem? Unikernels are based on a so-called library operating system and an application, and both are compiled together into one single-purpose, single-address-space image. We always create one unikernel for one purpose, which, together with the library operating system approach, allows us to have only the code in our unikernel that we really need for our task. And because we have a single address space, we do not have context switches and we do not have system calls: system calls become just function calls, which also makes it much more efficient. Here's an example from a MirageOS paper. As you can see, in a normal VM you have the hardware, the hypervisor, and then the operating system's kernel and a lot of stuff in between.
And here the MirageOS compiler puts everything together and compiles it into a specialized unikernel image, so we can get rid of a lot of the stuff in between. This also allows us to compile everything together and to optimize everything at compile time, which makes it even more efficient. So what are the security claims made by unikernels? The main claim is the reduced attack surface, because of the library operating system approach and the very minimalistic operating system: code which is not there cannot be exploited. This is one of the main ideas behind unikernel security. The second claim is increased security because of strong isolation: if we have one application and one unikernel per purpose, and they all run on top of a hypervisor, they are isolated from each other by the hypervisor. Now that we have understood what unikernels are, we will go into RustyHermit. This is a research unikernel developed at my university; it targets high-performance computing and cloud environments and is written from scratch in Rust. It allows applications to be written in Rust, C, C++ or Fortran. It is not used in production; it is still under development and for research purposes. So let's go more into the security topic. I will now give an introduction to operating system security. As I already said, for people who have already written exploits for memory corruption vulnerabilities this will not be that interesting, I guess, but I want everyone to be able to follow the talk. For operating system security, it is relevant to understand the difference between security in the sense of "my operating system does not have any vulnerabilities" and "my operating system provides security features in order to protect the application that runs on top of it". An operating system is a service provider, providing access to hardware resources, scheduling and so on. But security is also a service.
Our operating system cannot stop an application from having bugs, but it can make exploiting those bugs harder by providing security features. As you can see on this slide from a Microsoft presentation, around 70% of the CVEs fixed in Microsoft products between 2006 and 2018 were memory safety issues. This is why we will now concentrate on memory corruption vulnerabilities; of course there are other classes of bugs, but we will concentrate on these. I will now explain one of the most prominent memory corruption vulnerabilities, the stack buffer overflow, and how an attacker can exploit it. When a function A calls a function B, a stack frame is built up for that function. It is used to pass the function parameters, which are stored on the stack. When our function B finishes execution, the program flow has to go back to function A, to continue directly after the call to function B. This is done by also storing the return address on the stack when doing the function call. What is also stored on the stack are local variables, here for example a buffer of 8 bytes. The next line of code reads content into that buffer, and this is where our vulnerability is, because we don't do any length check here, so an attacker can just write more than 8 bytes. What happens then on the stack is: we reserved 8 bytes for this buffer, but an attacker writing more than 8 bytes can now overwrite the things that are stored on the stack behind this buffer. An attacker can, for example, overwrite some local variables, but an attacker can also use this to overwrite the return address with an address of their choice. When an attacker does this, then after finishing function B the program flow does not go back to where it came from, but goes to the address the attacker wrote into the return address field on the stack. What an attacker can also do is write exploit code into that buffer, and then let the return address point to the buffer.
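To make the layout concrete, here is a small self-contained C sketch of the situation just described. This is purely illustrative code I wrote for this write-up, not code from RustyHermit or any real exploit: a struct stands in for the stack frame (actually smashing a real stack is undefined behavior), with the saved return address sitting in memory directly behind the 8-byte buffer, just as on the stack.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for a stack frame: the saved return address
 * lives directly behind the 8-byte local buffer, like on the real stack. */
struct fake_frame {
    char buf[8];        /* local variable from the example */
    uint64_t ret_addr;  /* saved return address of the caller */
};

/* The bug from the slide: the length comes from the attacker and is
 * never checked against sizeof(frame->buf). */
void vulnerable_read(struct fake_frame *frame, const char *input, size_t len) {
    memcpy(frame->buf, input, len); /* overflows buf whenever len > 8 */
}
```

An attacker payload of 8 padding bytes followed by 8 address bytes fills the buffer and then lands exactly on `ret_addr`, so when the function returns, execution continues at the attacker-chosen address.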
So that after finishing function B, the program flow jumps to the code written there by the attacker, and so the attacker can take control of what is executed. Of course there are also much more elaborate techniques, for example return-oriented programming, but we'll concentrate on this basic mechanism here. So we have three requirements for a successful attack. The first one is that the attacker can write and execute exploit code. The second one is that the attacker knows the target address to jump to. And the third one is that there are no integrity checks on the stack, because if the application were able to detect that it was exploited, it could just stop execution. For each of these requirements there's a security mitigation provided by the operating system in order to hinder an attacker from successfully exploiting a vulnerable application. I will now talk about these security mitigations and give a quick overview of whether and how they are implemented in RustyHermit. Our first requirement was that the attacker can write and execute exploit code, and the mitigation for this is called the W^X policy. The idea is to make memory segments either writable or executable, but never both, so that an attacker cannot just write exploit code and then execute it afterwards. In RustyHermit, the stack and the heap are not executable, as it should be, but the code segment is writable. And of course it is executable by design, because the code in the code segment has to be executed. The impact is that an attacker who is able to exploit an arbitrary write vulnerability can just overwrite kernel code and take control of execution that way, which is not how it should be. The second requirement was that the attacker knows the target address, and the mitigation for this is called ASLR, or address space layout randomization.
The idea is, as the name says, to randomize the addresses of the application, the kernel, libraries and so on, so that the attacker just does not know the memory layout and does not know where to jump when overwriting the return address. This was a very quick analysis, because ASLR is just not implemented in RustyHermit, and as I said, the impact is that the attacker knows the target addresses, can easily exploit vulnerabilities, and does not have to use elaborate techniques to leak the memory layout first. The third one is the integrity check. The mitigation here is called stack canaries. The idea is to place a random so-called canary value on the stack, between our vulnerable buffer and the return address, because when an overflow of a local buffer takes place, this overflow is always linear: you can only write memory linearly when exploiting a buffer overflow. So the attacker first has to overwrite the canary value in order to reach the return address behind it. The program can then check the integrity of the stack canary, and if the canary was changed, the program can detect that it was exploited and just stop execution. In RustyHermit we have to differentiate by language. For C applications, the necessary library is not present in the build container, so when following the official build process it was not possible to use stack canaries. For Rust it is a bit more complicated, because the Rust compiler does not provide stack smashing protection by default, maybe because they say Rust is memory-safe by design, which is a little bit of a discussion, as we will see later. The impact here is that an attacker can just use a buffer overflow to overwrite the return address without leaking the canary value first. I said there are three security mitigations, but ASLR and stack canaries heavily rely on randomness.
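To recap the canary mechanism in code form, here is the same kind of hand-written illustration as before (a struct standing in for the stack frame; real canary placement and checking are emitted by the compiler, e.g. via `-fstack-protector`): the canary sits between the buffer and the return address, so a linear overflow cannot reach the return address without trampling the canary first.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative frame layout with a canary between buffer and return address. */
struct guarded_frame {
    char buf[8];
    uint64_t canary;    /* random value, placed on function entry */
    uint64_t ret_addr;  /* saved return address */
};

/* The check the compiler-generated epilogue performs: if the stored canary
 * no longer matches the expected value, an overflow happened and the
 * program must abort instead of returning. */
int stack_intact(const struct guarded_frame *f, uint64_t expected) {
    return f->canary == expected;
}
```

A linear overflow long enough to rewrite `ret_addr` necessarily rewrites `canary` on the way, so the epilogue check fails — which is exactly why the canary must be random: a value the attacker can predict can simply be written back in place.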
So if our random number generation or the handling of these random numbers is flawed, we have a serious impact on our security. So I decided to analyze random numbers as a fourth aspect. RustyHermit provides a pseudo-random number generator which uses the recommended constants and is seeded with a time value, which is fine for a pseudo-random number generator, especially as it also provides a true random number generator which uses the RDRAND instruction, which is also fine. Besides analyzing these four mentioned security features, I did an in-depth analysis of RustyHermit, and I will now present some notable findings; of course I cannot present every finding here. First, let's talk about uhyve. uhyve is a custom hypervisor written for RustyHermit, and remember, the idea is to have one application per unikernel and to use the hypervisor for isolation. But when the hypervisor provides unchecked access to all files on the host system, then we don't have isolation anymore, right? So, yeah, this is a problem. Reminder: this is not used in production, it is a research project, but this behavior was not documented, and it is something with a relevant security impact, because if I say in my unikernel, hey, please give me the host system's /etc/passwd file, then the host system happily gives it to my unikernel, which is not how it should be. The next thing, since we are discussing memes: rewrite it in Rust and everything will be memory safe, right?
Yeah, well, when you write network drivers, for example, you need Rust unsafe code for memory access and direct pointer dereferencing. This is code from the network driver, and we have an out-of-bounds read vulnerability here. This is the part of the driver which takes the packets coming from the network card firmware and handles them in the operating system. We read a length value here and simply trust the length value we read from the packet, without checking anything. We even cut four bytes off the length, because there's a CRC checksum at the end of the packet which does not belong to the packet's content, so we don't want to read it. But the problem is: we take a slice from this packet in memory and just use the length value that was provided to us, without checking anything, and if something gives us a wrong length value, maybe a too long one, then we copy more bytes than the packet really contains. This is not that dramatic, because the packet is coming from the network card, so the network card firmware would have to be compromised in order to abuse this, so it doesn't have a real impact. But it should be fixed following a defense-in-depth approach, and it is a good showcase of why rewriting it in Rust doesn't make everything 100% secure; we can still have memory corruption vulnerabilities here.

The next thing is heap hardening. I was sitting in front of the building two days ago in the evening, we were discussing unikernels, and someone said: hey, but nobody would write a heap allocator by themselves, we have libraries for this. But when you write your unikernel from scratch in Rust, you might also write your own heap allocator. What happens here is: we have a linked list of so-called holes. So we have a free list which stores the free memory segments, and each of these free holes has a header with a size value and a next value. The size value tells us how large the hole is, and the next value just points to the next hole. When an application says, hey, I want some memory from the heap, the allocator looks: is this size value large enough? If it is not, it follows the next pointer, looks at the next size value, and so on; the allocator goes through this list. But now consider an application which has a heap buffer overflow vulnerability. What an attacker can do now is overflow the heap buffer and overwrite the header of the hole that is placed behind the buffer. An attacker could overwrite the size value with a larger or smaller value, which would then produce crashes or use-after-free vulnerabilities. But what an attacker could also do is overwrite the next pointer and let it point not to the next hole, but anywhere in the heap, to overwrite whatever the attacker wants, or let it point to the kernel code; this might also be possible, and remember, the code segment is writable in RustyHermit, so we might be able to build a nice exploit chain here. What happens now when the application asks for another heap segment? The allocator goes through the list, sees, okay, this size value is too small, follows the next pointer, and interprets whatever it points to as a hole. So here the allocator interprets this area as a hole, and might see that the size value fits, because it points to code or whatever is there, and this value might be relatively large, so the probability is relatively high that the allocator then hands the address of this area back to the application, and the application then writes whatever goes there. If this is user-controlled input, an attacker could just overwrite arbitrary memory addresses, or might even be able to gain code execution this way. A little bit more detail: it is not 100% correct that this is an arbitrary write, because there are some more restrictions, but I didn't discuss them here in detail for time reasons. What an
attacker is now able to do with this missing heap hardening is to change a linear buffer overflow into an arbitrary write vulnerability, or even into code execution, which is of course very bad.

So now let's discuss what I did and what my methods were. First, I created very simple, generic test applications, like just writing a function into the stack segment and trying to execute it; this is a very simple test, just a few lines of code. And I did a lot of manual code review, because all the projects I reviewed are open source, so I could look into the code of every one of those projects. When I found something, I built a simple proof of concept showing that it is broken, but I did not do fancy exploit chains, also because the way from a simple proof of concept showing that something is broken to a fully working, realistic exploit is still a very big step, costs a lot of time, and doesn't have much scientific value, because we already showed with the simple proof of concept that it is broken. And remember, this is a master thesis, so it should have some scientific value and not produce fancy exploits which are just nice to demonstrate. All my code will be published on GitHub in the next days, so you can reproduce my findings.

Now a quick demo, where I'll show you what this looks like. On the right side you can see my terminal; I will just run this test code and then explain it to you. This is all you have to do for RustyHermit in order to get your application compiled into a unikernel. What happens here is: we have a hello function which doesn't do much, it just prints out "Hello World". In our main function we then print out this function's content, which you can see here; these are the compiled opcodes of the println of "Hello World". Then we print a quick info that we are calling the function, and then we actually call the function. This is what you can see here: here's our quick info that we call the function, and here is the "Hello World" output from this function. In the next step we overwrite the function's content with a lot of 0xCC bytes and one 0xC3 byte. 0xCC is just the opcode of an int3 instruction, which we use for padding, and 0xC3 is a simple return instruction. So what we are doing here is overwriting this println of "Hello World" with a simple return. We now print our info and execute the function again; nothing should happen, and this is what we see here: we successfully overwrote the function, it is now just a return, and here is our function call info, and you see nothing. So we were able to overwrite something in RustyHermit's code segment and execute it afterwards, which proves that the code segment is both writable and executable.

Now, switching back to my slides, let's go into the analysis of popular unikernel systems. I analyzed the four aspects explained before, ASLR, W^X, stack canaries and random numbers, not only for RustyHermit, but also for OSv, Nanos, Unikraft and Mini-OS, as these are the most popular unikernel systems and they are all still maintained; some of them have already fixed things I reported, or are working on it. What I find interesting here is that none of the unikernel systems has all the check marks, but Nanos is pretty close; they already fixed a vulnerability I reported to them, which had led to the bad rating here. What I also find interesting is that only one of the unikernel systems implemented ASLR at all. I guess this is because, of those four security measures, ASLR is the biggest effort to implement, and this is the reason why only one system implemented it. If you have any questions about one of these unikernel systems, just ask me after the talk; I cannot present every detail here, but I do want to present some notable things, because one thing I noticed when analyzing unikernels is: random number generation seems to be hard. So if you have nothing to do in the evening, just review random software and take a shot every time someone does not properly seed the random number generator.
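Before we get to the individual systems, here is the missing-heap-hardening attack from earlier as a minimal sketch. This is a toy allocator I wrote for illustration, not RustyHermit's actual code: each free hole carries a header with a size and a next pointer, and a first-fit search trusts both fields unconditionally.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy free-list header, as in the talk: each hole stores its size and
 * a pointer to the next hole. */
struct hole {
    size_t size;
    struct hole *next;
};

/* First-fit search with no hardening: the allocator follows `next`
 * pointers and trusts `size` values unconditionally, so a corrupted
 * header steers it to attacker-chosen memory. */
void *naive_alloc(struct hole **free_list, size_t want) {
    for (struct hole *h = *free_list; h != NULL; h = h->next) {
        if (h->size >= want)
            return (void *)h; /* hands out whatever the list points at */
    }
    return NULL;
}
```

In a real attack, a heap overflow that reaches the header behind the vulnerable buffer plants such a fake header, and the next allocation request then hands attacker-chosen memory back to the application; hardening means validating the free list (e.g. bounds and integrity checks on headers) before trusting it.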
It will be good for the drinking game and maybe not that good for your liver, but yeah, handling random numbers seems to be hard. So let's start with Unikraft. Unikraft is highly configurable; it uses custom C libraries, for example libuksp for stack smashing protection or libukswrand for random number generation, and you can configure everything: whether you want to use stack canaries at all, whether you want a zero-terminated canary or a random canary, and the same for the random number generation itself. You can configure which random number generator you want to use, and whether you want a static seed, a time-based seed, a random seed, and so on. So Unikraft provides secure options here, because if you use a random canary that is generated by a properly seeded random number generator, this is pretty secure. But the default value is not always the most secure one, and we all know users: users often just use the default configuration. So Unikraft definitely provides secure configurations, but from my point of view they should be the default.

Now, I think everyone of you who has done some pentests knows this: you say, hey, I will not spend any more time on finding new issues, because I have no time anymore, I have to document my findings. And then, when copying the last stack dump over into the documentation, you see: oh, this looks weird. And so it was with the Nanos canaries, because I noticed that they don't look that random. What I did then is I built a very simple application which just defines a variable on the stack and prints out what lies directly behind this variable on the stack, and what lies directly behind this variable is our stack canary. So this application just prints the stack canary. I compiled it into a Nanos unikernel, ran it 50 times, and had a Python script which just collects the canary values, sorts them, and does some simple statistics. And as you see here, this is not that
random: only 50 runs, and only four different canary values. What I want to say here is: I disclosed the issue and the proof of concept via email, and I did not even give them a full root cause analysis, because it was the evening before I wanted to hand in my master thesis, so it was more like a half night shift producing this and writing a proof of concept. But they patched it less than two hours after I reported it, which is really quick. They also explained the root cause to me via email, namely that their random numbers were not properly passed to the application via the AT_RANDOM vector, and sent me the link to the GitHub patch, so I could reproduce that it works now. And they also gave me a public shoutout via Twitter and shared a blog post about this. So, dear software vendors, this is how you should react when someone reports a vulnerability to you; it is not very often the case that a company reacts like this, so: you are great.

Now let's talk about OSv. OSv uses the -fno-stack-protector flag, which explicitly disables stack canaries in their unikernel. They have a libc shipped with OSv, but they also implemented their own stack check guard, which has a static canary. Remember: the compiler generates the stack-checking code, but the operating system is responsible for placing a canary value that is random and in the correct place, so that the compiler-generated stack-checking code can use this canary. I'm also not sure why they have the -fno-stack-protector flag but implemented some kind of stack-checking code anyway. In any case, if you explicitly enable stack smashing protection, something is broken and the canary value is always 0. Interestingly, this might make it more secure, because some input functions stop when reading a null byte, and when you have a static canary which has no null byte
then it might be more complicated for an attacker to write a null byte than to write that static canary without a null byte. So being broken might actually make it more secure here.

I also found something interesting in Mini-OS. Mini-OS is part of the Xen project, it was developed for Dom0 disaggregation, and it is a starting point for multiple unikernel projects. So it is not really used as a unikernel in production, but many unikernel projects are based on it, and there are some scientific papers about unikernels which use it as a starting point for implementing something. So it was also an interesting target for analysis. They don't have a real random number generator, but they have some test code containing this. To be fair, they say it should be random enough for their users, but that made me look at it in more detail, because if they have to say it should be random enough for what they do with it, it doesn't seem to be that random. And some users who develop a unikernel might still use this, because they see: hey, there is some code for generating random numbers, I need random numbers, I want to build a unikernel on top of this, so I will use this code. The problem here is: we get the time of day, take the seconds, which is between 0 and 59, take the microseconds, which is between 0 and 999, add them, and multiply the sum with RAND_MIX, which is just a very large constant. But multiplying with this constant only spreads the possible values over a wider range; it does not produce more possible values. So after some iterations we have a lot of possible random numbers, but especially after the first iteration we have very low entropy here, so this might simply be brute-forced by an attacker. What you should also note here is that there is an #ifdef HAVE_LIBC: if a libc is present, then they don't use their custom random implementation, but the one that is provided by the libc, and
they later call it without ever calling the srand function. And when we look into the Linux man page, it says that the srand function sets its argument as the seed for a new sequence of pseudo-random integers to be returned by rand, that this sequence is repeatable by calling srand with the same seed value, and that if no seed value is provided, the rand function is automatically seeded with a value of 1. So if you don't call srand and just call rand without seeding, then you always get the same reproducible sequence of random numbers, and this might of course be used by an attacker to predict these random numbers.

Now I want to discuss some of my findings. The first thing is using Rust for unikernels. As we've seen, memory corruption vulnerabilities can only happen in unsafe code in Rust. There's a paper which proves that Safe Rust is really safe, so that is something we can trust, and it massively reduces the effort for a code audit: I only had to review the unsafe code lines when looking for memory corruption vulnerabilities, and I was able to review every single line of unsafe code in RustyHermit. When a unikernel is written in C, it is much more effort, because you have to review every single line of code, while here you only have to review the unsafe code. But we can still have out-of-bounds read vulnerabilities, for example, as we have seen, so this does not solve every problem, especially as the application might be written in an unsafe language; remember, for RustyHermit we can still write our application in C. So writing the unikernel in Rust is a great approach, but it of course does not solve every problem.

Now let's discuss the security claims made by unikernels which I presented at the beginning of my presentation. The first one is the reduced attack surface due to the minimal operating system. This is of course true, because, as I said, if there's less code, less code can be exploited. But the application might be arbitrarily complex and have a huge attack
surface. And the focus on lightweightness that the unikernel developers have seems to lead to leaving out security features in order to be more lightweight, or because performance has a higher priority than security, which does not improve security. The second claim is the strong isolation. But as we have seen, the hypervisor might also be vulnerable, and independently of this, the isolation is a protection against lateral movement, not against the initial exploit. If one unikernel is exploited and we have a hypervisor which provides proper isolation, then it hinders the attacker from exploiting the next unikernel and the next one and so on, or from exploiting the host system, but it is no protection against the first unikernel getting exploited. So it is a very good additional protection, separating different applications into different unikernels isolated by the hypervisor, but it is not a replacement for security mitigations like ASLR and so on. The next security-relevant aspect of unikernels, not really a security claim, is the single address space. Because of the single address space, no escalation of privileges is necessary anymore: if we have pwned a unikernel, we pwned it, and we don't need to escalate to admin or system or whatever. The attacker instantly has full control over everything in that unikernel, over the network stack, the hypercall interface, and can do further things with all these capabilities without needing further exploits. And this is especially why these general security features are so important: so that a unikernel does not get exploited at all. So what is my conclusion from all of this? First, writing a secure unikernel, in the sense that it has no vulnerabilities, is great, but it is not enough, because the kernel developers have no control over how insecure the applications are that users write for this unikernel. So even if I have a highly secure unikernel, someone might write an application which is easily exploitable, but in a unikernel the application and the
kernel are compiled into one and the same thing. So unikernel developers always have to keep the application in mind and provide security features for the application, to hinder an attacker from exploiting a vulnerable application. But as I have shown, for various unikernels many security features are not implemented at all or are fundamentally flawed.

So this is it from the presentation side. Thank you, everyone, for being here. Special thanks to my X41 supervisor Eric for making this possible, and thanks to my university supervisors Stefan and Jonathan. It is not always the case that people are so helpful when you analyze their code for security vulnerabilities. We have some blog posts about this, so if you want more details or want to see the proofs of concept, visit our blog posts; I will also publish a blog post next week which wraps everything up. Or you can directly contact me at one of those two email addresses. We have a lot of time left now, so if there are any questions, I'd be happy to answer them.

Thank you very much, Harry, that was actually really interesting, even though I missed half the talk because I was at the door and had to send at least 20 people away for whom there was no room anymore. So we still have 15 minutes for questions. I have two hands at the same time, just decide who's first.

Okay, I first have a small remark and then a question. The remark is: I think stack canaries usually include the zero byte at the beginning anyway. Okay, not always. Then my question: you said that RustyHermit does not set its executable segments to be unwritable. Was this an oversight or part of its design, and if so, why?
Just nobody did it, because it is very simple for the stack and the heap: there you just set the flags in the page table. But you can't directly do this for the code segment, because first you have to map the code there, and only afterwards can you change the flags. So it is a little bit more effort, and just nobody did this. But the developers directly said, hey, look at this, and there was already an open issue. So they already know that they have to do it; just nobody did it yet. — Okay, thank you.

Thank you also for the strong hint that I/O access is still a bit of an issue with the unsafety. My question would be, because I am very much interested in resource access controls: how do you feel about approaches like the athesios project, which wants to push that responsibility to the compiler? Have you looked into that? — Which project? — athesios. — I've heard the name, but we can talk about it afterwards; sounds interesting.

You showed some bad random number generation. What are your criteria for good random number generation, other than seeding? — The best thing is to use a hardware random number generator, of course, or the RDRAND instruction; some CPUs provide access to one, and you can pass those values along, for example. It is always best to use hardware random numbers. But if you need a very fast random number generator, you can also use a pseudo-random number generator; then you have to seed it properly. The best thing is to seed it with a true random number from a hardware random number generator, and then, if you use a proper algorithm for pseudo-random number generation, this is also fine; for example, ChaCha or Salsa20 are algorithms used for this. — Okay, thanks.

You told us that there were many unikernels which did not have any, or not all, of the memory defense mechanisms implemented, like ASLR and such. Can you tell what exactly the reason for that is? Is it because of performance reasons?
Is it because of laziness, or is it just on the to-do list, or whatever? Why exactly didn't they implement it? — I mean, many of those, all of those projects, are open-source projects, often or sometimes maintained by a single developer, and then he just has too many open tasks. For example, for Mini-OS I wrote an email to the main developer and said: hey, you have none of those mechanisms properly implemented other than the W^X policy. He said, yeah, patches are always welcome; a classical open-source project answer. Other things have more priority. But, for example, one of the Unikraft guys tweeted at me, I think yesterday evening: yeah, we are working on it, and now we'll fix this. So I think often the people just did not do it yet; it does not have a high priority. But my problem with this is that many unikernels claim to be more secure, and in a lot of unikernel papers you can read: yeah, they are secure. But often they are not. And everyone says: yeah, okay, there is just a very, very small amount of code, so it is not exploitable. But they seem to overlook that the application might be exploitable. So I think it motivates people to see the problem, to implement those security features, and to put them higher on their priority list. It is, I think, mainly a priority problem; it is not on purpose. — That was the question, thanks.

So, thank you for the talk. I was wondering about the last point you made, that everything is in the same address space and that we don't need any privilege escalation. Is it the best idea to have everything, or the privilege separation, in the unikernel? Or couldn't we push that to the hypervisor, so that the complete VM just has the privileges that are really necessary for the application? Because then we don't really have this problem, I guess. — Yeah, that's right, but as I said, isolation by the hypervisor is only an additional thing, and if you exploit the unikernel, you also have access to the network stack, if it has a network stack, so
you can use this, for example, to craft raw network packets and try to exploit something over the network, and so on. And if you had more separation inside the unikernel, then you would not directly have access to everything you could then use to exploit further systems. But to answer your question: it does not make sense to add this separation inside the unikernel, because it is the very concept of a unikernel not to have this separation. But then you have to implement all the other security features in order to stop exploitation. — Any more questions? We still have 10 minutes.

All right, so again, thank you for the talk. My question would be: assuming these other unikernels you mentioned are actually used in production environments, do you know of any cases where these vulnerabilities were actually exploited in production? — As far as I know, most of the unikernels are not used in production. The only one I know that is used in production is Nanos; you can buy VMs running on this unikernel. And I don't know of any successful exploits there.

Since we still have a few minutes: could you maybe sketch a bit how a typical deployment would work with one of these unikernels? I can imagine you run a bare-metal hypervisor, and then how does the hypervisor actually get the unikernel to run, and how does it execute it, and so on? That was a bit missing for me to get a full understanding of that model. — Yeah, we can maybe go back here. It runs directly on the hypervisor, as a normal virtual operating system would. For example, when I switch to the demo, you can see that this is just run on top of QEMU; it's simply running on QEMU here with this configuration. And when you switch to the code here: for RustyHermit you just have to import this crate, for example, and then it gets compiled into an image, like a virtual machine image you would run on QEMU or whatever your hypervisor is. And as
a user, you don't have to do much there; you just compile it with the tools provided by the unikernels, and then you get a runnable image which you can then run on top of QEMU. — It's actually providing the hypervisor? — What do you mean by "providing the hypervisor"? This one is running on my laptop with QEMU, but many unikernels can, for example, run directly on Xen, so you have Dom0 there, which controls the running unikernels. But there are very different concepts. For example, RustyHermit can also run bare metal, so it's different for a lot of unikernels. And there's also HermitCore, which came before RustyHermit and which could run side by side with Linux: two cores are given to the unikernel, for example, and then the unikernel runs in parallel to your Linux. So there's a very wide range of concepts for how to run these unikernels on top of the different platforms. — That's also a bit how I understood it, but I wasn't really sure; it was conceptually a bit hard to compare everything running in one system, or eventually in one physical machine. Thank you.

All right, if there are no more questions, then I say once again thank you, and a final round of applause for this cool talk.