So thanks for joining this microkernel devroom today. I'm Norman Feske from Genode Labs. I want to go into a pretty technical topic today about work that has been conducted over the course of the last year, which was bringing the Genode OS framework to a state where we achieve binary compatibility between different kernels. I want to explain to you how this works, but also the motivation behind it.

For the talk, I first want to motivate why we are actually pursuing this kind of multi-kernel framework. Why don't we just stick with one kernel and optimize our operating system for that particular kernel? What is the benefit that we think lies in this kernel diversity? The next part will be about the key realizations from our side that allow us to be independent from the kernel but still leverage the kernels' different features. Those challenges are related to the areas over here, and I will go into each of them. In the third part, I want to give you more insight into the more recent work of going from a uniform application programming interface to a uniform application binary interface. And given this new perspective, we can look into the future of what becomes possible with this.

So first, why do we really want to have multiple kernels? To be honest, this was not a plan that we laid out. In the beginning, it was more like a necessity. Before the project was started, in 2003, security suddenly came into the focus of the L4 microkernel community. Before that, the L4 community was more focused on experimentation: how to get L4 really fast, how to host Linux on top of the L4 kernel, and also on real time. Security was not the main focus. But in 2003, this changed, and in particular it changed because of a visit by one of the capability-security experts in the field. Jonathan Shapiro visited the L4 community, gave a talk about capabilities and about the benefit of local names, opened our eyes about storage channels, and so on. We immediately grasped the benefits of the capability-based security concept. And this inspired a bunch of new kernels that have been developed since then, a new generation of kernels, actually.

Genode started as the designated userland for one of these kernels, the NOVA kernel. NOVA was the ideal combination of the L4 microkernel philosophy with virtualization and capability-based security in one kernel, and Genode was meant to be the userland for this kernel. But when we started, there was no NOVA. NOVA did not exist; there were just ideas in the head of Udo Steinberg, but no code. How to build a userland for a kernel that doesn't exist, that's really a hard question. This situation forced Christian and me, basically, to plan in terms of multiple interim solutions. We wanted to eventually become the userland of this future kernel, so how could we remove the roadblocks along the way? The approach we took was to target two different kernels at once. Since we did not know what NOVA would look like, we assumed the worst, basically, and tried to come up with a userland that works with two opposite ends of kernel philosophies, or kernel designs.
On one end, we have Linux, a monolithic kernel, where the kernel starts executables from a file system, where memory management happens in the kernel, where you use sockets to communicate, and so on. On the other side of the spectrum, we used an existing L4 kernel, the L4/Fiasco kernel, which is completely different. It uses synchronous IPC as its only mechanism and has a page-fault protocol in the kernel. So pretty much the opposite end from Linux. And we figured that if we managed to somehow accommodate both of those kernels with our userland, we should also manage to later fit in NOVA.

And it was actually quite surprising how well this worked. After just six months, we came up with a prototype that worked on both Linux and Fiasco, and we had graphical applications that we could play with. So this approach actually worked. And we found that it was beneficial in several other respects, beyond just removing this roadblock. It quickened our development cycle a lot, because when developing on Linux, we can just start a Genode subsystem as a Linux program or a bunch of Linux programs, use GDB for debugging, and use all the convenient Linux utilities. If we go to the hardware, for example when developing a device driver, we can use the cozy and nice kernel debugger of the Fiasco kernel. So we can use multiple debugging tools that could otherwise not be used in combination. It also stressed the robustness of our own code, because the kernels differ in so many respects, for example in the scheduling: which thread is executed at which time. This kind of different behavior uncovered bugs in our code that would have remained undetected otherwise. And when it comes to performance or strange behavior, sometimes you want to investigate performance problems. It's always nice if you can execute the exact same component code on a different kernel, to see whether the performance difference comes from the way the kernel manages resources or from the way IPC communication happens, et cetera. So it gives us a way to cross-correlate the behavior of the system between different kernels.

And it let us think about user-level requirements. At the beginning of our work, we did not have a big idea of what the application-level requirements really are. There are so many applications, you can't know, really. But by targeting different kernels and seeing how different kernels solve various problems, we somehow got a grasp on what the actual problems are, not just how the solutions work. And obviously, the many different microkernels exist not just for fun, but for very good reasons, because there are very different requirements: different hardware platforms, hardware features needed in some areas, like virtualization support, and in other areas other things like TrustZone. The kernels have different kernel interfaces, different scheduling policies, different security models. And of course, the implementation quality varies a lot, as does the community behind the implementation. So there are many, many different reasons why multiple kernels exist. And let's face it, if there were one best-fitting microkernel, it wouldn't be micro anymore; of course, it would contain everything.

Okay, so the question that always comes up with this feature of supporting multiple kernels is: how do you manage to maintain all this stuff?
Right now we are maintaining support for eight different kernels. Some of these kernels are more like historic artifacts, but some of them are pretty recent. And the answer is quite nicely expressed by this slide. Our team is responsible for this code base, a quarter million lines of code that we have to nurture and maintain. If you look at the kernel-specific code, for example the code that we specifically developed for the Linux kernel, it's just a tiny fraction of this, like 1% or 2%, or even less than 1%. In the case of the old Fiasco kernel, it's just 1,500 lines of code. Everything else is completely independent from the kernel. So from the maintenance point of view, that's reasonable. We can maintain the ten-year-old Fiasco kernel for Genode; that's not a big deal, because there are only those 1,500 lines of code that need to be maintained. In this table, the last number is interesting, the base-hw kernel. This is not just the kernel-specific part of the framework; it also contains the kernel code itself. For this reason, the number is higher. And with the kernel code, I also included all the drivers for the different boards, all the board-support packages, tests, and so on. So the number is a bit higher because there is much more substance in there.

Okay, from this realization, and from this success story from an engineering perspective, we came to a new leitmotif from a project perspective. We figured Genode has the potential to become for microkernels something like what POSIX is for monolithic systems. With POSIX, you have one unified interface that is targeted by all kinds of applications, and those applications can then move from one POSIX system to another. With Genode, we can actually build the same kind of notion: one common ground that can be targeted by applications or components, and those applications can then suddenly run on all the different microkernels. One example: on Genode, we now have around 400 different components. Once we enabled the support for seL4 last year, all those components, including device drivers, wireless support, and all these nice things, could suddenly be executed on seL4, with just 3,300 lines of code written. This is, I think, quite nice. So this is the vision that emerged, and this led to the decision to cultivate this feature.

First, I want to show a bit what the key aspects of reaching this nice interoperability are. One thing is that we started by throwing out some assumptions that we found inconvenient. I don't say they are invalid, but we found them inconvenient, and they somehow clouded our view. First, in the L4 community, which is where we came from, everyone was obsessed with performance and scalability. Everyone in this community measured IPC round-trip times, like the ping-pong benchmark. This was the killer application of L4, and people there were writing paper after paper about optimizing these kinds of things. So if you came up and said, ah, I'm adding a virtual function call in the IPC fast path of the kernel, they would go mad at you. We thought, we don't want to justify everything in front of these kinds of concerns. We just say: okay, scalability and performance are fine, but we are addressing them later. POSIX is the same thing.
People expect POSIX because they have so many applications that are based on POSIX, but POSIX has a lot of things that are really ugly. For example, re-entrant mutexes, things like that, where even the creators of these abstractions confess in hindsight that it was a big mistake. And there are other things like thread-local storage, or the POSIX thread API, which is full of legacies and full of compromises, and we didn't want to deal with all these problems. So let's discard them for now. Thread-local storage, we just had this topic in Martin's talk. It's a pain, and it really sucks. Thread-local storage is something that goes behind the back of the language. It accesses some GS register, picks out a value, and once you execute the same function again, it behaves differently, or a different thread behaves differently. It's just bad. It's really the opposite of the Rust philosophy that we heard about in the previous talk. It's really a bad side effect that needs to die, from my perspective. So we said, okay, we will live without all these things at first and consider them later, once we have reached a certain useful state.

The next thing was that we drafted a design of the architecture. This is the third time this term appears on the slide now. So, yeah, we really started with a blank sheet of paper and thought: what would the operating system of our dreams look like? We started not with file descriptors and all the notions that we know from our history, but just from: what do I really want? We just want to have components that we can stick together, where one component can control other components, where one component can control the resources it has and assign them to other components. Basically, our idea was this kind of hierarchical structure where, for each component in the system, we can see a clear picture of what the trusted computing base of this component is. Like for this component here: it needs to trust the whole chain of parents, but it doesn't need to trust any unrelated things, like sandboxing on each level. This was our idea. I don't want to go more into the detail here, but it was important to focus on this architectural vision.

Okay, so when looking from this user perspective, or architectural perspective, we can simplify a lot of things at the API level. For example, if we look at the construction of processes or tasks or protection domains, or however we call them, this is always a really tight interplay with the kernel. There are a lot of technicalities, like all the flags that are passed to the Linux kernel when you do an execve or a clone system call, or all the specific things you have to take care of when creating a new L4 task, setting up a pager, and defining whatever configuration arguments are expected. You see, it's really complicated. But from an application-level point of view, all these technicalities are completely uninteresting. What the user wants to achieve with a new component is: I have an ELF image, an ELF binary, this is the thing that contains the program I want to run. I want to run this program and I want to control it. I want to say what the program is allowed to do and what not. So I want to impose my policy on the program, and that's exactly what our API covers; no further detail is exposed at the API.
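As a minimal sketch of what such a kernel-agnostic interface could look like (the names Policy and Child are simplified stand-ins for illustration, not Genode's literal API):

```cpp
// Sketch of a kernel-agnostic interface for starting a component.
// All kernel technicalities (clone flags on Linux, L4 task creation,
// pager setup) hide behind the implementation.

struct Dataspace_capability { };   // refers to the ELF binary's memory

struct Session_capability { };     // handle to a granted service session

struct Policy
{
	// The parent intercepts every service request of the child. This is
	// where "what is the program allowed to do" is decided.
	virtual Session_capability resolve(char const *service_name) = 0;

	virtual ~Policy() { }
};

class Child
{
	public:
		// Start a new component from an ELF image under the given
		// policy. On Linux, the implementation can use fork/execve;
		// on an L4 kernel, a sequence of system calls plus the
		// framework's own ELF loader.
		Child(Dataspace_capability elf_binary, Policy &policy);
};
```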
And so basically we just satisfied precisely those requirements and nothing more. For this reason, we have one API that, when implemented on Linux, uses fork and execve, but when implemented on an L4 kernel, uses a sequence of system calls: it sets up the memory management, fills a region map for the new component, and so on, and uses an ELF loader. So on Linux, the kernel loads the ELF image, and on L4, we have an ELF loader in our base framework, but from a user perspective, everything looks the same. The application doesn't care.

The next key aspect: of course, microkernel-based systems are designed around IPC, and if you look at IPC on the kernels, you see things like this. The terminology is just frightening. You have to deal with message transfer descriptors, capability receive descriptors, UTCBs (user-level thread control blocks), and hotspots inside the receive window that you set up for your flexpages, and so on. It's daunting, really. And the different kernels all have different terminologies. These terms were not invented for no reason; they have certain semantics, of course. And the argument was always: yeah, we have all these complicated mechanisms, but we can always hide them with IDL compilers. So people started creating IDL compilers. In our research group back then in Dresden, we had an IDL compiler consisting of 60,000 lines of C++ code that tried to make the whole thing more convenient, but in fact, it failed completely at that. All these technical details showed through at every turn, because people wanted to use the features of the kernel, like communicating a flexpage from one address space to another, and so the IDL compiler tried to accommodate those features.

Looking from the application's perspective, the opposite end: what do we really need, or what would we really like to have? We want simplicity. We want a simple nomenclature, and for this reason, Genode introduced these terms. Well, it did not introduce them; it just defined them in the context of Genode: a short list of terms, each of which is very simple. And it restricted synchronous IPC to the very, very simple case, which is basically a function call. It's completely synchronous; it goes always to a server and back to the client. So you have this relationship between client and server, and these roles have clear connotations. And the best thing is that this whole notion of a function call is supported by C++ templates, which just generate the RPC stub code for us (a toy sketch follows below). So we can always feel like we are working with a normal object-oriented language, but in the background, the RPCs are realized using system calls to other protection domains. And this is basically what we found sufficient for all the use cases that we had.
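As a toy illustration of the idea (this is not Genode's actual RPC framework, just a sketch under the assumption of a kernel-provided ipc_call primitive):

```cpp
// Toy sketch of how C++ templates can generate RPC stub code, so that a
// remote call reads like a plain function call.

#include <cstring>

struct Capability { unsigned long value = 0; };        // opaque handle (sketch)

struct Msgbuf { char data[128]; unsigned long used = 0; };

// Placeholder for the kernel-specific IPC system call: sends the request
// to the server behind 'cap' and blocks until the reply arrives.
void ipc_call(Capability cap, Msgbuf &request, Msgbuf &reply);

template <typename RET, typename ARG>
RET rpc(Capability server, unsigned opcode, ARG const &arg)
{
	Msgbuf request, reply;

	// marshal opcode and argument into the kernel-independent buffer
	std::memcpy(request.data, &opcode, sizeof(opcode));
	std::memcpy(request.data + sizeof(opcode), &arg, sizeof(arg));
	request.used = sizeof(opcode) + sizeof(arg);

	ipc_call(server, request, reply);   // synchronous: to the server and back

	RET result { };                     // unmarshal the return value
	std::memcpy(&result, reply.data, sizeof(result));
	return result;
}

// A client-side proxy object then makes the RPC feel object-oriented:
struct Log_session_client
{
	Capability _server;

	// looks like a normal method call, performs IPC behind the scenes
	long write(long n) { return rpc<long, long>(_server, 1, n); }
};
```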
Then, of course, in the previous talk we heard about the benefits of capabilities. When you design a new system, you are always faced with questions like how to structure your file system, or who knows the namespaces of network cards, hardware, file-system types, and so on. But with capabilities, all those problems suddenly disappear, because you don't have to define global namespaces; global namespaces do not exist anymore. Really cool. This took a big burden from us. And on the language level, from an application developer's point of view, we just use C++ smart pointers to manage the lifetime of those capabilities. So it feels really natural to deal with capabilities, almost as if you were dealing with C++ references or smart pointers, shared pointers.

Then we also introduced the notion of asynchronous notifications, but without payload. That's important, because payload really messes things up: you have to buffer things when you do asynchronous notifications. So this is basically like interrupts and nothing more. It's a very rigid scheme, but the receiver of a signal can tell different sources or types of events apart. That's important. So in summary, we have just a convenient C++ API with no micro-optimizations, and thereby we removed a lot of the complexity that the different kernels normally introduce.

The next topic is virtual memory management. I already mentioned that the way new processes appear in the system, or how new components are bootstrapped, is tightly related to memory management. On L4, a very central piece is user-level page-fault handling. So Martin, you are now going in this direction, but in my personal opinion, this is really overly complicated. User-level page-fault handling, as done on L4, causes a lot of problems. For example, if you map a memory page from here to there and later want to revoke it, the kernel needs to remember this relationship. But if the kernel needs to remember things, the kernel needs memory for this. So the kernel needs a memory allocator. And the memory allocator needs a backing store; where does the memory come from? So you have lots of problems. We defined a higher-level abstraction for this kind of problem. This is basically our take on the dataspace term. Dataspaces were invented far earlier, like in the SawMill system, but we have a much, much simpler notion of them. A dataspace is just a piece of physical memory, contiguous physical memory, that can be referred to by a capability. It's like on Linux, where you open a file, have a file descriptor for it, and use it for mmap; this is basically what a dataspace can be used for. You have a dataspace and you can attach it to your address space. You can also share it with your peers by delegating the capability, and thereby you can establish shared memory. But you don't need a file system, for example, and you have a very simple notion of who owns the dataspace. The owner can also destroy it. And the kernel doesn't need to remember how mappings are established in the system. A really simple primitive.
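Here is a minimal usage sketch, loosely modeled on Genode's API but with simplified, partly hypothetical names:

```cpp
// Minimal dataspace usage sketch. A dataspace is physical memory referred
// to by a capability; attaching it to an address space yields a local
// mapping, and delegating the capability to another component establishes
// shared memory.

struct Dataspace_capability { };   // opaque, kernel-independent

struct Ram_allocator
{
	// allocate a dataspace of the given number of bytes
	virtual Dataspace_capability alloc(unsigned long size) = 0;

	// the owner can destroy it; all mappings disappear with it
	virtual void free(Dataspace_capability) = 0;

	virtual ~Ram_allocator() { }
};

struct Region_map
{
	// make the dataspace visible in the local address space
	virtual void *attach(Dataspace_capability) = 0;
	virtual void  detach(void *at) = 0;

	virtual ~Region_map() { }
};

void example(Ram_allocator &ram, Region_map &rm)
{
	Dataspace_capability ds = ram.alloc(4096);
	char *buf = static_cast<char *>(rm.attach(ds));
	buf[0] = 42;   // use like ordinary memory
	// delegating 'ds' to another component would create shared memory
	rm.detach(buf);
	ram.free(ds);
}
```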
The last crucial point in making this inter-kernel work seamlessly possible is, of course, tooling. If you discover a kernel, like if you download one of the microkernels or the Linux kernel or whatever kernel, you have a lot of topics to look at. First, the kernels are distributed in different forms: Git, SVN, tar archives, and so on. The tooling looks completely different. OKL4 uses SCons, some kernels use the GNU build system, other kernels have handcrafted things. These are all things you need to understand at some point. The kernel bindings, of course, are interesting and differ from kernel to kernel. Some kernels come with certain assumptions about the userland that runs on top; there are even kernels that ship a complete libc as part of the system-call bindings. So you have to find your way around all this. Then the whole way a system that uses the kernel is integrated, configured, and booted differs from kernel to kernel. So if you step into the land of a new kernel, you have to figure out all these things. You read the documentation, and this knowledge cannot be transferred to the next kernel. You just have to face it. And this leads to certain costs with respect to exploration and self-education.

Okay, so it's important not to bother the user with these kinds of things, and for this, we spent a lot of energy creating tooling. First, we built a ports mechanism that allows us to integrate third-party code cleanly into the system. Third-party code is not hosted in our code tree; it's cleanly separated. It's versioned and content-addressed: it uses hashes, like the Nix package manager, to distinguish different versions of a third-party library, for example. And a kernel is just a piece of third-party code that we import using this mechanism. So there is one uniform mechanism, and it hides all the different details of how sources are distributed. You just have the notion of ports, and a port can be, for example, nova, which is the NOVA kernel, or it can be libc, which is the C library fetched from FreeBSD's SVN server, and so on. For the user, this doesn't matter.

Then we came up with a nice tool that automates our workflows: the whole problem of building components, configuring a system scenario, integrating it into a disk image or an ISO image, booting the scenario, and evaluating the results. This is all formalized using this tool. And the scripts written for this tool do not contain inherent kernel dependencies, so you can take such a script and just execute it for a different kernel, and normally it works. Another key aspect was to customize GCC according to our requirements, and thereby remove the diversity of tool chains, which was a big headache before. We had to deal with six different tool chains, switching back and forth, each with different configuration arguments and different GCC versions. It was a real pain. Just by unifying on one tool chain, we got a big benefit.

So the result is this kind of picture. You have this big toolbox of components that are all running in user mode, separated in different address spaces, communicating using the IPC mechanisms that Genode provides. And in the center, you have the various kernels, and you can just pick the kernel that you like and run the components on top of this particular kernel. At least on the API level. This was one year ago. Basically, one year ago, when I presented Genode, I could show how the same scenario could be executed on one kernel, then I could create a new build directory for another kernel, compile the same thing again, and run it on the other kernel. But now we go a step further and say: okay, we don't even want to recompile the components. We just want to run them. So to achieve binary compatibility for components, we have to look at which kernel specifics taint the components, or the application code.
The first is, of course, that application code somehow includes kernel headers, not always directly, but indirectly. For example, if you include a libc header, this libc header includes something, which includes something else, which includes L4 kernel types. Then you are basically tainting your entire application with kernel types, or with certain definitions that differ from one kernel to the other. The same holds for utility functions and so on. But these kernel headers are usually not so big for microkernels. The second way components are intertwined with a kernel is, of course, system calls. Components tend to issue system calls for doing IPC, for synchronization, and for creating kernel objects. In most microkernel-based systems, what you see in the component's code is typically the direct issuing of system calls. The bindings are in some header files, those header files are included, and so the component inherits the dependency on this particular kernel.

So for decoupling the userland from the kernel, we had to remove those kinds of dependencies. This was a lot of engineering work, I must confess. For example, at the beginning, we defined a common capability type with a defined API but with different implementations (sketched below). On the old Fiasco kernel, a capability was basically a thread ID, where a message should go, plus a globally unique value. The thread ID was, of course, a kernel type, so it pulled in a kernel header. On NOVA, no global ID is needed; we just need a kernel capability selector. So the representation of a capability differed from one kernel to the other. And on Linux, for example, a capability was basically a socket descriptor and a number. We had to somehow unify those capability representations at the API level.

IPC message buffer layouts are the same thing. Each kernel has a different notion of what an IPC message looks like: certain constraints and certain header fields that you need to fill out when you want to send a message. We just defined our own IPC message buffer layout, which is a very, very simple buffer that bytes can be written to, with a size. Thread manipulation and synchronization, too: if you, for example, take a lock and need to block, you issue a system call, and we just reshaped our API in a way that no kernel-specific things needed to shine through.
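To make the capability unification concrete, here is a minimal sketch with illustrative type names (not the real kernel headers); the point is that components only ever see the kernel-independent Capability type, while the kernel-specific representation stays inside the framework implementation:

```cpp
// Sketch of the capability unification. Each kernel has its own natural
// representation (names here are illustrative):

struct Fiasco_capability           // old Fiasco: IPC destination + global ID
{
	unsigned long dst_thread_id;     // stand-in for the kernel's l4_threadid_t
	unsigned long unique_global_id;
};

struct Nova_capability             // NOVA: just a capability selector
{
	unsigned long selector;
};

struct Linux_capability            // Linux: socket descriptor + local name
{
	int           socket_fd;
	unsigned long local_name;
};

// Per build, the framework picks ONE of the above as Native_capability.
// Application code only ever sees the opaque, kernel-independent type:
struct Native_capability;          // defined in the kernel-specific part

class Capability
{
	private:
		Native_capability *_native = nullptr;   // opaque to applications
	public:
		bool valid() const { return _native != nullptr; }
};
```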
Okay, the result was that we managed to get rid of all these kernel-specific header files and moved them all into the implementation of the framework. The next step, of course, was to address the second part: tainted compiled code. For this, we decided we need different ELF objects: one that contains the generic code, untainted by kernel specifics, and another ELF object that contains the kernel-specific code. This is where the dynamic linker enters the picture. The dynamic linker is a special component; it is both a shared library and also a program that can be started normally, like on Linux. And its roles at build time and at runtime are different.

At compile time, when you build a component, the linker is needed because the real linker, the binutils linker, needs to know which symbols are provided by this dynamic linker. The symbols provided by the Genode API are implemented somewhere in the dynamic linker, and the real linker needs to know this so that it can add the reference information into the binary. So if the binary issues a Genode API call, the resulting ELF executable knows: okay, this has to go to the dynamic linker, with this particular symbol name. But at this point, when compiling a dynamically linked binary, you don't need the code that's in the shared library. The linker only needs to know the contract between the shared library and the shared-library user. At runtime, of course, we need the code. At runtime, the linker is started instead of the actual binary. So in a new component, the first thing that starts living is the linker. It bootstraps the actual executable, resolves all the undefined symbols present in the executable with itself, then loads the other shared libraries, and finally starts the real application code.

So this is basically the progression we went through. Once we had the Genode API freestanding, the logical consequence was that the linker becomes generic. The API of the linker, the binary interface, the contract that the linker offers, becomes automatically generic. And this is exactly what happened. So let me go into a bit more detail about what an ABI is in Genode terms. With ABI, I'm referring to the link-time contract between ELF objects. I'm not talking about the architecture, calling conventions and so on, which are also referred to as ABI; that is the machine-specific part and not our concern here. Our concern is the link-time contract between components. An ABI in this respect is defined as a bunch of symbol names, a list of symbol names like what you get when you run nm on an object, plus the symbol types and some metadata. For example, for data objects, you need the size of the object so that the dynamic linker can perform copy relocations.
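As an aside, a hedged illustration of why the size of a data object belongs in the ABI (the names below are invented for the example): when a non-PIC executable references a global object that lives in a shared library, the static linker emits a copy relocation, and the dynamic linker needs the object's size to reserve and fill the copy at startup.

```cpp
// Sketch: why a data object's size is part of the ABI (invented names).

// --- defined in the shared library -------------------------------------
struct Config { unsigned verbosity; char label[32]; };
Config global_config { 1, "default" };

// --- referenced from a non-PIC executable ------------------------------
extern Config global_config;   // resolved through the library's ABI

unsigned verbosity()
{
	// A direct reference from non-PIC code makes the static linker emit a
	// copy relocation: at startup, the dynamic linker reserves
	// sizeof(global_config) bytes in the executable and copies the object
	// there. Without the size recorded in the ABI, it could not do this.
	return global_config.verbosity;
}
```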
These points are not so interesting; this was just the methodology of how this information was generated. We took concrete instances of the dynamic linker, aggregated the symbols that were in there, and looked at the symbols that we did not like, like all the weak C++ symbols. There are so many of them, and they are so big, so we removed all of those. Then we stripped out global symbols that are only there because framework-internal compilation units need to interact with each other. What comes out of this is just a very small file, a list of symbols with their symbol information. And we wanted to do the same thing for x86_64 and also for ARM and so on. It would be really, really nice to have one ABI, so just one 32-kilobyte file that covers all the different architectures. But really, when I started this, I did not know. There is this mysterious mangling of C++ symbols. It looks really ugly, and I didn't know how it works and how stable it is. I just tried it out, and for ARM and x86, wow, almost no differences. I was really amazed. So this slide was not so unrealistic.

But then I tried the same for x86_64, and it was a letdown, really. And why? This is the reason: size_t. This is really something historical. size_t is typically defined as __SIZE_TYPE__, which is a compiler-built-in type, and every libc, and also our Genode API, used this type. Now, for historic reasons, this __SIZE_TYPE__ was defined for 32-bit x86 as unsigned int. On a 32-bit platform, int is 32 bits, so that feels fine; we can just use unsigned int. But when moving to 64 bit, int is not 64 bits. It is no longer the machine word size; int remains 32 bits. So size_t needed a different base type than int, namely long. Otherwise, size_t would not work anymore. The historic mistake was basically this one: if people had used unsigned long all the way, we would be fine. But back then there was this decision, long or int doesn't matter, we take int. And this is biting us now.

The problem comes into the picture not for C code, but for C++, because mangled C++ symbols for methods and functions contain the entire signature: not just the method or function name, like in C, but also the argument types. If you look, for example, at this function here, Connection::upgrade with a size_t argument, you see that at the linker level, the mangled symbol differs in just this small detail. But that's different enough to be different. And that means all APIs that use size_t in some form as an argument taint the symbol information of the code that comes out of this.

Okay, we are quite pragmatic at this point. We just decided: __SIZE_TYPE__ is not so nice for us, so let's not use it. Instead, we define our own version of size_t as unsigned long, completely independent of what the compiler tells us (see the sketch below). And yeah, of course, once the libc enters the picture, for higher-level components, we again get this problem, because a libc has its own type definition for size_t, and this again uses __SIZE_TYPE__. So this is not a real long-term fix. But we are fine for C code: all the GNU tools, for example GCC, Vim and so on, are just C code. They don't use mangled C++ symbols, just normal symbol names, function names, so we are fine with those. The problem comes back when we use C++ code that depends on a libc or these libc types, for example our code that uses the standard C++ library, like Qt5. So we need to find a long-term solution, and there are two approaches. Of course, we could make the ABIs architecture-dependent. This would be really bad, because if you look at Qt, for example, and just run nm on a Qt library, like QtWidgets, you end up with a size of around 20 megabytes, and having multiple versions of these in your Git repository, now that's not the right way. So the other solution looks much more appealing, I think: just explain to the compiler that we really want unsigned long, regardless of 32-bit or 64-bit.
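To illustrate with a hypothetical stand-in (Connection::upgrade here is just an example name): under the Itanium C++ ABI used by GCC, unsigned int mangles as 'j' and unsigned long as 'm', so the same source-level call binds to different symbols depending on the definition of size_t.

```cpp
// Hypothetical stand-in for the problem (Connection::upgrade is made up).
// Under the Itanium C++ ABI, the argument type is encoded in the symbol:
//
//   void Connection::upgrade(unsigned int)   ->  _ZN10Connection7upgradeEj
//   void Connection::upgrade(unsigned long)  ->  _ZN10Connection7upgradeEm
//
// If size_t is unsigned int on 32 bit but unsigned long on 64 bit, the same
// source-level call therefore binds to different symbols per architecture.

// Genode-style fix (sketch): define size_t ourselves, independently of the
// compiler's __SIZE_TYPE__. Typedefs are transparent to mangling, so the
// symbol is then identical on 32 and 64 bit.
namespace Sketch { typedef unsigned long size_t; }

struct Connection
{
	void upgrade(Sketch::size_t quota);   // always mangles with 'm'
};
```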
Okay, how much time do I have left, actually? Okay. So with these steps, I was actually able to compile a program on Linux, for example a Qt program, then take the same program, move it to a different build directory, like a build directory for NOVA, and execute the same program on NOVA. This worked, and it was like a magical moment for me, because, yeah, the idea worked out in the end.

And this brought up the idea of generalizing the mechanism a bit more: introducing the formal term ABI into the regular Genode build system, and for this we extended the build system a bit. First, an ABI definition comes in the form of an ABI file that contains the symbol information, and there is a rule that translates this into an assembly file. This rule is quite simple, actually; I can maybe show it to you. For example, if I look into the build system, this is the tool, that's all. This is the build-system support code that takes the ABI description and generates the assembly file. You see that this assembly file does not contain any assembly instructions, just directives telling the assembler which symbols to generate, and the symbols have actually nothing behind them. This assembly file is then compiled into a shared object, named abi.so, which is a shared library that contains no code, nothing of substance, only the contract that the shared library provides. The actual targets are linked against this ABI stub instead of the real library. So if you have, for example, FreeType and a program that uses FreeType, you no longer link the program against the FreeType library, but against the FreeType ABI, just the contract. And for this reason, we can apply this ABI mechanism to any kind of library, for example introducing an ABI for POSIX programs by defining a symbol file for the libc. What comes out of this is that we can now build targets without even having the libraries that the target uses. We still need the header files, of course, the API-level contract between library and user, and we need this ABI, but we don't need the library implementation.
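Genode's tool emits a plain assembly file of directives; as a rough sketch of the same idea expressed in a C++ source file via GCC's file-scope asm (the symbol names are hypothetical), a stub like this, compiled with g++ -shared, yields an abi.so that exports the contract but contains nothing of substance:

```cpp
// abi_stub.cc - sketch of an ABI stub library (hypothetical symbol names).
// GCC's file-scope asm lets us express the same assembler directives that
// the tool generates from a symbol file: the symbols exist for the linker,
// but there is no code or data of substance behind them.

asm(".text                                        \n"
    ".global _ZN10Connection7upgradeEm            \n"
    ".type   _ZN10Connection7upgradeEm, @function \n"
    "_ZN10Connection7upgradeEm:                   \n");

asm(".data                                        \n"
    ".global some_global_config                   \n"
    ".type   some_global_config, @object          \n"
    ".size   some_global_config, 64               \n"   // size for copy relocations
    "some_global_config: .skip 64                 \n");

// Build sketch:  g++ -shared -fPIC -o abi.so abi_stub.cc
// Targets link against abi.so (the contract); at runtime, the dynamic
// linker loads the real library, which provides the same symbols with code.
```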
The immediate benefit is that we can optimize our workflows with Genode a bit more. In the past, Genode had one build directory per kernel and architecture, for each combination. So there was a huge number of build directories to deal with, and if we wanted to reproduce a scenario on another kernel, we had to rebuild the whole software stack, which took a long time. Now we introduce these new, unified build directories. They are specifically created for one hardware platform, but the kernel doesn't matter anymore. All kernel-independent components, which are almost all components, including device drivers and file systems and so on, are automatically linked dynamically. There are only a few components left that are specific to the kernel: in particular the dynamic linker, of course, also the core component, on some kernels the timer driver, and some optional components like VirtualBox that use special kernel features. They are exceptions to this scheme, but they are always named after the kernel, and thereby they can coexist in the same build directory. So the concrete kernel comes into the picture only when we actually execute a scenario.

I can maybe show this here. For example, I have here a build directory, and I can say: okay, I want this Genode demo, and I want to run it on Linux. And you see that now the build system visits all the libraries; yeah, it's far too quick now. But I can show you a different view of the system. For example, when I open this window here and do a ps, we see that it really runs on Linux. You see here, all the Genode components are actually running as Linux processes on the system, and we could, for example, attach GDB to any of them. And here in the output, you can see that it uses ld-linux.lib.so as the dynamic linker and then starts everything up. If I want to run the same scenario on the NOVA kernel, I just tell the build system that I want the NOVA kernel, and you see that the build system doesn't need to do anything, except at the last step. If you look at the... yeah, you see here, NOVA has already started. This is now spawning a QEMU instance, so it takes a bit longer, but you see the same scenario running here, now on the NOVA kernel, on QEMU. And if you look at the output, you see at the beginning the NOVA hypervisor starting up and so on. And if I want to run the same thing on seL4, I can just... so this is a more recent development, it's not entirely stable, but let's give it a try. You see there's no time spent rebuilding specific things, and all the kernel-specific details of how a system is integrated and booted are not a concern. You see a few more warnings because we have more debug information here, but now the same scenario works on seL4. SuperTux is a good example; this is another thing that I can just try out, and it works. So my children can now play SuperTux on seL4, which is, I think, an accomplishment.

Okay, the future. Now, with this ABI mechanism in place, we can take the next steps, and I think they are quite exciting. Because of the distinction between API and ABI and the introduction of these terms, we can actually define packages that are really loosely coupled. For example, a target package only depends on a bunch of API packages, not on specific implementations of those APIs. So system updates become more lightweight, because we don't need to fetch all the transitive dependencies of libraries, only the dependencies on APIs, which are much more stable than the implementations of those APIs. And of course, the distribution of binary packages becomes much simpler, because they can just be provided per architecture, not for a bunch of different kernels. We can, as I already mentioned, introduce multiple levels of ABI stability. For example, if we say we have another ABI in the system, which is POSIX, we can actually change the Genode system, including the Genode ABI, but retain compatibility with all the POSIX applications like GCC and so on. They don't need to be recompiled, even if the Genode ABI changes. And this allows us to work in two different directions. First, we can harden the foundation, now that we have a contract between the Genode applications and the Genode framework... I have to stop, yeah, okay. Oh, sorry. Okay, so then, thanks for your attention.