So our next speaker is someone who will be talking about Sculpt. Thank you for having me here. So I must say I'm quite excited about this talk for several reasons. First, there are many things that can go wrong, so please bear with me if there is anything. I have to excuse myself: while Samuel was speaking, I was troubleshooting things because something went wrong with the network, so I tried to find a workaround. So let's cross fingers. And the second thing is, I'm very excited because this talk somehow marks the transition of Genode from a project for really tech enthusiasts towards a project that captures the interest of actual users. So in autumn last year, we finally made the jump with our whole team from Linux to Genode. I did this some years back already, but that was really a kind of masochistic experience. But now the system was good enough to invite my colleagues, and nowadays all of my colleagues are working on Genode, and this is basically the system that I want to present to you today. So in the talk, I will first explain the idea behind the system that runs on this laptop here. I will go through the steps to enable networking and storage, so let's see how that goes. And then I will give you some idea of how the Sculpt development is going regarding the extension of the system. So first, what's the concept of this Genode scenario? The idea is to have a really, really small static system on top of the microkernel, which fits easily, for example, on this USB stick. To give a number: it's less than 30 megabytes of an image. And the idea is that this could go into something like a BIOS or so. So the goal is that this static system is very small and won't be a moving target, so it can be stable over a longer time period. And it will basically enable the next steps on top of this to be dynamic and expandable.
And then, on top of this static system, we have basically three worlds that are actually quite separate from each other. On the left, you see the drivers world. It contains only the device drivers that are fundamentally needed to have some form of interactive system: I want to type in some commands, I need to see something, and I also need to store some data. These are the three fundamental things; without them I cannot use a computer. So the drivers for these kinds of functionality are there. There's no networking, no audio, no convenience things. Only the really, really bare-bone basics are there. But you'll see this is already complicated enough. Then, in the middle of the system, we have a subsystem called the Leitzentrale, just to be a bit non-conformist. This is basically a control center for sophisticated users. It's not meant for somebody who just points and clicks; you need a certain degree of sophistication to use it. That's why it's called Leitzentrale. You have these knobs, and you have to be a very educated person to use them. So this is something that gives you full control over your machine. You can do everything. You are the user, you are in power. And then, on the right side, we have something that you as a user can create and manipulate. The idea is that the Leitzentrale can be basically an interactive interface to shape the runtime dynamically. Okay, so let's take a look at this static system. What is in there? Basically, we need to multiplex the display among several components. Of course, we want the user to be able to interact with the Leitzentrale but also with the runtime environment. So we need a GUI multiplexer. This is the role of the nitpicker GUI server. It's a small component, I think it's about 2,000 lines of code, no dependencies whatsoever. So it's really a small component, and we can trust it.
Then there are two file-system components. These are just RAM file systems; they live in memory. One file system is used to capture reports. Basically, components on top of the static system can produce reports; they can speak about their state. For example, when I booted up the system, I wondered: ah, the screen resolution is a bit strange. I could interactively look at the report of the Intel graphics driver and see which connectors and which resolutions I have. So these reports all come in, and they are basically nicely put into this file system. The components do not need to know that there is a file system; they don't know the concept of the file system. They just know the concept of a report, they report something, and that's it. The counterpart is the config file system. This stores configurations, and these configurations are made available to the other components, but only as data blobs. So when a component starts up and requests a configuration, it will get a plain data blob, but it won't know that this is actually originating from a file. And the idea is that the Leitzentrale has access to the real thing, to the real file systems. All the other components see a firewalled kind of view, but the Leitzentrale can really get into the real system. And then we have some form of global policy that we can establish in this static system, for example, which key to press to enable the Leitzentrale. That's global policy. Okay, let's have a look at the drivers. What's in there? This is basically a kind of simplified picture, but it's still complicated enough. You see some concepts at work. First, there are some drivers we know we have to start. For example, we know there's a USB driver. We need a platform driver to access PCI devices.
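To make the report/config idea concrete, here is a hypothetical sketch of how the static init might route a component's report and configuration requests to the two RAM file systems. The component names and labels are illustrative, not the exact Sculpt configuration:

```xml
<!-- Hypothetical routing sketch: the component only knows "Report"
     and "ROM" sessions; the file-system backing stays invisible. -->
<start name="some_driver">
  <route>
    <!-- reports end up as files in the report file system -->
    <service name="Report"> <child name="report_fs"/> </service>
    <!-- the config is handed out as a plain ROM data blob,
         backed by a file in the config file system -->
    <service name="ROM" label="config"> <child name="config_fs"/> </service>
  </route>
</start>
```

The point is that the component on the left side of the route never learns that files exist at all; only the Leitzentrale sees the real file systems.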
We know that we can start a PS/2 driver. These devices are always there, so we can just start them. But for other drivers, for example the framebuffer driver, we don't know which driver to start, because this is something we need to detect at boot time. We want the image to be usable across various machines. Some machines have Intel graphics, so we want to start an Intel graphics driver. Another machine has just VESA support, so we want to start a VESA driver. Same for storage: there we have to decide whether to start a SATA driver or an NVMe driver or something like that. So these are decisions that must be taken dynamically. And the way this works is that we have a so-called driver manager, which is a component that does not touch any hardware resources or complicated things. It just gathers reports from different components, for example the report of PCI devices, the report of USB devices, and also the state of other components, and it generates configurations. For example, it generates a configuration for this component, which is basically an instance of Genode's init component. So on top of the init component at the root of the system, we have here another init instance, which is just a tool for us to have a dynamic runtime. And now the components on top are just some kind of minions of the driver manager. If the driver manager wants to find something out, it can spawn such a minion by just configuring this dynamic init, and the process will run. We don't really need to trust this guy, but at one point it will produce a report, and then we can look at the report. The trusted computing base is, of course, influenced by this small component, but this is just, let's say, 1,000 or 2,000 lines of code, and all the complicated stuff, like the Intel driver with 50,000 lines of code, is living here. That's basically the idea.
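As a rough illustration of what the driver manager emits, here is a hypothetical configuration it might generate for the dynamic init after spotting an Intel GPU in the PCI report. Component names and quotas are made up for the sketch:

```xml
<!-- Hypothetical output of the driver manager, fed as the config
     ROM of the dynamic init inside the drivers subsystem. -->
<config>
  <parent-provides> <!-- services forwarded from the static system --> </parent-provides>
  <start name="intel_fb_drv" caps="200">
    <resource name="RAM" quantum="40M"/>
    <route> <any-service> <parent/> </any-service> </route>
  </start>
  <!-- on a machine without Intel graphics, the driver manager would
       emit a "vesa_fb_drv" start node here instead -->
</config>
```

The driver manager itself never touches the hardware; it only rewrites this XML, and the dynamic init does the spawning.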
So all this complicated stuff has many details, but in the end what counts is that the whole subsystem provides a bunch of services, which are basically Genode session interfaces. From the outside, you can just use these services; you don't need to care about how those services come into existence. You can abstract this away and consider it a black box. You can use it; you don't need to care about the internals. So, next thing, the Leitzentrale. What's behind it? Okay, we need a way to interact with the computer. We could invent a new shell or something, but actually, over the last 20 years, we have really grown fond of Unix and the Unix shells and so on. So we want to use our beloved tools in our context as well. The idea was to just use a kind of user-level Unix implementation that we can use as a tool. This is just a nice front end, basically, so that we can interact with file systems, and we have a terminal, which is nice. We feel at home at terminals. Technically, the Unix programs that we run are just recompiled binaries, like a recompiled bash and recompiled coreutils, and we can run them on top of this kind of Unix kernel, called Noux. So this is basically a building block, and for the Leitzentrale, we use this building block in two instances. We have one instance that lives on the left side, which is the control Noux instance, which lets us change configurations and shape the system. And on the right side, we have the log Noux instance, which just gives us a quick indication of what's happening in the system. This is just to have a quick look whether everything is right or something goes wrong. And in order to integrate them into the system, we have this stack of components here. Each component is actually a different address space, really. We have a component that translates the terminal session interface into a framebuffer session interface for output and an input session interface for input.
Then we have another component that translates those interfaces into the nitpicker session interface. Then we have another component that takes the nitpicker session, provides the nitpicker session interface itself, and implements a fading feature, which lets us fade the client in and out: a fader. So we can basically compose those components to build these nice stacks, and we can reuse them. And those live inside one init instance here. Okay, and then in the static system, we have this nitpicker instance that I already explained, and the report FS and the config FS. And those Noux instances have those file systems mounted under the mount points config and report. The log Noux has just the report file system mounted. That's basically the idea. And then we have a global policy, which influences the fading of the Leitzentrale. That's it. And I can invoke it by just pressing F12. So you see, this is now the Leitzentrale. On the left side, you see the control instance; on the right side, you see the log instance. It's very small because for me it's just a quick glance whether something goes wrong; in normal life, it's not so interesting. I can also look at the log by saying I want to browse it in Vim, for example, and see the same messages. And now I can start tweaking the system. We are still in this initial boot system; we have not touched any storage. Everything lives in RAM. So let us, for example, look at the configuration of the Leitzentrale. This is accessible inside the config FS as well, so we can open it. And, for example, maybe the font is too small, so let's increase the font. I can look around and see that I can configure the font size. Let's see, now we have a bigger font. I can actually control many aspects. For example, if I want to change the alpha value of this control thing, I can basically change this value, and you can see the effect immediately.
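The component stack just described can be sketched as init start nodes; this is a condensed, illustrative version with the session routes spelled out (names approximate, not the literal Sculpt config):

```xml
<!-- Rough sketch of one Leitzentrale stack: terminal on top,
     then the nit_fb adapter, then the fader, down to nitpicker. -->
<start name="terminal">
  <provides> <service name="Terminal"/> </provides>
  <route>
    <service name="Framebuffer"> <child name="nit_fb"/> </service>
    <service name="Input">       <child name="nit_fb"/> </service>
  </route>
</start>
<start name="nit_fb">
  <route>
    <!-- nit_fb talks nitpicker, but via the fader -->
    <service name="Nitpicker"> <child name="fader"/> </service>
  </route>
</start>
<start name="fader">
  <route>
    <!-- the fader forwards to the real nitpicker GUI server -->
    <service name="Nitpicker"> <parent/> </service>
  </route>
</start>
```

Because each stage is just a session interposer, the same stack can be reused for the control instance and the log instance alike.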
So, for example, if I want to change the position of this thing, I can modify these coordinates here. That's a bit too big. You see, you can even change the geometric composition. These are just components, and we can configure these components. We can tweak the system even at a low level. When speaking about tweaking the system, there are some things that you will surely need as a Vim user, for example. I hate this caps lock key, for example, and the first thing I always try is to remap caps lock to escape. That's the only use of this key I consider good. So right now, if I type something and then I hit caps lock, I see: ah, yeah, bam. It's especially bad when being inside Vim, because bad things happen all the time. So to change this key configuration, we have to look into the driver subsystem, because that's where all the input events come in, and they basically go through a so-called input filter. The input filter is a component that gets the user input from the different sources like PS/2 and USB. Those input events come in here, and then they pass a filter chain, and we can plug these different filters together by means of the component configuration. So, for example, here we can remap some physical buttons. Here we have a character generator that attaches symbol information to low-level key codes. Then we can merge different input sources, and we produce one output. And this output then goes to the nitpicker GUI server. So I can, of course, edit this... oh, caps lock again. Okay, I can edit this as well. I can show you, this is part of the drivers configuration. So if I open the drivers configuration, you can actually see it: the input filter is basically over there. And here we see the input filter asks for a configuration, which is this one here. Then we hand out a ROM session from the parent with the label input_filter.config.
So this is basically a way to split the file into smaller pieces. And so I can open this file here, and you can see, nicely commented out, this important option to remap caps lock to escape. So let's write it and look again at what happens now. I hit caps lock... ah, it works. Or, for example, this mouse here just moves a bit too slowly. Okay, I have a filter in here for mouse acceleration, so I can basically increase this value, and then the mouse is suddenly very fast. Yeah, and you can also see things like the filter for emulating a scroll wheel with the trackpoint. That is very important as well. So these are all things that live in this input-filter component, and you can actually tweak it interactively. If I put in some wrong value there, I may wreck the whole thing, but this is about full control, right? Okay, or things like the keyboard repeat rate. You see here that there is a certain repeat rate. If I want to decrease the rate, I change this value here, and you can see that the rate has changed. If I want to delay the repeat, you see that it now takes a longer time before the repeat starts. So it takes effect immediately. And there are many other aspects of the system that I can modify the same way, like the screen resolution. What else do we have in the configuration? You see a bunch of things that I can modify. For example, I can also modify the nitpicker configuration. I can say, okay, I can change this background to something new looking. And, for example, for the slides here, there's a policy for the slides, and I can say the slides should have their coordinate origin at the mouse pointer. So I can say something like origin is pointer. Let's see what happens then. Ah, then the slides are basically attached to the pointer. So you can interactively change the configuration of nitpicker and shape what's there.
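A sketch of what such an input-filter configuration can look like, written from memory (element and attribute names may differ between Genode versions; the numeric values are illustrative):

```xml
<config>
  <input label="ps2"/>
  <input label="usb"/>
  <output>
    <chargen>
      <remap>
        <merge>
          <!-- pointer acceleration applied to the PS/2 source -->
          <accelerate max="50" sensitivity_percent="1000" curve="127">
            <input name="ps2"/>
          </accelerate>
          <input name="usb"/>
        </merge>
        <!-- the caps-lock-to-escape remapping discussed above -->
        <key name="KEY_CAPSLOCK" to="KEY_ESC"/>
      </remap>
      <!-- keyboard-layout map would follow here; repeat rate: -->
      <repeat delay_ms="230" rate_ms="40"/>
    </chargen>
  </output>
</config>
```

Writing this file in the config FS immediately reconfigures the running input filter; there is no restart involved, which is why the effects show up instantly.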
And here you see, for example, also the magic keys that I use for the slides: the F1 key, or F11 and F12, and caps lock. Okay, so now let's look at the third part of the system, the actually interesting part: the runtime. What happens there? Actually, there's no single answer to this question, because it's up to you. Of course, I have some proposals, so let's look into things like storage and networking. Okay, we want to store some information, so we need to use the disk. You already know that the disk driver lives in the driver subsystem, so we have something like a default block device over there. That's fine. And the driver actually reports to us which block devices there are. So I can basically say I want to get the report of the block devices, and I see I have this AHCI block device with this information here. So I can gather this information from the drivers. The same happens if I look, for example, at the screen resolutions over there. I can say, okay, I want to see what the dynamic parts of the drivers are. We have an Intel driver here; what is the connector information? And I see it here. So I can see all the information that the drivers give me. So we have this block device, and we know that it exists. The first thing we need to do is wipe Windows off this machine, because it typically comes with Windows. So we have to format the disk, of course. And how to do this? We don't need to invent anything new. We just want to use an ext2 file system and the e2fsprogs tools and so on. So this is basically the scenario: we have a subsystem with a Noux instance, which is this tiny Unix thingy, and we have bash, coreutils, and e2fsprogs mounted over there. Then we have a block node mounted, which is basically a Genode block session. When the block session is requested, the request travels all the way down to this init component. So this init component can apply a policy, and this policy is in your hands.
Then the session request passes on to the static system. There can also be a policy there. For example, if you have certain security considerations and want to block access to a certain device, you can do this here. And then the session request travels to the block service provided by the driver subsystem. Here, the dynamic init can again implement a policy, for example a heuristic for which device is the default block device. If there are multiple block devices, it may look at some magic values on the block devices to find out which one to hand out. And only if all these policies agree does the block session get established. So now, how does such a runtime look? Let me show you how this looks in the configuration language. There's this block config, and here we see basically one-to-one what's in the picture. We have this nit_fb component, which is basically the adapter between nitpicker and the low-level framebuffer and input interfaces. We have a terminal implementation, which implements escape codes and so on, and uses the framebuffer and input as a back end. And then we have a Noux instance, this tiny Unix thing. In this instance, you see that I mount some tar archives, coreutils and so on. I also have this dev directory, and inside this dev directory, I have a block node. This is basically a block session. I can give it a label, and this is basically the label of the block device we want to get from the driver subsystem. And then I spawn a bash shell with these environment variables. That's it. This is the entire configuration of this tiny Unix system. It fits on one screen, even with this large font. Okay. So let's load this now into the runtime. Loading this into the runtime is quite easy, because the runtime is also configured by such a config ROM, and we just have to change this config ROM. So what I do is copy this file, runtime block, over the config file. This is basically a file that the runtime listens on.
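The Noux part of such a disk-formatting runtime might look roughly like this; the archive names and the block-session label are illustrative, and the exact fstab node names may differ between Genode versions:

```xml
<!-- Condensed sketch of the Noux start node in the block runtime. -->
<start name="noux">
  <config>
    <fstab>
      <tar name="bash.tar"/>
      <tar name="coreutils.tar"/>
      <tar name="e2fsprogs.tar"/>
      <dir name="dev">
        <!-- a Genode block session; the "default" label lets the
             driver subsystem pick which device to hand out -->
        <block name="block" label="default"/>
      </dir>
    </fstab>
    <start name="/bin/bash">
      <env name="TERM" value="screen"/>
    </start>
  </config>
</start>
```

Everything bash can reach is listed in this fstab, which is exactly why the resulting system can do nothing beyond handling that one block device.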
And if I do this, you can see here... let me see this coming up. So this is now our new runtime, our special operating system that's only useful for formatting a disk. If I look at the bin directory, you see the e2fsprogs tools are just there. And if you look at the dev node, you see this block device, exactly what we configured. I won't format the disk right now, because I have some data on the machine that I would like to keep. But you can also, for example, do a file-system check here if something goes wrong. You get the idea: you can now handle this block device. But that's it. You cannot do anything more, and you also do not see anything more. That's also nice. Okay, now let's move on. What happens now? Now we have a formatted disk; we have wiped off Microsoft Windows, so we are almost happy. We also need to access some files on the disk, or we want to store some files on the disk. So how do we get there? The answer is to also use a Noux instance as a tool. This time, we just mount two file-system sessions here under two mount points: rw for reading and writing, and config, because we want to look into the config file system as a user as well. And you know this stack already. On the other side, we see a file-system subsystem, again an init instance. You can see that the nesting of these init instances is very easy. Here we have a virtual file-system server, which basically provides a file-system interface into which we can plug different file systems. In this case, we plug in a rump kernel as a file system. This is basically a library that we can plug into the VFS, and this, in the end, provides us a file-system service. With this, we can actually copy data from, for example, our config file system to the disk and so on. So let's see if this works. I first wipe away the block runtime by copying the empty runtime over the config. So now, if you look, it's gone. And I'll copy over this so-called fs runtime.
Shall we have a look first? I think we should. Config, runtime, fs. Just to get a picture of how this looks. Now we have basically the file-system instance. Remember, this is now living inside an init instance, so we don't need to know any details about the rump kernel, for example. We can just instantiate an instance of init and say: okay, fetch this configuration from this file over here. And this guy provides us a file system. What's behind it doesn't matter too much. There's a Noux instance here, which is also abstracted away behind this init instance, so we can reuse it for other runtimes easily. We can use these as building blocks. And that's it. That's basically a new runtime, and I can just copy this over to this one. And we see again another Noux instance at a slightly different position. What we see here is that we now have the rw files, so let's have a look at what's in there. This is basically our disk. You see here I have something; I basically had to remove some stuff here for the presentation, and I want to create it again later on: download and public. So basically, I tried to reduce the stuff on the file system; all the stuff that's normally there I have now moved to backup. And what's in the vm folder? There are basically virtual machine images, so not interesting for now, but these are the things I wanted to preserve. And now what we also have is this config file system. This is basically what we had also in the Leitzentrale, so now we can access it conveniently from this normal runtime as well. And we can copy files back and forth; that's basically the one big use case for such a system. I will right now copy some file from the file system to the config FS, because I discovered some trouble with networking. When we come to the wireless topic, I will copy a customized configuration over the original one. So that's one of these little quirks. So let's see.
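The VFS-server part of this file-system runtime can be sketched as follows; the plugin and policy attributes are written from memory and may differ slightly from the actual component:

```xml
<!-- Sketch of a VFS server with a rump-kernel ext2 plugin. -->
<start name="vfs">
  <provides> <service name="File_system"/> </provides>
  <config>
    <vfs>
      <!-- the rump kernel appears as just another VFS plugin -->
      <rump fs="ext2fs" ram="8M"/>
    </vfs>
    <!-- hand out the whole tree, writeable, to the Noux client -->
    <policy label_prefix="noux" root="/" writeable="yes"/>
  </config>
</start>
```

Because the rump kernel is hidden behind the generic file-system session interface, the Noux side needs no knowledge of it at all.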
I have this download runtime config. Yes. And I copy this to runtime download, because the standard version doesn't work so well. And this other one can wait. Okay. So let's keep our fingers crossed that this works. For connecting to a wireless network, the situation looks actually a bit similar. We have at runtime a system that is quite unaware of the real details of all the driver stuff. We just want to launch a wireless driver, and in the end, this wireless driver should provide a NIC service. The wireless driver, of course, needs access to the real hardware. But it does not have direct access to the real hardware; it has to ask for it, and it does so by opening a platform session. The platform session is initiated through these init instances and eventually goes to the platform driver. You can imagine a platform session as something like a virtual PCI bus, where the client sees a bunch of devices according to a policy that resides in the platform driver. And once the client connects, the platform driver creates a new so-called device PD, which is basically a protection domain for the I/O device, targeted at DMA transactions. So this is basically how IOMMU support is integrated in Genode. The client doesn't really need to know about the existence of the IOMMU or how to handle it; this is transparently handled by the platform driver. The platform driver has, for the different devices, also the notion of address spaces, and these are in reality the IOMMU page tables. Okay. So let's try to start this in an example scenario. I have this so-called download scenario. Let me first give you a look at how this looks. I have this file system here, because it's convenient to also have Noux available; you also see the Noux instance over here. Then we have a chroot component that goes to the file system and hands out a part of this file system, which is specified here, as a new file system.
So basically, you can reduce the namespace of the client to such a download file system. Of course, we need a download location, so let's look into our disk. This is basically the disk, so let's create a download location over here. This is where we want to download files later on. Okay, not so important right now, but this is the important thing: the Wi-Fi driver. So let's uncomment this and have a look at what happens. Just as a word of caution, this guy is quite chatty. I have to copy this, of course, to the real configuration, so I copy this download runtime to the configuration here. Okay. So now you see some red messages; bad-looking things are not so bad, I'm used to them. And you see that there's some guy alive here. And maybe, if we are lucky, the driver has reported some information for us. So let's have a look into the reports of the runtime, in the Wi-Fi subsystem in this other init. And there we have this information here: the wireless access points. Okay, that's quite nice. So we see that the wireless driver has come up and can communicate over the airwaves. And now we want to connect to some wireless network. The connection, according to this picture here, let me show it again, is configured by using this Wi-Fi config here, which is requested from the config FS. So we can basically go into this file, wifi.config. And then we can insert... first, I think, dual stack is the one that works. Let's see if I can write this. So now you see that our driver has reported to us that it reloaded the wpa_supplicant configuration. And so let's look whether something has changed. We have this report, runtime wifi wlan state, and it says connected. So we are actually connected to the wireless network using this passphrase here. Okay, now we want to download a file, which is basically the step down here. We have already connected with the wireless driver.
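The Wi-Fi configuration itself is just a tiny XML snippet in the config FS; here is a hypothetical example with made-up network credentials (the exact element and attribute names have changed across Genode releases):

```xml
<!-- Hypothetical wifi.config: SSID and passphrase are placeholders. -->
<wifi_config ssid="my-network" protection="WPA-PSK" passphrase="secret"/>
```

Writing this file is what triggers the driver to reload the wpa_supplicant configuration and attempt the association; the resulting state shows up in the wlan-state report.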
Now, normally, you could attach a downloading application directly to the NIC interface. But just before the talk, I discovered that we had some problems with the DHCP server of the FOSDEM network. So I just plugged the NIC router in between, as a bump-in-the-wire component. It implements DHCP for our client, and it can also handle the FOSDEM DHCP. So this is basically a kind of small hack. More about the user-level networking later, from Martin. So now let's start this guy, and if everything goes right, then we should, at some point, see an IP address coming up from the real network. Let's see if something is going on here. Ah, we got an IP. That's nice. This is the IP of the NIC router, which is basically a NAT, a virtual NAT. And now, once we have this IP address, we can connect the client, the fetchurl client, to the NIC router. This guy is basically a Genode-ised version of libcurl, developed by Emery, and it just fetches the Genode license, just as a test. So let's uncomment this and see what happens. So fetchurl has got an IP address; this time, it's from our NIC router. And let's see if the file magically appeared in our download location. There it is. So we can look into the Genode license, and we can browse the internet. Cool. So basically, that's the basic functionality of accessing a wireless network. The next thing is, of course, that I want to make these customizations permanent. I don't want to configure the keyboard layout after each boot; I also want to use a default wireless configuration and so on. The logical way to do this is using this Noux scenario with the file system. So basically, what we do is manually copy the content of our config file system, the in-RAM file system, to our real disk, and then we have a copy of our current state. And I'm doing this right now.
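A rough sketch of what such a bump-in-the-wire NIC-router configuration could look like; the domain names, IP ranges, and rule syntax are illustrative and written from memory, not copied from the demo:

```xml
<!-- Hypothetical NIC-router config: NAT between the Wi-Fi uplink
     and the download client. -->
<config>
  <policy label_prefix="fetchurl" domain="downlink"/>
  <domain name="uplink">
    <nat domain="downlink" tcp-ports="100" udp-ports="100"/>
  </domain>
  <domain name="downlink" interface="10.0.1.1/24">
    <!-- the router answers DHCP for the client itself -->
    <dhcp-server ip_first="10.0.1.2" ip_last="10.0.1.200"/>
    <tcp dst="0.0.0.0/0"> <permit-any domain="uplink"/> </tcp>
    <udp dst="0.0.0.0/0"> <permit-any domain="uplink"/> </udp>
  </domain>
</config>
```

On the uplink side, the router acts as a normal DHCP client towards the venue network, which is exactly what works around the problematic external DHCP server.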
Basically, let's... oh, that's not the right one. This one. So basically, I create a config directory on the real disk. And then, for example, I want to copy my modified version of this input-filter configuration into config. I also want to copy the modified version of the WLAN configuration into this config. I also have these tweaked runtimes I just mentioned, so I just go into this config directory, create a runtime subdirectory, and copy these tweaked versions: the download runtime config, and, as another one, the fs runtime. Okay. Are there other customizations? The framebuffer configuration is another one I tweaked at startup. So I have this framebuffer configuration, and I can also preserve it now by copying it over here. So now I basically have a copy of these things, but they are living on the disk. If I just boot the machine, nothing happens with them, of course. So we also need a way to bring these customizations back into our system after boot-up. And of course, there's another runtime we can use for this: we can spawn a Noux runtime that just performs these copy operations. So basically, we have the file system here, we have our config FS, and this time we do the reverse. To give you a look at how this looks, we can look at this load runtime here. We see we have this default file system over here, and we have a Noux instance. And here you see that I start a bash shell as a login shell, so it will read the bash profile. And in the bash profile, you see that I just stupidly copy these files over to our in-RAM config file system. So it just reuses the same building blocks we have. And now we have got our configuration on our system. Okay, so shall we try this? Yeah, we should. Okay, so to reset the machine, I can load the config of this system state. This is a global system state that I can basically modify.
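The bash profile of such a load runtime is nothing more than a handful of copy commands; this is a sketch with illustrative paths (the actual mount points and file names on the demo machine may differ):

```shell
# Hypothetical .bash_profile of the "load" runtime: copy the
# customizations preserved on disk back into the in-RAM config FS.
cp /rw/config/input_filter.config /config/
cp /rw/config/wifi.config         /config/
cp /rw/config/fb_drv.config       /config/
cp -r /rw/config/runtime          /config/
```

Since writing files into the config FS immediately reconfigures the corresponding components, running this shell once after boot restores the keyboard layout, Wi-Fi setup, and tweaked runtimes in one go.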
So I can type in reset, write the file, and now it's gone. So now the whole thing starts again, and I hope that the next steps will work as well. Here we are. But one thing strikes me: the framebuffer configuration has not worked out so well. Let me see. This has not... okay, I have to do this again. Ah, no! Sorry, I forgot to load the load runtime. We still have the initial boot state. Okay, silly me. So I have to copy this load runtime into the current runtime. And now we still see nothing. Okay. But I can have a look at the config framebuffer. Ah, this is actually not looking so bad. Okay... ah, there's some small quirk here, so I have to save the file again. There's a small problem in the graphics driver; I have to look into it. Okay, so now we are here again. And we can look into the config FS, for example at this wireless configuration, and now we have our wireless configuration back. So each day, when I come into the office, the first thing is that I load this configuration, and then I have this customized version; I have my keyboard layout and so on. Okay, so let's go on with the talk. I have to go forward. These slides are basically also now part of the boot image. The next topic is, of course, extending the system. I already explained the download scenario, so we can go over this. The next thing is installing software. Of course, the software on this machine is not so exciting yet, so we want to extend the system. This has been the work of the last year, basically: to create something like a package manager. The idea is that we want to download some archives from some internet server. So we need a storage location for those downloads; this is just some place on the disk, like a download folder. And then we want to extract this information into a kind of blessed place which we trust, which is basically the location for the installed software.
And then, as another ingredient, we need the URL for the download, and we also need a public key from the one who offers the download so that we can verify that everything is fine. And this is basically how the process works. First we have to ask: what are we currently missing? We want to install some software, so we want to answer the question: which pieces are missing? And there is basically a tool for this, a component that we can use to query this information. This tool gives us a list of archives that we need to get. And when we look into this list, we also see a hint about the origin of each software package. But we don't know yet where to download it from, so we have to query this information as well: we obtain the download URL and the public key for these archives. There may be archives from different sources, so there can be different public keys. And then we fetch them. The fetching is done by the fetchurl tool that we have just seen in action. So now we have downloaded these archives and signatures, and we also have the public key. This all happens in the public location, where we can verify that everything is fine. And if it is, we extract the archives, and then the software is installed. And now we can look again whether our dependencies are satisfied, and if not, we do another round. That's the idea. Okay. So now comes the part that may go wrong. Let's see. I had to modify the update subsystem here to also insert a NIC router. I have never done this before, so basically I just give it a try. Of course we could do all of this by hand by editing these XML files, going through all these different steps. But if you have a hundred software packages, this would take a while.
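The "what is missing?" question described above is essentially a dependency-closure computation: starting from the archives you want, follow their dependencies and collect everything not yet installed. A minimal sketch, with an invented data model and made-up archive names loosely styled after depot paths:

```python
# Sketch of the "what is missing?" query: given each archive's
# dependencies and the set already installed in the depot, compute
# the archives still to be downloaded, dependencies included.
# Archive names and the data model are invented for illustration.
def missing_archives(wanted, deps, installed):
    missing, queue = set(), list(wanted)
    while queue:
        archive = queue.pop()
        if archive in installed or archive in missing:
            continue                       # already handled
        missing.add(archive)
        queue.extend(deps.get(archive, ()))  # follow dependencies
    return missing

deps = {
    "genodelabs/pkg/backdrop": ["genodelabs/src/backdrop",
                                "genodelabs/raw/backdrop"],
    "genodelabs/src/backdrop": ["genodelabs/api/base"],
}
installed = {"genodelabs/api/base"}

print(sorted(missing_archives(["genodelabs/pkg/backdrop"],
                              deps, installed)))
```

Running the same query again after a round of downloads shrinks the missing set, which is exactly the "do another round" loop mentioned above.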
So the idea is to have a download-manager component that basically implements a state machine. It uses a dynamic init instance to spawn the components as needed, remove them again, and so on, and thereby automates the workflow that we would otherwise need to do manually. Okay, let's just try this. I copy the update config into the runtime... I think at the very beginning. Okay, the network driver is starting, you can see this here. And I think one problem that may arise is that our fetchurl tool will time out because it doesn't get an IP address. It doesn't look so bad; the NIC router has an IP from the wireless network, that's also quite nice. But let's have a look at this download. Now it's basically stopping at this point. Oh no, what's this? It's not the right file, I think. I have to look. Maybe I have copied the wrong file. Update... NIC router... oh, that's the wrong one. This is bad. That's not good. Let me have a look. Update config. Did I save the wrong file? I saved the wrong file. That's too bad. Okay, let's skip the download because I don't want to mess around too much; otherwise we will run into trouble with the time. It's a bit sad. So I will now pretend that we downloaded everything, by basically moving in my backup of this public location, and also of the depot. This is normally where the results of the download are stored. Sorry for this. What happens under the hood, when it works, is that the download manager reconfigures such a dynamic init instance. In the first step, we have this query component, which asks our depot, our blessed software installation place, about missing dependencies. In the next step, we use the query tool to obtain the download information, the public key and the URL, from the depot. So the download manager now knows where to get the archives from.
Then the download manager changes the configuration by spawning a fetchurl tool, passing in this URL information and also the archives that we want to download, actually a batch of archives. Now this is dangerous, of course: we have a component that accesses the network and also modifies the storage. For this reason, we use a chroot component to limit its access to this particular directory. But we still need to consider that the network may be an attack vector; an attacker may attack this component because in reality it's quite complex: there is libcurl and libssl inside, so the attack surface is quite high. But from the perspective of the download manager it's not so bad, because fetchurl can never corrupt the state machine in here, it can never corrupt the depot; it can only mess things up inside the public place, which is public anyway. In the next step, we basically kill the network session, so the network is gone at this point, and the download manager replaces fetchurl with the verify tool, which is based on GnuPG and libgcrypt and so on. This time, the verify tool has only read-only access to the public directory. So even if there is a prepared archive or signature or public key that, hypothetically, triggers a buffer overflow in here, the reach is very limited because this tool cannot really do any harm. And once this tool has confirmed that the signature is correct, we extract the archives, and here we create another chroot instance inside the depot. We have a subdirectory for the origin of the downloads; you can have multiple origins, and each one has a different subdirectory. And the worst thing that can happen is that the extraction tool pollutes something in that origin's subspace of the depot. So this is also a red line, it's dangerous, but using the chroot we can actually limit the potential damage.
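The whole query, fetch, verify, extract loop can be condensed into a toy state machine. This is only a conceptual sketch: everything runs in one process, the "server" is an in-memory dictionary, and a SHA-256 checksum stands in for the detached GnuPG signatures, whereas the real download manager spawns separately sandboxed components for each step:

```python
# Toy version of the download manager's state machine: it loops
# query -> fetch -> verify -> extract until nothing is missing.
# All names and the data model are invented; a SHA-256 check
# stands in for the GnuPG signature verification.
import hashlib

remote = {  # hypothetical server: archive name -> content
    "pkg/backdrop": b"backdrop-data",
    "src/backdrop": b"backdrop-src",
}
signatures = {name: hashlib.sha256(data).hexdigest()
              for name, data in remote.items()}  # stand-in for signatures

depot, public, log = {}, {}, []

def query(wanted):                   # step 1: what is missing?
    return [a for a in wanted if a not in depot]

def fetch(archives):                 # step 2: the only networked step
    for a in archives:
        public[a] = remote[a]        # lands in the public place
    log.append("fetch")

def verify(archives):                # step 3: network already gone
    for a in archives:
        if hashlib.sha256(public[a]).hexdigest() != signatures[a]:
            raise ValueError(f"bad signature for {a}")
    log.append("verify")

def extract(archives):               # step 4: write into the depot
    for a in archives:
        depot[a] = public[a]
    log.append("extract")

wanted = ["pkg/backdrop", "src/backdrop"]
while (missing := query(wanted)):
    fetch(missing); verify(missing); extract(missing)

print(sorted(depot))
```

The security argument from the talk maps onto the step boundaries: only `fetch` touches the network, `verify` is read-only with respect to the depot, and `extract` can at worst pollute one origin's subdirectory.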
And again, we use a lot of third-party software here, like libarchive and liblzma, so we do not implement these things anew, but we embed these tools into our Genode framework. Okay, and the last thing is deploying this software. Now we have a bunch of software living inside the depot, stored on a file system, so we need a way to deploy it inside the runtime. And there is also a runtime for this: we have this deploy runtime, so let's try it out. We see that nothing happens, but that's just because I have commented out a lot of things. So let's look at the deploy configuration, which is a bit more abstract than the configurations we have seen, but it shows the same patterns. For example, we now have an installed package for a backdrop, so if we want to instantiate this backdrop, we can do that. Now we have a background image. The next thing is, I want to have this Noux instance appear in a window, so I start a window manager, which is also fetched from the depot. Everything here is now basically coming from these packages. It can be changed at a later point; it does not need to be in the initial boot image. So now we have started a window manager. We don't see anything because we don't have a window yet. So let's create one, and now we have a Noux instance inside a window in a running window manager. And again, you see these routing rules; they tell the system how to route each resource request. For example, we say that this Noux instance may also access this configuration file system here, so we have to permit this. Or if I want to see the same slides that I have in the boot image, I can start this too, and you see the slides over here and I can browse through them: the same PDF viewer, but now coming from a package.
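The routing rules mentioned here behave like a small allow-list: a session request is only satisfied if the deploy configuration names a route for it. A toy sketch, with made-up service and component names:

```python
# Toy model of the deploy configuration's routing rules: a component's
# session requests are matched against explicitly permitted routes, so
# a service is only reachable if the user granted it. All names here
# are invented for illustration.
routes = {                               # hypothetical routes for a Noux instance
    "File_system/config": "config_fs",   # permit access to the config fs
    "Gui":                "wm",          # windows go to the window manager
}

def resolve(request):
    """Return the provider for a session request, or refuse it."""
    if request not in routes:
        raise PermissionError(f"no route for {request}")
    return routes[request]

print(resolve("Gui"))   # permitted: routed to the window manager
```

Anything without a route, say a network session, is simply refused, which is why the user has to permit each resource explicitly.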
Or I have this small software-rendering demo over here that I can launch. Maybe I can dim the Leitzentrale a bit so you can see it better. Okay, so you see this here. And here we can also change things: we can, for example, change the shape to some default, or say that we want a shaded way of painting this. So we can also modify this deploy configuration on the fly and change the system. And then we have this notorious Wi-Fi component. No, the Wi-Fi component is not a problem, it actually works quite well. But just to wrap up the talk, let's start it again. Okay, it's chatty, but it's looking nice. So this is now also a Wi-Fi driver that comes from the packages, not the one from the boot image. Let's assume we have flashed this boot image into a firmware, so it contains a maybe two-year-old Wi-Fi driver. We can still download a new one as a package and use it just as a component. And now, to wrap things up, let's also start a VM. This is a VirtualBox instance, and you see a new window coming up. I can make this a bit more visible. This is basically the VM that I am using for normal work. I hope that it comes up now here in the demo. So basically you can put these ten lines of configuration snippet into this deploy configuration, and you can instantiate such an instance. It looks a bit messy now with the boot-up, but eventually it will work. And here we are. So I can now log in and do some work with Linux. And this is also coming from the installed packages. Okay, so I should wrap up, I think. Five minutes, okay. So this is the current version, and this is the version that we are using in our offices right now.
There's still a lot of manual typing, but you can hopefully see the potential behind it: you can now shape Genode this way into a lot of different shapes and forms and do interesting things with it. For this year's roadmap, we envision four steps. The first step is at the end of February, so in a few weeks. We want to release this version as part of the official Genode repository, so in the master branch you can basically just use it, and we will also publish all the packages that are needed for this kind of thing. So it will be quite easy to build such an image. And I will also write up some documentation for this, basically the steps that I went through right now, in a readme file. This is just for the early adopters, for the few people who are eager to try this out. And then of course we want to become more and more inviting for end users, and one step is Sculpt for The Curious. There's a category of users who won't ever bother with installing the Genode tool chain and learning about all this tooling; people are always saying, I have to read a book to understand it, and this is not feasible. This will be a version with an easy image to download, so you don't need to compile anything yourself, and you can extend and play with the system in a similar fashion as I did right now. It will hopefully also come with some more polishing, like making the font of the Leitzentrale adapt to the screen resolution, so that you don't have these tiny fonts on a high-resolution display, these kinds of things. And then there will be the next version in August, which will be another step. In the current step we still torture the user with having to learn Vim, so we basically lock out a lot of people who don't want to learn Vim, and here we want to also invite those users by providing a kind of graphical tool. You have seen this deploy configuration.
So having a tool that just creates these XML snippets and also monitors some XML reports is not out of the question; this is something that's really within reach now. So we want to do a first experiment with a visual tool where you can plug these components together as a kind of boxes: visual composition. And then, at the end of the year, we want to really leverage the package management, which allows basically anyone to participate as a provider of Genode packages and Genode subsystems. Someone who ports some software can make it available by just publishing the public key and the download location, and any other user can put this information into their own machine, download the packages, and use them. So this will be a more federated way of sculpting, a community experience. That's basically the idea. Okay, so that's basically it. Thanks for your attention, and sorry for the glitches with the downloading. Okay. Yeah, thanks. What happens when multiple components modify these configuration files in parallel? Yes, so the question is what happens when multiple components modify the configurations in parallel. The answer is that we try to avoid this, because, similar to merge conflicts in Git, there would be no good way to automate the resolution. For this reason we try to avoid having components do this via file-system accesses. Instead, we try to do this at the level of these ROM modules. For example, we have a ROM filter which takes some XML information from ROM modules and uses this information to create a new ROM module, so it's more like a transformation of one thing into another. And in most cases, the two parties that want to modify a configuration want to modify different aspects of it, so what you would do is express these different aspects using different formats and then have a ROM filter merge them into one coherent version.
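The ROM-filter pattern from that answer can be sketched as follows. The element and attribute names are invented, and the real component operates on Genode's ROM sessions rather than plain strings; this only illustrates the merge idea of combining two independently provided aspects into one configuration:

```python
# Sketch of the ROM-filter idea: two parties each provide their own
# aspect of a configuration as a separate XML snippet, and a filter
# merges them into one new module instead of both parties writing
# to the same file. Element and attribute names are invented.
import xml.etree.ElementTree as ET

layout_rom = '<input layout="de"/>'       # aspect from one party
repeat_rom = '<input repeat_ms="250"/>'   # aspect from another party

def rom_filter(*snippets):
    merged = ET.Element("config")
    for snippet in snippets:
        # fold each snippet's attributes into the merged config
        merged.attrib.update(ET.fromstring(snippet).attrib)
    return ET.tostring(merged, encoding="unicode")

print(rom_filter(layout_rom, repeat_rom))
```

Because each party only ever writes its own snippet, there is never a concurrent write to a shared file, and the merge is a pure transformation that can be re-run whenever an input changes.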
How do you debug something like this? Suppose I just have a typo in my XML, like a missing closing bracket, all the way to actually figuring out which snippets of XML exist in the system. So the question is how to debug such a system, and I would have to talk for two hours to explain this to you. There are so many approaches. No, don't get me wrong, there are many approaches, but just to give you an idea: most of the stuff that went into Sculpt, like this whole package management, almost everything, I developed on the Linux version of Genode. Genode also runs on Linux, so I can run Genode just as a bunch of user processes on Linux, and I can just use GDB for this. So it's very convenient to debug things on this level. But of course, if you create a device driver, you have to use real hardware and run on a real microkernel, so you need a different approach. There's also a GDB monitoring component that you can plug between a subsystem and the underlying system to inspect and debug a component. There are many, many different debugging approaches, so that's a question I could elaborate on for a long time. Yeah. So I'm very new to microkernels, but the security benefits seem very clear, and it's nice that everything is isolated. I understand that you are familiar with Nix and that you considered using Nix as the package manager for Genode, and I'm very curious to know what motivated you to decide not to follow the functional software deployment model and instead create your own package manager. Yeah.
So first, we were really thrilled about... the question is, I have presented some parts of the package management here, and originally, like two years ago, we started off by looking at Nix, the Nix package manager. Especially Emery took a lot of steps to bring these two projects together, and we were really excited about Nix at that time because of the approach of rolling back the software to an earlier version, and we learned a lot from Nix. Nix is like using Git for the whole system, and this is just a brilliant idea, so we were quite excited about it. On the other hand, we found that Nix was quite complicated. In Genode we have the whole stack of the software, from the build system over all the tooling, the tool chain, the dynamic linker, the ABI, down to everything, under our control, so we have more freedom to solve problems. And it does not fit too well with the notion that we have in Nix, which still has the Unix world as its background, so it speaks about files and file hierarchies. In Genode, everything is these ROM modules, for example; when we build something, the result of the build is just a bunch of these ROM modules, and so it does not make sense to have a big bureaucracy around this. We just want to deliver these ROM modules, and that's it. So we found that we don't need a domain-specific language; we have a bunch of modules, a very simple approach. We learned a lot from Nix and we highly appreciate it, but in the end we turned to a much simplified approach that still has some of the benefits of Nix. For example, if a new Sculpt version comes out, I put it on a new USB stick, I update the system, and I can still decide to go back to the old version. So we keep this feature of Nix that we can have different versions side by side. Does that answer your question? I think so, yes, thank you. Okay, thank you.
I have a question, because you said you have a bunch of modules, so how do they communicate? Because you spoke about the patterns, but not about the arrows in the patterns, so how do these arrows work? So that depends on the kernel; Genode runs on many different kernels. Last question, yes? Okay, I have to repeat it. So the question is: I have spoken about these arrows and these relationships between the components, these sessions, but I left out the details about how this communication happens. The answer is that this depends from kernel to kernel, because Genode supports several kernels. I just mentioned that we can also run it on Linux; when using Genode on Linux, we use something like Unix domain sockets to communicate, and we pass file descriptors over Unix domain sockets for passing capabilities, for example. But on NOVA we use the NOVA IPC mechanism, so the IPC portals of NOVA. Regarding the specification of the API that you are using: there are basically three forms of communication going on, just very briefly. One is synchronous RPC, which is basically a remote procedure call, which is synchronous, and it's expressed like an abstract C++ interface; that's also the way how we express it. Then there are signals, asynchronous notifications, a bit like interrupts: fire-and-forget events with no payload. And there's shared memory, so we can delegate access to a piece of memory to another component, and this other component can make it visible in its address space, so you can exchange data quickly, like for the file system or the network communication. But if you are interested in the details, there is a book about Genode on the website; you can download it, and all this is described in great detail in the book. Yes, okay, I will do that, because you have not really answered what the concern was, which is how to specify the communication, not the mechanisms. Ah, okay, okay, so maybe we can take this offline. Yeah, okay. Thank you very much. Thank you.