Okay, I know it's unmuted. So this is the list of components for the Hello World demo. We can see a couple of interesting things here that will also show up later. One of them is a so-called Ned script, which is a Lua script used by our init process, called Ned, which embeds a Lua interpreter and uses it to configure the system, spawn new processes, and interconnect them via capabilities. For the Hello World program, the Ned script is very simple, as you can see: it consists of requiring the L4 Lua package and then immediately using its functionality to start the rom/hello program. When we visualize the situation, there are a couple of entities. In the running system, there is the Fiasco.OC microkernel, on top of which runs the sigma0 process, which is the root pager for the root task. The root task itself is called Moe. That's the process which provides the basic L4Re primitives, like dataspaces, which are memory-like objects that can be mapped into other address spaces. It also provides a couple of other capabilities, such as the namespace capability called rom, which the other components can use to access, for example, the hello.ned Lua script. Ned uses it to get the source code that it will interpret. All the boot modules are also included in this rom namespace in the form of dataspaces; that's how the other tasks get to their content. Another example of an entity provided by Moe is the default log, which is the default output: if we don't provide anything else, that's what all the components will use for printing things out. And the l4re component is a binary that is mapped into every task's address space and contains the core L4Re runtime functionality. So now it's time for the short demo. Hopefully the size of the letters is okay for you. Everything is already pre-built.
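A minimal hello.ned script along these lines might look as follows (a sketch based on the description above; the module path rom/hello is how boot modules typically appear in the rom namespace):

```lua
-- hello.ned: started by Ned, which embeds the Lua interpreter.
-- The L4 package exposes the L4Re configuration primitives.
local L4 = require("L4");

-- Spawn the hello binary from the rom namespace with an empty
-- extra-capability table; it inherits Ned's default log and memory.
L4.default_loader:start({}, "rom/hello");
```

The empty table is where additional capabilities for the new task would go, as the later examples show.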
This is just the way we run it in QEMU. We can notice that some lines were printed by Moe, then one or two lines printed by Ned, and then we have this hello world application, which just prints hello world in a loop. So this is a very basic example, just to illustrate how that would be done. Now we will make a bit of a jump to something much more complex: running a Linux VM on top of L4Re. This time, in our modules list, we take the hello binary away and add some new binaries: the uvmm binary, which is the user-space virtual machine monitor used in L4Re systems, the uvmm Ned script, a binary device tree which will be used by the Linux guest, a RAM disk, and our Linux kernel image. The Ned script we use this time is also much more complicated than the Hello World example. Later we will see how to get rid of some of the complexity and redundancy in these scripts, but this version is just for illustration. Instead of hello, the script spawns the uvmm binary, passing it some arguments such as the device tree, the RAM disk, the Linux kernel image, and a Linux boot command line. What is new in this example is that we take the log capability from Ned's initial environment and give it to the newly started task so that it can use it as well. Another one is the ram capability, which will be known under the string ram inside the new task; it is a dataspace provided by Moe which the guest will use as its RAM. When visualized, it looks like this. We see that uvmm.ned is another dataspace in this example, the ram dataspace figures in the picture as well, together with the log capability, and the same goes for the device tree and all the images that the guest will be using.
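A rough sketch of what such a verbose uvmm Ned script could look like (the file names, command-line options, and RAM size are assumptions for illustration, not the exact arguments from the slide):

```lua
local L4 = require("L4");

L4.default_loader:start(
  {
    caps = {
      -- Hand Ned's own log capability on to the new task.
      log = L4.Env.log,
      -- A freshly created dataspace that the guest uses as its RAM;
      -- 128 MiB here is an illustrative size.
      ram = L4.Env.user_factory:create(L4.Proto.Dataspace,
                                       128 * 1024 * 1024),
    },
  },
  -- Boot modules come from the rom namespace; the option names and
  -- file names below are assumptions for illustration.
  "rom/uvmm --dtb rom/virt.dtb --ramdisk rom/ramdisk.cpio"
  .. " --kernel rom/Image --cmdline 'console=hvc0'");
```

The caps table is the generic mechanism: every entry becomes a named capability in the new task's initial environment.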
The picture is drawn this way to signify that an unmodified Linux is running on top of the user-space virtual machine monitor. So let's see how this works when we run it as a demo. This time it's called uvmm. Here we see Moe and Ned starting, then a couple of lines from uvmm, the virtual machine monitor. And now this is already the Linux command line, so we can, for instance, interact with it and see that it's Linux 4.19.8. If we, for example, query the local time, we see that we are back in the 1970s. That's because, at this point of the presentation, there is no hardware real-time clock passed to the guest. We also see that we haven't configured any network interfaces yet. So that's running a single Linux VM. How about running two of them? Now we face a problem: what to do with the input and output of both VMs, how they will mix, or rather how to prevent them from mixing so that we can tell one from the other. The answer is that we need to use another component from L4Re called cons, which is the console multiplexer. As another improvement to this example, we will no longer write very long and complex Ned scripts; instead we will use the vmm.lua script, which wraps all the functionality that we need and allows us to configure everything much more concisely. Besides the L4 package, we can import the vmm Lua package into our Ned script and then use it. For instance, we can define a Lua function, say vm1, that wraps its start_vm method and just gives it some arguments, such as the number of the VM we are starting, a network capability, some boot arguments, and a capability for accessing the IO devices. Besides that, pay attention to the line that I'm pointing at: we also create something which is called a factory object.
So we create a new IPC communication endpoint, which we use like a log factory and which will be used to create new instances of the console, as depicted here. The console multiplexer uses a pattern which is very common in L4Re configuration and L4Re components: the factory pattern. Using this variable, the IPC channel, which is in fact a kernel object called an IPC gate, we create a capability, a new IPC gate, and pass it to the cons process as a capability known to it under the string cons. Then, in the vmm.lua script, there is code which uses this capability to create a new multiplexer; the new multiplexer is basically a new IPC gate object which is then passed as the log capability of the uvmm process. Once we have all that, spawning new instances is as easy as just typing vm1 and vm2, and that will do what we need. In the picture, it looks like this. What is interesting here is that while we provide two new consoles to the two VMs, tasks such as cons itself and Ned continue to use the default log capability, but they could of course be told to use some other output if we provided some other capability which speaks the same protocol. So let's see what that looks like. That's not it, I'm missing something. So now we see that we have two VMs and that uvmm is logging some information for both of them. When everything is booted, we find ourselves at the prompt of the console multiplexer. So we are not in a guest anymore, but we can now list the attached consoles: we have one for vm1 and one for vm2. We can also connect to either of them and interact with both. So that was us running two Linux VMs on top of L4Re, but that's not very interesting yet, right? Because both of them are isolated and not even interconnected. So that's what we are going to do now.
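The factory part could be sketched roughly like this (the channel and console names follow the description above; the create arguments are assumptions for illustration):

```lua
local L4 = require("L4");
local ld = L4.default_loader;

-- Create a new IPC gate; its server side goes to the console
-- multiplexer under the name "cons".
local cons_chan = ld:new_channel();
ld:start({ caps = { cons = cons_chan:svr() } }, "rom/cons");

-- Each VM gets its own console session, created through the factory
-- capability; the protocol number 0 is a placeholder assumption.
local vm1_log = cons_chan:create(0, "vm1");
local vm2_log = cons_chan:create(0, "vm2");

-- vm1_log and vm2_log are then passed as the log capability of the
-- respective uvmm instances.
```

This is the same pattern throughout L4Re: the client holds a factory capability and asks the server to create further objects through it.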
We are going to provide some kind of network connection so that you can actually ping one VM from the other. This change requires us to add to the list of modules a process called l4vio_net_p2p, which is basically a virtio switch which, in this case, has only two ports, and is essentially the equivalent of a virtual crossed UTP wire. Again, in this example we will see the factory pattern used, this time when we create the virtio switch. The vmm.lua wrapper provides us with a function called start_virtio_switch. We just need to pass it a table which says what our switch ports are called. We then create a new IPC channel, call it switch, and pass it in its server form to the virtio switch process, where it will be known as the svr capability. At the same time, we modify the ports table which was passed to us as an argument and use the switch capability to create new virtio net ports. The names of those capabilities will, of course, be the keys passed in the ports argument. Then this is how the start_vm function from vmm.lua looks. We pass it a table with some keys, one of which is net, which is the capability created by start_virtio_switch. And here we can see how this capability is actually passed to the uvmm process. Also remember this vbus line, as we will need it a little later when we do hardware pass-through to the device. So our final Ned script, which includes the import of the vmm.lua script, can look something like this. Our net_ports table is what we pass to the start_virtio_switch wrapper. As we've just seen, it creates the factory capability and our port capabilities, and populates the net_ports table with them. Then we can start each VM by passing it its own virtio net port capability. There is one more thing that we need to do.
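A hedged sketch of what such a start_virtio_switch wrapper could do (the create arguments are placeholder assumptions; the wrapper name follows the talk):

```lua
local L4 = require("L4");
local ld = L4.default_loader;

-- Table of switch ports, keyed by the capability names the VMs will see.
local net_ports = { vm1 = {}, vm2 = {} };

-- Start the point-to-point virtio switch and create one port per key.
local function start_virtio_switch(ports)
  -- The server side of a fresh IPC gate is known as "svr" inside the
  -- switch process; the client side is our factory capability.
  local switch = ld:new_channel();
  ld:start({ caps = { svr = switch:svr() } }, "rom/l4vio_net_p2p");
  for name, _ in pairs(ports) do
    -- Create a virtio-net port through the switch's factory interface;
    -- protocol number and arguments here are assumptions.
    ports[name] = switch:create(0);
  end
end

start_virtio_switch(net_ports);
-- Each VM is then started with its own port, e.g.
--   caps = { net = net_ports.vm1, ... }
```

Note that the wrapper mutates the ports table in place, so after the call net_ports.vm1 and net_ports.vm2 hold the port capabilities.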
We need to add one extra device tree node to the device tree that we pass to the guest, so that the guest can connect, using a device called virtio proxy, to the provided virtio switch port. That's why the net capability is highlighted here; it's basically the same net capability as we saw on the previous slides. This picture shows the whole situation. The l4vio_net_p2p component sits between the two VMs to signify that it's basically just a crossed UTP cable, with the factory capability as svr and both virtio net ports shown. So let's see how that works in a demo. You can notice that there are still some warnings about a failure to find some devices; that has to do with the fact that we haven't passed these devices to one of the VMs yet, but we will fix that later. Right now we are back at the console multiplexer. We can list all consoles and see that there is one new console called switch. We can, for example, look at what the switch printed out when it was booting: here we can see how both of the clients registered with the switch and how the initialization proceeded. But more importantly, we should now see that VM one has its interface configured, and also that VM two has its interface for network connections, and we can ping, for example from VM two, the first one. So they can communicate over the virtio-provided network. This is much better than just having two VMs that are not interconnected, but you still can't really do much except for mutual communication between two VMs that remain isolated from the outside world. If you remember, when I typed the date command it showed an artificial time, not the real time. So for the purposes of this demo, I chose to pass through a simple device: the PL031 real-time clock, one of these ARM PrimeCell devices.
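Going back to the extra device tree node for the virtio proxy mentioned at the start of this section, it could look roughly like this (the address, size, and interrupt are illustrative assumptions; "virtio,mmio" is the standard compatible string for virtio-mmio transports, while the l4vmm properties are uvmm-specific and their exact names may differ between versions):

```dts
virtio_net@10000 {
        compatible = "virtio,mmio";
        reg = <0x10000 0x200>;
        interrupts = <0 1 4>;
        /* uvmm-specific properties: emulate a virtio proxy device and
           back it with the capability named "net" in the VM's caps. */
        l4vmm,vdev = "proxy";
        l4vmm,virtiocap = "net";
};
```

The guest's stock virtio-net driver then binds to this node as if it were real virtio hardware.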
Our goal will be to pass it to VM one so that, unlike VM two, it will see the real time, which proves that the VMs can communicate with the outside world. Admittedly this is a somewhat artificial example, because what you would normally want to do is give at least one of the VMs a real NIC, for example, so that you have a network connection to the outside world, and then something might be going on between the two. But because this is 64-bit ARM running in QEMU, where I was doing the demo, that is slightly problematic: I would probably have to pass through the whole PCI controller, and there might be some issues. So I chose this simpler version. To do hardware pass-through to a VM running under L4Re, we basically need to start another L4Re component called Io, which is the IO manager. We need to give Io its own Lua configuration in which we tell it what real hardware it has. Then we also need to provide a Lua configuration in which we specify which real hardware is actually passed to the guest. And that's not all: we also need to alter the device tree which is passed to the VM and tell it about the device that it should look for. In this case that's the PL031 real-time clock device: we specify what interrupt it uses, what registers it uses, and what it is known as. And if the device needs some other nodes from the device tree, we need to provide them as well, such as this clock node, which is necessary because otherwise Linux wouldn't find the device. So in our Ned script, keeping all the network configuration in, we just need to add another Lua table, this time called io_buses, in which we specify keys that will later function as the capability names passed to the VM guest.
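The added guest device tree nodes could look roughly like this (the address matches where QEMU's ARM virt machine places its PL031, but the interrupt number and clock frequency are illustrative assumptions; "arm,pl031"/"arm,primecell" are the standard Linux binding strings):

```dts
/* Fixed clock that the PL031 node references; without it the Linux
   PrimeCell driver would fail to probe the device. */
apb_pclk: clock {
        compatible = "fixed-clock";
        #clock-cells = <0>;
        clock-frequency = <24000000>;
        clock-output-names = "apb_pclk";
};

rtc@9010000 {
        compatible = "arm,pl031", "arm,primecell";
        reg = <0x0 0x9010000 0x0 0x1000>;
        interrupts = <0 2 4>;
        clocks = <&apb_pclk>;
        clock-names = "apb_pclk";
};
```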
So when we have our table with the IO buses, we just call the vmm.lua function called start_io, pass the table to it, and also specify the location of the Io configuration Lua script. Then we must not forget to also pass the name of the IO bus to the VM which we want to have access to this bus. This start_io function does something similar to what we've already seen in the previous cases: it goes through the table which is passed to it and creates a capability for each entry. The server part of it is passed to Io, which will know it by the key featured in the table, and the client part is passed to a uvmm instance, or to whatever client we have. Another very important part of this is that we need to start the IO manager with some additional rights that the other tasks do not have. Unlike in the previous cases, we need to give it access to the sigma0 process, because sigma0, among other things, initially owns all the memory and also all the IO ports. So we need to create a new capability which allows Io to access that. We will also need to deal with interrupts, so we need to provide an interrupt controller unit capability, which in this case, as we will see, is provided by the kernel. The Io config which is passed to Io looks like this. We basically create a new device called RTC, and, very similarly to how it is in the device tree given to the Linux VM, we also specify the compatible property and say what IO registers and interrupts the device is using. When we do this, Io will request this device from sigma0, and the missing step is then assigning, or giving access to, these devices to the actual uvmm, to the actual VM. In order to do that, Io creates for us a new virtual bus, which in this case will be known under the name vm_hw. Note the correspondence with the name that we used in the Lua table to configure this virtual bus.
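The Ned-side table and Io's own configuration could be sketched as follows (the Io Lua API calls shown here are reconstructed from memory and may differ between L4Re versions; register range and interrupt number are illustrative assumptions):

```lua
-- In the Ned script: keys of this table become vbus capability names.
local io_buses = { vm_hw = {} };
-- start_io wires the server sides to Io and keeps the client sides
-- for the VMs; the second argument locates Io's own config file.
start_io(io_buses, "rom/io.cfg");
```

```lua
-- io.cfg sketch: describe the real hardware Io is allowed to hand out.
Io.hw_add_devices(function()
  RTC = Hw.Device(function()
    compatible = { "arm,pl031" };
    Resource.regs = Res.mmio(0x9010000, 0x9010fff);
    Resource.irq  = Res.irq(36);
  end)
end)

-- Virtual bus given to VM one; the name matches the io_buses key.
Io.add_vbusses {
  vm_hw = Io.Vi.System_bus(function()
    RTC = wrap(Io.system_bus():match("arm,pl031"));
  end)
}
```

The important point is the split: the first file only says which real devices exist, the second says which of them appear on which virtual bus.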
And it will basically assign the hardware device to it. There will also be some other devices on this virtual bus, such as a virtual interrupt control unit, that the client will be able to enumerate and start handling. We can see all of that on this slide, including the sigma0 and ICU capabilities, and also how Io provides the virtual bus object via the vm_hw capability, which is basically the server side of the vbus capability provided to uvmm. We provide it to only one of the uvmm instances, to only one of the VMs, because we don't want the other one to have access to it. If we now run the demo, we see slightly more output than before, coming from the new components, in this case from Io. Again, finding ourselves in the console multiplexer, we see that there is now yet another console, for Io, so we can also take a look at what Io said when it was starting; this corresponds to all that red output that we saw. We can see how it got access to the real hardware device, how it created the capability for the vbus, and how it then tried to connect the IRQ that we asked it to handle for us. But what's more interesting now is that we can grep the output of dmesg for pl0, and this time we see that the guest actually found the real-time clock device that we passed to it. So unlike in the previous cases, and unlike in the case of VM two, date should now work properly. We can also see that VM two still thinks we are in 1970. There is one more thing to show in VM one: if we go into the /sys directory, we can find the device tree structure there, and we can clearly see that our PL031 is there; if we did this on VM two, it wouldn't be there. So much for the demos.
Now, I'm basically delivering this presentation because over the last year there has been an effort to move some of the L4Re components to GitHub; previously they had been available in so-called snapshots, which are still available. So for example, if you wanted to recreate some of the demos that I was showing, the components available on GitHub would not be sufficient, because the console multiplexer and the virtio switch, for instance, are not on GitHub yet. But you can go to the download page on l4re.org, download a snapshot, copy the respective missing packages over to your checked-out source tree, and it will build just fine, and you can use them as I did. In fact, that's what I've done. You will probably also need a tutorial. There is a tutorial which was used for creating the QEMU Advent Calendar entry that we had with the Christmas tree, and if you follow it, you should also be able to run an unmodified Linux VM on top of uvmm on 64-bit ARM in QEMU. The takeaway that I would like you to have is that it's all about components that are mutually interconnected by capabilities, that the way they are interconnected is configured via Lua, and that there is heavy use of virtio components. With that, I would like to thank you.