First talk for today is by Hannes Mehnert. It is titled "Leaving Legacy Behind"; it's about reducing the carbon footprint through MirageOS unikernels. Give a warm welcome to Hannes. Thank you. So, let's talk a bit about legacy. The legacy we have nowadays: we usually run services on a UNIX-based operating system, which is shown here on the left as a layered stack. At the lowest layer we have the hardware: some physical CPUs, some block devices, maybe a network interface card, and some memory. On top of that we usually run the UNIX kernel, which is marked here in brown. It consists of a file system; it has a scheduler, some process management, a network stack — so a TCP/IP stack — and also some user management and hardware drivers, so drivers for the network interface card and so on. The brown stuff, the kernel, runs in privileged mode. It exposes a system call API, and a socket API, to the actual applications we want to run, which are here in orange. So the actual application is on top: the application binary. It may depend on some configuration files distributed across the file system, with some file permissions and so on. The application itself also likely depends on a programming language runtime: maybe a Java virtual machine if you run Java, or a Python interpreter if you run Python, or a Ruby interpreter if you run Ruby, and so on. Additionally, we usually have a system library, libc, which is basically the runtime library of the C programming language, and it exposes a much nicer interface than the raw system calls. The application may as well have OpenSSL or another crypto library as part of the application binary, which is also here in orange. So what's the job of the kernel? The brown stuff has a virtual memory subsystem, and it should separate the orange stuff from each other. 
So you have multiple applications running there, and the brown stuff is responsible for ensuring that the different pieces of orange stuff don't interfere with each other — that they are not randomly writing into each other's memory and so on. Now, if the orange stuff is compromised — if some attacker from the network, or from wherever else, is able to find a flaw in the orange stuff — the kernel is still responsible for strict isolation between the orange pieces. So as long as the attacker only gets access to the orange stuff, it should be well contained. But then we look at the bridge between the brown and the orange stuff, between kernel and user space, and there we have an API of roughly 600 system calls — at least on my FreeBSD machine here. So the width of this API is 600 different functions. That is quite big, and it's quite easy to hide some flaws in there, and as soon as you're able to find a flaw in any of those system calls, you can escalate your privileges. Then you basically run in brown mode, in kernel mode, and you have access to the raw physical hardware, and you can also read arbitrary memory of any process running there. Over the years this actually evolved, and we added some more layers: hypervisors. At the lowest layer we still have the hardware, but on top of the hardware we now have a hypervisor, whose responsibility is to slice up the physical hardware and run different virtual machines. So now we have the white stuff, which is the hypervisor, and on top of that we have multiple brown things, and multiple orange things as well. The hypervisor is responsible for distributing the CPUs and the memory to the virtual machines. It is also responsible for selecting which virtual machine runs on which physical CPU, so it actually includes a scheduler as well. 
And the hypervisor's responsibility is, again, to isolate the different virtual machines from each other. Initially hypervisors were done mostly in software; nowadays there are a lot of CPU features available which give you hardware support, which makes them fast, and you don't have to trust so much software anymore — but you have to trust the hardware. That's extended page tables and the VT-d and VT-x stuff. Okay, so that's the legacy we have right now. When you ship a binary, you actually care about the tip of the iceberg: the code you write. You care about it deeply, because it should work well and you want to run it. But at the bottom you have the whole operating system, and that is the code the operating system insists you need — you can't get the tip without the bottom of the iceberg. So you will always have process management and user management, and likely the file system as well, around on a UNIX system. In addition, back in May I think, there was a blog entry from someone who analyzed results from Google Project Zero, which is a security research team — a red team — which tries to find flaws in widely used applications. They found in a year maybe 110 different vulnerabilities which they reported, and someone analyzed what these 110 vulnerabilities were about, and it turned out that for more than two-thirds of them, the root cause of the flaw was memory corruption. Memory corruption means arbitrary reads or writes of memory which the process is not supposed to touch. So why does that happen? It happens because on UNIX systems we mainly use programming languages where we have tight control over the memory management: we do it ourselves. We allocate the memory ourselves and we free it ourselves. That is a lot of boilerplate we need to write down, and also a lot of boilerplate we can get wrong. So now we've talked a bit about legacy. 
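To make the memory-safety point concrete, here is a small toy example of my own (plain OCaml, not from the talk): an out-of-bounds write is caught by the runtime instead of silently corrupting adjacent memory, and no manual allocation or freeing is involved.

```ocaml
(* A 4-byte buffer, as a network parser might use. *)
let () =
  let buf = Bytes.make 4 '\000' in
  (* In C, writing buf[8] would silently scribble over adjacent memory --
     the root cause of most of those memory-corruption bugs. OCaml
     bounds-checks every access and raises an exception instead. *)
  (try Bytes.set buf 8 'x'
   with Invalid_argument _ -> print_endline "out-of-bounds write rejected");
  (* No manual malloc/free boilerplate: the GC reclaims [buf] for us. *)
  Printf.printf "buffer length is still %d\n" (Bytes.length buf)
```

Running it prints "out-of-bounds write rejected" and then the unchanged buffer length — the flaw becomes a visible exception rather than an exploitable corruption.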
Let's talk about the goals of this talk. The goal is, on the one side, to be more secure — to reduce the attack vectors — because C and languages like that are from the 70s, and we have languages from the 80s or even the 90s that offer automated memory management and memory safety, languages such as Java or Rust or Python. But it turns out not many people are writing operating systems in those languages. Another point is that I want to reduce the attack surface. We have seen this huge stack, and I want to minimize the orange and the brown parts. Then, as an implication of that, I also want to reduce the runtime complexity, because it is actually pretty cumbersome to figure out what is wrong — why does your application not start? — and if the whole reason is that some file on your hard disk has the wrong file system permissions, that is pretty hard to figure out if you are not a UNIX expert who has lived in the system for years, or at least months. And the final goal, thanks to the topic of this conference and to some analysis I did, is to actually reduce the carbon footprint. If you run a service, that service does some computation, and this computation takes some CPU time in order to be evaluated. If we condense down the complexity and the code size, we also reduce the amount of computation which needs to be done. These are the goals. What is a MirageOS unikernel? That is basically the project I've been involved in for six years or so. The general idea is that each service is isolated in a separate MirageOS unikernel. So your DNS resolver or your web server doesn't run on a general-purpose UNIX system as a process; you have a separate virtual machine for each of them. You have one unikernel which only does DNS resolution, and in that unikernel you don't even need any user management. 
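As an aside, a MirageOS unikernel is described by a small configuration program. The following is only a sketch from memory — the combinator names in the `mirage` DSL (`foreign`, `stackv4`, `generic_stackv4`, `default_network`) have changed between releases, so treat this as illustrative rather than exact:

```ocaml
(* config.ml -- illustrative sketch of a MirageOS configuration.
   The exact combinator names vary between mirage releases. *)
open Mirage

(* The unikernel's entry module, parametrized over a network stack. *)
let main = foreign "Unikernel.Main" (stackv4 @-> job)

(* Register one service; the mirage tool generates the boot code and
   picks target-specific implementations (hvt, xen, ...) at build time. *)
let () = register "dns_resolver" [ main $ generic_stackv4 default_network ]
```

The point is that the whole service — which stack, which devices, which target — is declared here, and nothing else gets linked in.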
You don't even need process management, because there's only a single process: the DNS resolver. Actually, a DNS resolver doesn't really need a file system either, so we got rid of that. We also don't really need virtual memory, because we only have one process, so we just use a single address space — everything is mapped into a single address space. We use a programming language called OCaml, a functional programming language which provides us with memory safety: it has automated memory management. And we use this memory management, and the isolation which the programming language guarantees us via its type system, to say: okay, we can all live in a single address space and it will still be safe, as long as the components are safe and as long as we minimize the components which are by definition unsafe — that is, if we need to run some C code there as well. In addition, if we have a single service, we only put in the libraries for the stuff we actually need in that service. As I mentioned, the DNS resolver won't need user management. It doesn't need a shell — why would I need a shell? What would I need to do there? So we have a lot of OCaml libraries which are picked, mixed and matched, for each single service. The libraries are developed independently of the whole system — of the unikernel — and are reused across the different components, across the different services. Some further limitations, which I take as freedom and simplicity: not only do we have a single address space, we are also focusing on a single core, and we have a single process — we don't even have the concept of a process. We also don't work in a preemptive way. 
Preemptive means that while you run on a CPU, as a function or a program, you can at any time be interrupted, because something much more important than you should now get access to the CPU. We don't do that; we do cooperative tasks. So we are never interrupted. We don't even have interrupts — there are no interrupts. And as I mentioned, it's executed as a virtual machine. So how does that look? Now we have the same picture as previously. At the bottom we have the hypervisor, then the host system, which is the brownish stuff, and on top of that maybe some virtual machines. Some of them run via KVM and QEMU — a UNIX system using some virtio, that is on the right. And then on the left and in the middle we have these MirageOS unikernels, where in the host system we don't run any QEMU, but a minimized, so-called tender: this solo5-hvt monitor process. That is something which will just allocate some host system resources for the virtual machine and then handle interaction with the virtual machine. What solo5-hvt does in this case is set up the memory, load the unikernel image — which is a statically linked ELF binary — and set up the virtual CPU. The CPU needs some initialization, and then booting is a jump to an address; it's already in 64-bit mode, there's no need to boot via the 16- and 32-bit modes. Now, solo5-hvt and MirageOS also have an interface between them. That interface is called hypercalls, and it is rather small: it contains in total 14 different functions. The main ones are yield, a way to get the argument vector, and clocks — actually two clocks: one is a POSIX clock, which takes care of this whole timestamping and timezone business, and the other one is a monotonic clock, which, as its name says, guarantees that time passes monotonically. Then you have a console interface. The console interface is only one-way: we only output data, we never read from the console. 
A block device — well, block devices — and network interfaces. And that's all the hypercalls we have. To look a bit further into the detail of how a MirageOS unikernel looks: here I pictured on the left again the tender at the bottom, then the hypercalls. In pink I have the pieces of code which still contain some C code in a MirageOS unikernel, and in green the pieces which do not include any C code, only OCaml code. Looking at the C code, which is the dangerous part — because in C we have to deal with memory management on our own, which means it's a bit brittle and we need to carefully review that code — there is, first of all, the OCaml runtime, which is around 25,000 lines of code. Then we have a library called nolibc. It is basically a C library which implements malloc and string compare and some basic functions needed by the OCaml runtime; that's roughly 8,000 lines of code. That nolibc also provides a lot of stubs which just exit or return null, because we use an unmodified OCaml runtime — to be able to upgrade our software more easily, we don't carry any patches for the OCaml runtime. Then we have a library called solo5-bindings, which is what translates to the hypercalls — which communicates with the host system via hypercalls. That is roughly 2,000 lines of code. Then we have a math library for sines and cosines and tangents and so on, and that is just openlibm, which originates from the FreeBSD project, roughly 20,000 lines of code. That's it. I talked a bit about Solo5, about the bottom layer, and I will go a bit more into detail about the Solo5 stuff, which is really the stuff you run at the bottom of MirageOS. There are other choices — you can also run Xen or Qubes OS at the bottom of a MirageOS unikernel — but I'm focusing here mainly on Solo5. 
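The hypercall surface described above is small enough to write down in full. Here is a rough OCaml sketch of it as a module signature, with a dummy in-memory instance so it runs — the names are my own illustration, not Solo5's actual C API:

```ocaml
(* An illustrative signature mirroring the ~14 Solo5 hypercalls.
   Names are invented for illustration; the real interface is a C API. *)
module type HYPERCALLS = sig
  val yield : deadline:int64 -> unit    (* block until deadline or I/O *)
  val clock_wall : unit -> int64        (* POSIX wall clock, ns *)
  val clock_monotonic : unit -> int64   (* monotonically increasing, ns *)
  val console_write : string -> unit    (* output only, never read *)
  val block_read : off:int64 -> bytes -> unit
  val block_write : off:int64 -> bytes -> unit
  val net_read : bytes -> int
  val net_write : bytes -> unit
end

(* A dummy in-memory instance, just to show the shape compiles and runs;
   console output goes to an internal buffer, the clock is advanced by
   [yield], and block/net devices are no-ops. *)
module Dummy : HYPERCALLS = struct
  let now = ref 0L
  let yield ~deadline = now := deadline
  let clock_wall () = !now
  let clock_monotonic () = !now
  let console = Buffer.create 64
  let console_write s = Buffer.add_string console s
  let block_read ~off:_ _ = ()
  let block_write ~off:_ _ = ()
  let net_read _ = 0
  let net_write _ = ()
end

let () =
  Dummy.console_write "hello from the unikernel\n";
  Dummy.yield ~deadline:42L;
  Printf.printf "monotonic clock is now %Ld\n" (Dummy.clock_monotonic ())
```

The point of the sketch is only how narrow this interface is compared to 600 system calls: everything a unikernel can ask of its host fits on one screen.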
So Solo5 is a sandboxed execution environment for unikernels. It handles resources from the host system, but only statically: you say at startup time how much memory it will take, how many network interfaces — and which ones — and how many block devices — and which ones — are taken by the virtual machine. You don't have any dynamic resource management, so you can't add a new network interface at a later point in time; that's just not supported. And that makes the code much simpler — we don't even have dynamic allocation inside of Solo5. Then we have the hypercall interface; as I mentioned, it's only 14 functions. We have bindings for different targets. We can run on KVM, the hypervisor in the Linux kernel, but also on bhyve, which is the FreeBSD hypervisor, or vmm, which is the OpenBSD hypervisor. We also target other systems, such as Genode, an operating system based on a microkernel, written mainly in C++. Then virtio, which is a protocol usually spoken between the host system and the guest system; virtio is used in a lot of cloud deployments, and QEMU, for example, provides you with a virtio protocol implementation. And the last set of Solo5 bindings is seccomp. Linux seccomp is a filter in the Linux kernel with which you can restrict your process to only use a certain set of system calls. With the seccomp target you can deploy without a virtual machine, but you are restricted in which system calls you can use. Solo5 also provides you with a host system tender, where applicable: in the virtio case it is not applicable, and in the Genode case, it is also not applicable. 
In the KVM case, we already saw solo5-hvt; that is the hardware-virtualized tender, which is just a small binary — where QEMU is at least hundreds of thousands of lines of code, in the solo5-hvt case it's more like thousands of lines of code. Here we have a comparison, from left to right, of how Solo5, the host system kernel, and the guest system work together. In the middle we have a virtual machine — a common Linux QEMU/KVM-based virtual machine, for example. And on the right-hand side we have the host system and a container. A container is a technology where you try to restrict as much access as you can from a process, so that it is contained and a potential compromise is also very isolated and contained. On the left-hand side you see that with Solo5, some bits and pieces are in the host system — the solo5-hvt — and some bits and pieces are in the unikernel: that is the Solo5 bindings I mentioned earlier, which communicate between the host and the guest system. In the middle, you see that the API between the host system and the virtual machine is much bigger. That commonly uses virtio, and virtio is really a huge protocol which does feature negotiation and all sorts of things where you can always do something wrong — like, someone did something wrong in the floppy disk driver, and that led to an exploitable vulnerability, although nowadays most operating systems don't really need a floppy disk drive anymore. And on the right-hand side you can see that the host system interface for a container is much bigger than for a virtual machine, because the host system interface for a container is exactly those system calls you saw earlier. So it's around 600 different calls, and in order to evaluate the security, you basically need to audit all of them. That's just a brief comparison between those. 
If we look in more detail at what shapes Solo5 can take: on the left side you can see it running with the hardware-virtualized tender, where you have Linux, FreeBSD, or OpenBSD at the bottom, a Solo5 blob — the blue thing here in the middle — and then on top the unikernel. On the right-hand side you see the Linux seccomp process, where you have a much smaller Solo5 blob, because it doesn't need to do that much anymore: all the hypercalls are basically translated to system calls, so you get rid of them, and you don't need to communicate between a host and a guest system, because with seccomp you run as a host system process — you don't have the virtualization. An advantage of using seccomp is also that you can deploy without having access to the virtualization features of the CPU. Now, to get an even smaller shape, there's another backend I haven't talked about yet. It's called Muen. It's a separation kernel developed in Ada — so here we try to get rid of this huge Linux system below, the big kernel thingy. Muen is an open-source project developed in Switzerland, in Ada, as I mentioned, and it uses SPARK, a proof system, which guarantees memory isolation between the different components. And Muen goes a step further and says: you as a guest system do only static allocations and no dynamic resource management — and we as the host system, as the hypervisor, don't do any dynamic resource allocation either. So it only does static resource management: at compile time of your Muen separation kernel, you decide how many virtual machines — how many unikernels — you are running, and which resources are given to them. 
You even specify which communication channels exist: if one of your virtual machines needs to talk to another one, you need to specify that at compile time. At runtime you don't have any dynamic resource management, so that again makes the code much simpler, much less complex, and you get to much fewer lines of code. To conclude this part on MirageOS — and also Muen and Solo5 — I like to cite Antoine de Saint-Exupéry: perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away. I mean, obviously, the most secure system is a system which doesn't exist. Let's look a bit further into the decisions of MirageOS: why use this strange programming language called OCaml, what it's all about, and what the case studies are. OCaml has been around for more than 20 years; it's a multi-paradigm programming language. The goal for us, and for OCaml, is usually to have declarative code. To achieve declarative code, you need to provide the developers with orthogonal abstraction facilities, such as variables and functions, which you likely know if you're a software developer. Also higher-order functions: that just means a function is able to take a function as input. In OCaml we try to always focus on the problem and not get distracted by boilerplate. A running example, again, is memory management: we don't deal with that manually, we have computers to actually deal with that. In OCaml you have a very expressive static type system which can spot a lot of violations of invariants at build time. The program won't compile if you don't handle all the potential return values of your function. Now, a type system — you may know it from Java, where it's a bit painful, because at every location where you declare a variable, you have to express which type this variable is. 
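As a small illustration (a toy example of mine, not from the talk): the compiler infers the type of `classify` without a single annotation, and it flags any variant case you forget to handle at build time.

```ocaml
(* No type annotations anywhere: the compiler infers
   [classify : int option -> string] from the uses below. *)
let classify = function
  | None -> "no value"
  | Some n when n < 0 -> "negative"
  | Some _ -> "non-negative"
(* Deleting the [None] branch makes the compiler report the unhandled
   case at build time, before the program ever runs. *)

let () =
  print_endline (classify None);          (* prints "no value" *)
  print_endline (classify (Some (-3)));   (* prints "negative" *)
  print_endline (classify (Some 7))       (* prints "non-negative" *)
```

This is the "spot violations of invariants at build time" point in miniature: the missing-case check costs nothing at runtime.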
What OCaml provides is type inference, similar to Scala and other languages, so you don't need to write all the types manually. And, unlike in Java, types are erased during compilation: types are information about values which the compiler has at compile time, but at runtime they are all erased — they don't exist, you don't see them. OCaml compiles to native machine code, which I think is important for security and performance, because otherwise you run an interpreter or an abstract machine, you have to emulate something else, and that is never as fast as native code. OCaml has one distinctive feature, which is its module system. You have all your values — types and functions — and each of those values is defined inside a so-called module. The simplest module is just a file, but you can nest modules, so you can explicitly say: this value, this binding, now lives in a submodule. Each module can also be given a type: it has a set of types and a set of functions, and that is called its signature, which is the interface of the module. 
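A tiny example of that (again my own, invented for illustration): a module sealed with a signature, so callers see only the interface and the representation stays private.

```ocaml
(* The signature is the module's interface: callers can use [create],
   [incr] and [value], but the representation [int ref] stays hidden. *)
module Counter : sig
  type t
  val create : unit -> t
  val incr : t -> unit
  val value : t -> int
end = struct
  type t = int ref
  let create () = ref 0
  let incr c = c := !c + 1
  let value c = !c
end

let () =
  let c = Counter.create () in
  Counter.incr c;
  Counter.incr c;
  Printf.printf "count = %d\n" (Counter.value c)   (* prints "count = 2" *)
```

Because `Counter.t` is abstract, no other code can reach into the counter's state — the same discipline that lets unikernel components share one address space safely.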
Then you have another abstraction mechanism in OCaml, which is functors. Functors are basically compile-time functions from module to module, so they allow parametrization. For example, you can implement a generic map structure — a map implementation is maybe a balanced binary tree — and all you require is some comparison for the keys. That is modeled in OCaml by modules: you have a module called Map and a functor called Make, and Make takes some module which implements this comparison method and provides you with a map data structure for that key type. In MirageOS we actually use the module system quite a bit more, because we have all these resources which are different between Xen and KVM and so on. Each of the different resources, like a network interface, has a signature, and target-specific implementations. So the TCP/IP stack, which is much higher up than the network card, doesn't really care whether you run on Xen or on KVM: you just program against this abstract interface, against the signature of the network device, and you don't need to write any code in your TCP/IP stack to run on Xen or on KVM. MirageOS also doesn't really use the complete OCaml language. OCaml provides you with an object system, and we barely use that in MirageOS. OCaml also allows mutable state, and we barely use mutable state; we use immutable data whenever sensible. We also have a value-passing style: we put state and data as input — state is some abstract state, and data is just a byte vector, in a protocol implementation — and the output is also a new state, which may be modified, and some reply, maybe some other byte vector or some application data. Or the output may as well be an error, because the incoming data and state may be invalid or may violate some constraints. And errors are also explicitly 
typed, so they are declared in the API, and the caller of a function needs to handle all these errors explicitly. As I said, we are single-core, but we have some promise-based, event-based concurrent programming. And here we have the ability to express really strong invariants — like "this is a read-only buffer" — in the type system, and the type system is, as I mentioned, compile-time only, with no runtime overhead. So it's all pretty nice and good; let's take a look at some of the case studies. The first one is a unikernel called the Bitcoin Piñata. It started in 2015, when we were happy with our from-scratch-developed TLS stack — TLS is transport layer security, what you use when you browse to HTTPS sites. So we had a TLS stack in OCaml, and we wanted to do some marketing for it. The Bitcoin Piñata is basically a unikernel which uses TLS, provides you with TLS endpoints, and contains the private key for a Bitcoin wallet, which used to be filled with 10 bitcoins. This means it's security bait: if you can compromise the system itself, you get the private key and you can do whatever you want with it. And being on the Bitcoin blockchain also means it's transparent: everyone can see whether it has been hacked or not. It has been online for three years, and it was not hacked — but the bitcoins were only borrowed from friends of ours, and they have since been reused in other projects. It's still online, and you can see here on the right that we had some HTTP traffic, an aggregate of maybe 600,000 hits. Now I have a size comparison of the Bitcoin Piñata. On the left you can see the unikernel, which is less than 10 megabytes in size, or in source code maybe 100,000 lines of code. On the right-hand side you have a very similar thing, but running as a Linux service: it runs openssl s_server, which is the minimal TLS server you can get on a Linux system using OpenSSL, and there we have a size of roughly 200 megabytes and 
maybe two million lines of code — so that's roughly a factor of 25, and in other examples we even got a bit less code, a much bigger factor. Performance analysis: also in 2015 we did some evaluation of our TLS stack, and it turns out we are in the same ballpark as other implementations. Another case study is a CalDAV server, which we developed last year with a grant from the Prototype Fund, which is German government funding. It is interoperable with other clients, and it stores data in a remote Git repository — so we don't use any block device or persistent storage, but store it in a Git repository: whenever you add a calendar event, it actually does a Git push. We also recently got some integration with CalDavZAP, which is a JavaScript user interface, and we just bundle that with the thing. It's online, open source; there's a demo server, and the data repository is online as well. Now, some statistics — and I zoom in directly on the CPU usage. We had the luck that for half a month we ran it as a process on a FreeBSD system — that was roughly the first half, until here — and then at some point we thought: let's migrate it to a MirageOS unikernel and not run the FreeBSD system below it. You can see here on the x-axis the time — the month of June, starting with the first of June on the left and the last of June on the right — and on the y-axis you have the number of CPU seconds here on the left, or the number of CPU ticks here on the right. The CPU ticks are virtual CPU ticks, counters from the hypervisor — from bhyve on that FreeBSD system. What you can see here is this massive drop, by a factor of roughly 10, and that is when we switched from a UNIX virtual machine with a process to a freestanding unikernel. So we actually use much fewer resources. And if we look at the bigger picture, we also see that the memory dropped by a factor of 10 or even more — this is now logarithmic scale here on the 
y-axis. The network bandwidth increased quite a bit, because now we do all the monitoring traffic via the network interface as well, and so on. Okay, that's CalDAV. Another case study is an authoritative DNS server — I just recently wrote a tutorial on that, which I will skip because I'm a bit short on time. Another case study is a firewall for Qubes OS. Qubes OS is a reasonably secure operating system which uses Xen for isolation of workspaces and applications, such as a PDF reader: whenever you receive a PDF, you start a virtual machine which is only run once — which is just run to open and read your PDF. And qubes-mirage-firewall is a tiny replacement, written in OCaml, for the Linux-based firewall VM: instead of roughly 300 megabytes, you only use 32 megabytes of memory. There is now also, recently, some support for dynamic firewall rules as defined by Qubes 4.0; that is not yet merged into master, but it's under review. Libraries in MirageOS: since we write everything from scratch, in OCaml, we don't have every protocol yet, but we have quite a few protocols. There are also more unikernels, which you can see here — the slides are also online in the Fahrplan, so you can click on the links later. Reproducible builds: for security purposes we don't yet ship binaries, but I plan to ship binaries, and in order to ship binaries, I don't want to ship non-reproducible ones. What does reproducible mean? It means that from the same source code you should get bit-identical binary output; common issues are temporary file names and timestamps and so on. In December we managed in MirageOS to get some tooling on track to actually test the reproducibility of unikernels, and we fixed some issues, and now all the tested MirageOS unikernels — which are basically most of them from this list — are reproducible. Another topic is supply chain security, which is important, I think. This is still work in progress; we still haven't 
deployed that widely, but there are some test repositories out there: the idea is to provide signatures, signed by the actual author of a library, such that the user of the library can verify them, with some decentralized authorization and delegation. What about deployment in conventional orchestration systems such as Kubernetes and so on? We don't yet have a proper MirageOS integration there, but we would like to get one. We already generate some libvirt.xml files from mirage — for each unikernel you get a libvirt.xml — and you can run that in your libvirt-based orchestration system. For Xen we also generate those .xl and .xe files, which I personally don't know much about, but there you go. On the other side, I developed an orchestration system called Albatross, because I was a bit wary: I now have these tiny unikernels, which are megabytes in size, and now I should trust the big Kubernetes, which is maybe a million lines of code, running on the host system with privileges? So I thought, well, let's try to come up with a minimal orchestration system which allows me some console access — I want to see the debug messages, or whenever it fails to boot I want to see the output of the console — and gives me some metrics, like the Grafana screenshot you just saw. And that's basically it. Then, since I also developed a TLS stack, I thought: why not just use it for remote deployment? In TLS you have mutual authentication — you can have client certificates — and a certificate is more or less an authenticated key-value store, because you have those extensions in X.509 version 3 and you can put arbitrary data in there, with keys being so-called object identifiers and values being whatever you like. X.509 certificates have the great advantage that during a TLS handshake they are transferred on the wire not in base64, or PEM encoding, as you usually see them, but in basic encoding 
which is much friendlier to the number of bits you transfer: it is not transferred in Base64 but directly in raw binary, basically. With albatross you can basically do a TLS handshake, and the client certificate you present already contains the unikernel image, the name, and the boot arguments, and you just deploy it directly. In X.509 you also have a chain of certificate authorities which you send along, and this chain of certificate authorities also contains some extensions which specify which policies are active: how many virtual machines you are able to deploy on my system, how much memory you have access to, and which bridges or network interfaces you have access to. So albatross is really a minimal orchestration system running as a family of UNIX processes; it's maybe 3,000 lines of OCaml code or so, using the TLS stack and so on, but it seems to work pretty well: I use it for more than two dozen unikernels at any point in time.

What about the community? The whole Mirage project started around 2008 at the University of Cambridge, so it used to be a research project, and it still has a lot of ongoing student projects at the University of Cambridge, but now it's an open-source, permissively licensed (mostly BSD-licensed) project, where we have community events every half year: a retreat in Morocco, where we also use our own unikernels, like the DHCP server and the DNS resolver and so on; we use them to test them and to see how they behave and whether they work for us. We have quite a lot of open-source contributors from all over, and some of the MirageOS libraries have also been used, or are still used, in Docker technology: Docker for Mac and Docker for Windows, which emulate the guest system and where we need some wrappers, use a lot of OCaml code.

To finish my talk, I'd like to show another slide: Rome wasn't built in a day. So, to conclude where we are: here we have a radical
approach to operating-system development. We have security from the ground up, with much less code and also many fewer attack vectors, because we use a memory-safe language. We have a reduced carbon footprint, as I mentioned at the start of the talk, because we use much less CPU time but also much less memory, so we use fewer resources. MirageOS itself and OCaml have reasonable performance; we have seen some statistics about the TLS stack showing it was in the same ballpark as OpenSSL and PolarSSL (nowadays mbed TLS). And Mirage unikernels, since they don't really need to negotiate features or wait for the SCSI bus and so on, actually boot in milliseconds, not seconds; they don't do any probing, but know at startup time what to expect.

I would like to thank everybody who is and was involved in this whole technology stack, because I myself program quite a bit of OCaml, but I wouldn't have been able to do this on my own; it is just a bit too big. Mirage currently spans maybe around 200 different Git repositories, with the libraries mostly developed on GitHub in the open. I'm at the moment working at a non-profit company in Germany called the Center for the Cultivation of Technology, with a project called Robur, where we work in a collective way to develop full-stack MirageOS unikernels. I'm happy to do that from Berlin, and if you're interested, please talk to us. I've selected some related talks; there are many more talks about Mirage, but here's just a short list, so if you're interested in certain aspects, please help yourself and view them. That's all from me, thank you very much.

There's a bit over ten minutes of time for questions. If you have any questions, walk to a microphone; there are several around the room. Go ahead. Thank you very much for the talk... Oh, by the way, a word of order: thanking the speaker can be done afterwards, and questions are questions, so short sentences, and then we have
a question mark. Sorry, do go ahead.

If I want to try this at home, what do I need? Is a Raspberry Pi sufficient?

No, it isn't... well, that is an excellent question. I usually develop on a ThinkPad-style machine, but we also support arm64 mode, so if you have a Raspberry Pi 3+, which I think has the virtualization bits and a Linux kernel recent enough to support KVM on that Raspberry Pi 3+, then you can try it out there. Okay, next question.

Currently most MirageOS unikernels are used for running server applications, and obviously all this static preconfiguration of OCaml, and maybe Ada/SPARK, is fine for that. But what do you think: will it ever be possible to use the same approach, with all this static preconfiguration, for these very dynamic end-user desktop systems, which at least currently use quite a lot of plug and play?

Do you have an example of what you are thinking about?

Well, I'm not that much into the topic of the Ada/SPARK stuff, but you said that all the communication paths have to be defined in advance. So especially with plug-and-play devices, like all this USB stuff, we either have to allow everything in advance, or we may have to reboot parts of the unikernels in between to allow rerouting. That's how I would understand it.

Yes. I mean, if you want to design a USB plug-and-play system, you can think of it as: you plug in the USB stick somewhere, and then you start a unikernel which only has access to that USB stick. But I wouldn't design a unikernel which randomly does plug and play with the outer world, basically. One of the applications I've listed here at the top is a picture viewer, which is a unikernel that at the moment, I think, has the images as static data embedded in it, but it is able, on Qubes OS or on Unix with SDL, to display the images, and you can think of some way, via the network or so, to actually access the images, so you don't need to compile the images in
but you can have a Git repository or a TCP server or whatever in order to receive the images. What I didn't mention is that MirageOS, instead of being general-purpose, with a shell where you can do everything, makes each unikernel a single-service thing, so you can't do everything with one, and I think that is an advantage from a lot of points of view. I agree that if you have a highly dynamic system, you may have some trouble with how to integrate that.

Are there any other questions? No? In which case, thank you again, Hannes. A warm applause for Hannes.
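The reproducible-builds point from the talk, that the same source must yield bit-for-bit identical binaries and that embedded timestamps break this, can be sketched in a few lines of OCaml. This is a toy illustration, not the MirageOS tooling: `build` here just hashes its inputs, standing in for a real compile step.

```ocaml
(* Toy model of a build step: hash the source plus any embedded metadata.
   A real compiler produces a binary; here the hex digest stands in for it. *)
let build ?timestamp source =
  let meta =
    match timestamp with
    | None -> ""                                   (* deterministic build *)
    | Some t -> Printf.sprintf "built-at:%f" t     (* timestamp leaks into output *)
  in
  Digest.to_hex (Digest.string (source ^ meta))

let () =
  let src = "let () = print_endline \"hello\"" in
  (* same source, no embedded metadata: identical output every time *)
  assert (build src = build src);
  (* same source, but a timestamp embedded at build time: outputs differ *)
  assert (build ~timestamp:1.0 src <> build ~timestamp:2.0 src);
  print_endline "reproducibility demo ok"
```

This is exactly why reproducible-build tooling normalizes or removes timestamps and temporary file names before comparing outputs.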
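On the remark that certificates travel as raw DER rather than PEM/Base64: Base64 encodes every 3 bytes of binary as 4 ASCII characters, so a PEM body is at least a third larger than the DER it wraps. A minimal OCaml sketch of that arithmetic (the 1200-byte certificate size is a made-up example value):

```ocaml
(* Base64 maps each 3-byte group to 4 ASCII characters (with padding),
   so the encoded length is 4 * ceil(n / 3). *)
let base64_len n = 4 * ((n + 2) / 3)

let () =
  let der_len = 1200 in                 (* hypothetical DER certificate size *)
  let pem_body = base64_len der_len in  (* Base64 body, ignoring PEM headers *)
  Printf.printf "DER: %d bytes, Base64 body: %d bytes (+%d%%)\n"
    der_len pem_body ((pem_body - der_len) * 100 / der_len)
  (* prints: DER: 1200 bytes, Base64 body: 1600 bytes (+33%) *)
```

So sending certificates in DER during the handshake saves roughly a quarter of the bytes compared to shipping the PEM form.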
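The albatross idea of carrying deployment policies in certificate extensions (how many VMs, how much memory, which bridges) can be modelled as a simple admission check. This is a hypothetical sketch: the record fields and the `allowed` function are made-up names, not the real albatross API, which also deals with OIDs, ASN.1 decoding, and the CA chain.

```ocaml
(* Hypothetical policy, as would be decoded from X.509 extensions in the
   CA chain presented during the TLS handshake. *)
type policy = {
  max_vms : int;            (* how many unikernels this client may run *)
  memory_mb : int;          (* per-deployment memory budget, megabytes *)
  bridges : string list;    (* network bridges the client may attach to *)
}

(* A deployment request is admitted only if it fits within the policy. *)
let allowed policy ~running ~mem ~bridge =
  running < policy.max_vms
  && mem <= policy.memory_mb
  && List.mem bridge policy.bridges

let () =
  let p = { max_vms = 4; memory_mb = 1024; bridges = [ "br0" ] } in
  assert (allowed p ~running:1 ~mem:32 ~bridge:"br0");          (* fits *)
  assert (not (allowed p ~running:4 ~mem:32 ~bridge:"br0"));    (* VM quota hit *)
  assert (not (allowed p ~running:1 ~mem:2048 ~bridge:"br0"))   (* too much memory *)
```

The point of putting this in the certificate chain is that the policy arrives already authenticated: the host only has to decode and enforce it, not look it up in a separate trusted database.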