I'm going to talk about microkernel-related stuff, and I'm going to introduce our project, which started with the inspiration of the microkernel work. I'm Hajime from Japan, and I'm going to talk about why we started this project. As you might know if you are familiar with this area, there are plenty of projects trying to emulate or mimic the Linux kernel's behavior in a different shape of software. Microsoft created the Drawbridge project, which started as a library operating system; it later informed another piece of software called Windows Subsystem for Linux, which tried to run Linux binary executables on top of the Windows operating system without any hypervisor technology. Google also introduced another project called gVisor, which is a kind of sandbox for container environments that emulates Linux kernel features in user space, in Go. Academic projects like Graphene also offer similar facilities: they provide a Linux compatibility layer on various operating systems, as the Drawbridge project has been doing. So they offer Linux binary compatibility on various operating systems, not only Linux but also Windows, macOS, and the BSD family, as well as in different execution environments like the Intel SGX extension. And the final project I'll mention here is Noah, which is another compatibility-layer project for running Linux applications on macOS. These projects are very similar; the internal implementation designs look very much alike. But the compatibility they achieve is always incomplete: some system calls are always missing. And they end up going in different directions in order to provide Linux facilities.
For example, Windows Subsystem for Linux now has version two, which shifts the architecture from the library-operating-system-based one to a hypervisor-based one; I think they gave up on emulating Linux with this kind of user-space implementation. But our motivation in our Linux Kernel Library project is that we don't want to rewrite the Linux kernel twice, or three times, or forever. The Linux kernel is written in C, and your program, or our program, may also be written in C. Even if you are using a different programming language, you can usually call C functions from your language runtime. So our motivation is: why not reuse the Linux kernel source code in a different shape, as a reusable library? This motivation is quite similar to that of the NetBSD rump kernel folks, who introduced the anykernel architecture into a monolithic kernel.

This slide shows a brief overview of what LKL, the Linux Kernel Library, looks like. Our code generates a library which can be shared, or linked from external programs. The implementation of LKL takes a similar approach to User-Mode Linux, which creates a new architecture directory inside the Linux kernel source tree. But this architecture is totally hardware-independent: we outsource all the hardware-dependent code from the architecture code into the separate pieces described in the bottom part of this figure. By taking this design, the Linux kernel code becomes executable in various environments. So far, we have run this Linux kernel code in Linux user space without any kernel interaction, and we can also use this library on the Windows operating system, on FreeBSD, and in Android user-space applications. Some people are also playing with this code in a UEFI bootloader.
And some people are also using this portion as a unikernel. You can look at this program as a user-space network stack: because the library contains the TCP implementation, a socket program can call into this library without interacting with the kernel. There are plenty of user-space network stack projects, and I listed part of them here. Most of them try to achieve higher performance through user-space execution, because they can eliminate the context-switch overhead of crossing between user and kernel mode. But they usually suffer on the application-interface side, because they use their own interfaces for the application. Some of them provide POSIX-style compatibility for this kind of user-space network stack, but it's not always complete; for example, some of them lack epoll support. Our goal is to be identical to what Linux can do, so both should behave the same; that's our motivation.

The internals of the architecture, as I mentioned: we create a newly introduced architecture inside the Linux kernel tree by eliminating any hardware or underlying-layer dependency from this architecture. The outsourced portion, accessing hardware resources like the clock, memory, or scheduling, is contained in the host environment, which resolves all the underlying-layer dependencies. So this architecture part stays very portable inside the Linux kernel tree. Another goal of this project is that we don't want to modify the gray part: not only the kernel code but also the application code should be usable as-is. We decided to provide an API to the application, but this raw form of the API is not compatible with the standard library implementation.
The first component, the host backend, is located under the newly introduced architecture and tries to unify the interface across different environments. Each host environment has a different implementation: we currently have a POSIX interface and a Windows operating system interface, as well as a bridge implementation to the rump hypercalls, which can expand the set of supported underlying layers. In order to let this library communicate with external components, we provide a virtual device layer inside this host environment. It is exposed as a virtio interface, so the Linux kernel code can use its virtio driver implementations. We have implemented block device implementations as well as a network interface implementation, and we have also experimentally implemented a virtio host file system backend, which is exposed as a 9p file system to the driver layer.

The second component, which was mostly explained on the previous slide, is the CPU-independent architecture implementation inside the Linux kernel tree. The third component is the application interface, located on top of the kernel implementation. LKL exposes its own system call API, which we call the LKL system calls. But this API is not POSIX-compatible, as I mentioned before, so we provide various ways to access this interface from applications. The first is the raw LKL system call API: if you have an application and you want to use LKL, you have to rewrite the system call parts of your application, replacing symbols such as socket with the lkl_-prefixed ones. This API is defined slightly differently from the typical standard POSIX API, because it is a direct entry point into the Linux kernel: the error numbers and return values differ slightly from the POSIX API.
Another interface we provide is the so-called hijack library, which is basically dynamic translation at runtime via LD_PRELOAD. If you have a POSIX application and you don't want to rewrite it, or you don't want to rebuild it, your binary's calls can be redirected through this additional library. But it has a limitation: some standard library implementations make some symbols invisible to the application side, so you cannot interpose such symbols, and in that case your application may not work well. Another API is our own standard libc implementation: right now we use musl libc as the standard library, and we modified this musl libc so that LKL can be used from user-space code.

Now I'm going to share some of the use cases we have for the Linux Kernel Library. Because this is a library: if nobody uses it, the software is useless. So I'll present the use cases we know of, but I hope you will come up with more if you have a nice idea. The first typical use case is mounting a disk image without root privileges. Some operating systems have experimental file-system-in-userspace implementations. Say you have an ext4 file system image for a virtual machine, and you want to modify that image on a foreign operating system like Windows: you may have to use an alternative implementation of ext4, which may not cover the complete ext4 specification, or you have to boot a Linux OS in a virtual machine, mount the disk image, and modify its contents there. But you don't have to do such complicated stuff: you can just use this library from a Windows application and modify the image inside your user-space program. So you can now modify the contents of file system images on a different operating system.
Another use case is introducing a kernel feature in a very restricted environment, where you don't have the freedom to recompose the kernel-space implementation. This is an example of introducing the Multipath TCP implementation on an Android phone. By the way, MPTCP was upstreamed just last week, so we don't have to do this anymore; this is a snapshot from about two years ago, I guess. At that time the stock kernel didn't have MPTCP support, but we could do it because we had a user-space implementation of the Linux kernel on the Android phone.

Another toy implementation is a Unix pipe as a network interface card. Say you have two different programs using LKL, and each program has its own network stack implementation. If one program writes the packets generated by its kernel to its output, and the next program receives those packets via a pipe as the receive channel of its network interface card, you can do something interesting. If you want access control over this pipe communication, you can use the grep command to filter specific packet payloads with a grep pattern. And if you want to duplicate the generated packets to some external program like tcpdump, you can do that with the tee command, mirroring the packets to a different process. That's almost it.

Another example is converting the Linux kernel code into a JavaScript program. Since last year, I guess, the Linux kernel can be built with LLVM, and LLVM can generate JavaScript code with some additional tricks. You can then run the Linux kernel code without any emulation, unlike what JSLinux does; you can directly invoke the Linux kernel code in the browser. This is the initialization task of the Linux kernel in C, and it is translated into code like this, automatically generated by Emscripten from the source.
And you can run the Linux kernel code in the browser. Another use case is running Linux binary code on a different operating system. This demo is actually running on macOS, with an nginx instance serving the contents of these slides. We have implemented an OCI runtime which coordinates the invocation of the Linux Kernel Library and interacts with the container engine. We have currently tested it only with containerd, as well as with our own crafted dockerd, which runs on Darwin without any virtualization. So this is the running Docker demo: it is not connected to a Linux host, but is running natively inside this MacBook. You can run the docker run command with our own crafted images with the nginx command, and it runs the Linux kernel boot code with the main call of the nginx implementation. We actually need some additional configuration, but that's not what I'll show here.

There are also some folks using LKL to test file system implementations with their own fuzzer. I'm not involved in that project, so if you are interested you can look at their paper, which is listed on the slide. There is another implementation using an approximation, but I'm running out of time, so I'll skip that one. We are also trying to expand the underlying-layer facilities in order to integrate other implementations, like Solo5. And we have started upstreaming this code into the Linux mainline. We actually tried to upstream it several times in the past, but now we have a suggestion from the maintainers that we should work together with User-Mode Linux. So we are trying to integrate this code into the User-Mode Linux implementation as a different execution mode, as a library. That's almost it.
So if you have a program and you can link this library with a linker flag, I hope you will see benefits that you couldn't get before. Thanks so much for your attention, and I'm happy to take questions. Questions? Yes, please. [Audience question not captured.] I didn't bring performance results today, but the performance characteristics are almost the same as other user-space network stacks: we can eliminate the user-space-to-kernel interaction, so we get the benefit of user-space execution, like what the DPDK folks can do. Thank you.