Anyway, our next speaker will be Sebastian. Thank you.

Okay, so today I want to present to you Rust on L4Re: integrating the systems language into a microkernel userland. We're basically bringing together two different worlds here: one is Rust and the other one is microkernels, and I expect half of you are into Rust and the other half into microkernels. Not exactly half each, but I am aware that some people are from the Rust world and others from the microkernel world. So I'm first going to present what Rust does and what L4Re is, then I'm going to tell you what's so hard about integrating this language into the L4Re world, then I'm going to show you what currently works and how it looks, and at the end I'm going to give you an outlook on what will be done in the future.

So first of all, what is L4Re? L4Re, also called Fiasco — it has many names — is basically a microkernel operating system built on a microkernel from the L4 family. The kernel itself is really small and forms a small, classical trusted computing base: only around 10,000 lines of code, I think, run in kernel mode, and the rest is all in user space. Only the scheduling is in the kernel; all the policies are implemented in user space, be it paging, be it other resource management. This is quite powerful, and obviously it also means drivers are in user space, and all the things you expect from a proper microkernel system.

L4Re is capability-based. A capability in this context means that you have access to a certain kernel object from your task. A capability is always bound to a particular task, and the reference is task-local: it's basically a number indexing into a per-task kernel table, so you can't just guess capabilities, because they're local to your task. If you don't have a capability, you can't do anything. So if you don't have a capability to contact somebody else, you actually don't even know that the other party exists.
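The capability model described above can be sketched as a small conceptual program: a capability is just a task-local index into a table of kernel objects, so a task that was never granted a capability cannot even name the object. All names here (`CapTable`, `KernelObject`, `Capability`) are illustrative, not L4Re API.

```rust
// Conceptual model of L4Re-style capabilities, not real L4Re code.
use std::collections::HashMap;

// A capability is a task-local index; the same number means nothing
// in another task's table.
#[derive(Clone, Copy, Debug)]
struct Capability(usize);

#[derive(Debug, PartialEq)]
enum KernelObject {
    Thread,
    IpcGate,
    Irq,
}

// One capability table per task, maintained by the kernel.
#[derive(Default)]
struct CapTable {
    slots: HashMap<usize, KernelObject>,
}

impl CapTable {
    fn grant(&mut self, slot: usize, obj: KernelObject) -> Capability {
        self.slots.insert(slot, obj);
        Capability(slot)
    }

    // Invoking an object requires a valid capability; there is no other
    // way to even refer to it.
    fn invoke(&self, cap: Capability) -> Option<&KernelObject> {
        self.slots.get(&cap.0)
    }
}

fn main() {
    let mut task_a = CapTable::default();
    let gate = task_a.grant(3, KernelObject::IpcGate);
    assert_eq!(task_a.invoke(gate), Some(&KernelObject::IpcGate));

    // A guessed index that was never granted yields nothing:
    assert_eq!(task_a.invoke(Capability(7)), None);

    // The same index in a different task's table also refers to nothing:
    let task_b = CapTable::default();
    assert_eq!(task_b.invoke(gate), None);
}
```

The point of the sketch is the isolation-by-default property: an empty table means the task can contact nobody.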
Which is quite cool, because by default a process is isolated; it can't do anything — it wouldn't even be allowed to do memory management. Okay, so much about L4. Ah, yes, I forgot: with these capabilities you can obviously do virtualization of resources and so on, and L4Re features IPC mechanisms, obviously, because it's a microkernel system. And since we're on a modern microkernel with capabilities, there are C and C++ bindings, though the C++ bindings are the ones you actually want to use, because there is basically a protocol you need to follow for the IPC mechanisms to work, and C++ takes a lot of that work off you by using templates. You could use the C bindings as well, but then you need to do all of that by hand. Oh yes, I forgot to mention that we have a pthread library on L4Re and basic POSIX APIs, just for the sake of portability.

So why Rust? I mean, obviously you have heard of Rust — I have not met anybody who hasn't. Rust is memory safe in most regards: it does memory management at compile time. The programmer doesn't need to care about when a variable is freed, at least not in the C sense, and you prevent all these bugs like use-after-free, double free, and so on. It has strong — also called fearless — concurrency, which permits you to write programs without data races. And you can also, to some extent, prevent memory leaks and so on. All these goodies are basically zero-cost, so in the end you don't pay for them. It's all done by the compiler at compile time, so we don't have any garbage collection or anything. All this you get with performance comparable to C++. So it's actually worthwhile exploring this.

Let's have a look at the Rust ecosystem and its architecture. First of all, there's a compiler called rustc. rustc is quite a normal compiler, basically. What comes on top is Cargo. Cargo is the package manager of the Rust project.
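The two compile-time guarantees just mentioned — no use-after-free thanks to ownership, and no data races thanks to the type system — can be shown in a few lines. This is a generic Rust illustration, not L4Re-specific code:

```rust
// Ownership and "fearless concurrency" in miniature.
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Ownership: `s` is moved into `t`. Using `s` afterwards would be a
    // compile error, so a use-after-free cannot even be expressed.
    let s = String::from("hello");
    let t = s; // `println!("{}", s)` here would not compile
    assert_eq!(t, "hello");

    // Concurrency: shared mutable state must go through a lock, and the
    // compiler enforces this via the Send/Sync traits — forgetting the
    // Mutex is a type error, not a runtime race.
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                *c.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```

Both checks happen entirely at compile time, which is what makes them zero-cost at runtime.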
It has a Cargo.toml, which is a configuration file specifying dependencies, which files are actually used, and a few more bits. And on top there's rustup, which is a component installer. You might think, okay, why are you talking about a component installer — that's mere advertisement. Actually it isn't, because rustup not only installs rustc and Cargo, it also manages components, that is, different standard libraries for the different targets you want to compile for. Because rustc itself is a cross-compiler, you don't need anything special in Rust: if you have rustc on your system, you automatically have a compiler for, let's say, ARMv6 or SPARC, whatever. So with rustup you can just say, please grab me the standard library for ARMv6, and it does so. And this is obviously quite important for our use case, because if you install Rust by default, it won't have the L4Re standard library, but you can use rustup to get it.

Okay, so: integration challenges. First of all, the standard library. Most of the standard library uses Unix or POSIX APIs. It's split into many parts, and there's also Windows stuff, but apart from Windows it's actually all POSIX — and L4Re isn't POSIX. It has a compatibility layer, and we are actually using it. So I ported the bunch of libraries listed there; they were all ported to be used on L4Re, but internally they use the POSIX API. As a side note, I should mention that the standard library is basically a facade over all these libraries listed there. This is quite powerful, because if you decide not to use the standard library — because you don't want that much code in your program, because you have size constraints or whatever — then you can easily select just what you need, for instance just the liballoc crate for allocations. If I mention the word crate, you can substitute it with "library" throughout this talk, just for those who don't know the word crate.

Another challenge is Cargo.
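The rustup workflow described above looks roughly like this on the command line; the ARM target name is just an example of an available cross target, and which targets rustup can serve varies by Rust version:

```shell
# List the targets this rustc knows about (installed ones are marked):
rustup target list

# Fetch the precompiled standard library for a cross target:
rustup target add arm-unknown-linux-gnueabi

# Cross-compile a Cargo project for that target:
cargo build --target arm-unknown-linux-gnueabi
```

The same mechanism is what eventually delivers an L4Re standard library to a stock Rust installation.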
BID, which is the build system of L4Re, does package management itself. It does dependency tracking, and it's quite convenient — but it's not convenient if you go for a language like Rust, which brings its own dependency management tool as well. You might think, okay, just ignore Cargo, integrate the Rust compiler and call it directly, but it's not that easy, because the Rust ecosystem is built around this Cargo configuration. If you want conditional compilation, for instance — called a "feature" in Rust — you actually want a Cargo configuration file, because otherwise you would need to manually pass all these flags to the Rust compiler yourself when you port to L4Re. Therefore you actually want to use Cargo, because it's all implemented there. You just somehow need to tell Cargo: please don't track dependencies, but do the rest, like the conditional compilation, these features I just mentioned. So this one is still unsolved.

And then I talked about IPC. Obviously we could just use a crate called bindgen, generate C bindings to be used from Rust, and then we have IPC working. You can do that, but it's cumbersome. What you actually want is to leverage the Rust type system, so that all the code required for useful message passing is generated by the compiler at compile time, not written by hand. And this also is not done yet.

Okay, the port and the compilation. Ah, the slide jumped ahead — you're at the JSON already. Rust itself, as I mentioned, is a cross-compiler. It brings so-called target specifications. Okay, we'll do that first. Yeah, brilliant, okay. So you might have seen things you shouldn't have seen yet, but since you weren't paying attention to me, that's not a problem. Target specifications: Rust needs only a handful of pieces of information about a target system, like integer and pointer widths and so on, and you can specify these as a target specification. These are normally built into the compiler itself, but Rust also allows so-called JSON target specifications.
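For readers unfamiliar with Cargo "features", a minimal sketch of what such a configuration looks like — the package and feature names here are made up for illustration:

```toml
# Hypothetical Cargo.toml for an L4Re program.
[package]
name = "l4re-demo"
version = "0.1.0"
edition = "2021"

[features]
# Features are named switches for conditional compilation.
default = []
ipc = []          # enabled with: cargo build --features ipc

[dependencies]
libc = "0.2"
```

In the source code, items are then gated with attributes like `#[cfg(feature = "ipc")]`; without Cargo evaluating this file, you would have to pass the equivalent `--cfg` flags to rustc by hand, which is exactly the inconvenience described above.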
And these basically specify the same values, but can be passed dynamically, because you can modify the JSON file. That's quite nice for prototyping, which is what we did: we just filled the JSON with values and tried until it worked. Which is nice, but it's JSON, it's not validated, and if you pass it to an arbitrary version of Rust, it most probably won't work. Given that Rust is released every six weeks, it's quite probable that your specification won't work after a while, or not with a specific Rust version on a specific system. So it's better to integrate this properly. And the good news: Rust nowadays ships an L4Re target specification.

So, you've seen this slide already as well: the compilation process. I guess you're familiar with that. You can just invoke GCC and tell it to output an executable in one go, but internally it will compile to assembly and output an object file, and this object file will be linked by the linker; then you have an executable. And the nice thing is, you can do all these steps by hand. In Rust it's different. With rustc you can also just invoke it and say, please compile this to an executable — that works fine. But ripping the steps apart doesn't work: rustc simply insists on calling the linker. I am aware that rustc supports emitting object files, but it really does only emit an object file. It doesn't provide you with all the information, like an entry point and so on, that you would actually need for an executable. So this is not really useful; it's only useful for getting an object file out of something which doesn't depend on anything else, which is rarely the case.

Okay, so the conflict: who does what? BID normally does the linking. The process is really that the object files are generated out of C code, and then a custom linker command line is basically auto-generated by the build system. Custom means here: we have a custom linker script, we have custom startup code, and so on.
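An abbreviated sketch of what such a JSON target specification contains — the exact field set rustc expects changes between versions, and the values below are illustrative rather than the real L4Re specification:

```json
{
  "llvm-target": "x86_64-unknown-l4re-uclibc",
  "arch": "x86_64",
  "data-layout": "e-m:e-i64:64-f80:128-n8:16:32:64-S128",
  "target-pointer-width": "64",
  "os": "l4re",
  "env": "uclibc",
  "linker-flavor": "gcc",
  "panic-strategy": "abort"
}
```

Such a file can be passed as `rustc --target=my-target.json`; the lack of schema validation is exactly why a file that works with one Rust release may silently stop working with the next.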
It also does dependency management, as I have pointed out various times. Since rustc insists on calling the linker, we have our problem: how can we actually get a binary out without rustc knowing the exact linker command line for L4Re? The solution is somewhat problematic: we have a dirty hack. Since we haven't integrated Cargo properly, and we haven't integrated the full command line of L4Re's linker invocation into rustc, we currently just build a static library out of our program. So we just tell rustc: please make a static library. Is there some problem with the attachment? Okay, so we have a dirty hack which basically says, please compile this piece of code to a static library. Then you have to take care yourself that the linker will find the entry point, which usually boils down to exporting a suitable, unmangled main function. And this is dirty because you don't have all the niceness of Rust: you don't have the proper dependency management of Rust, you don't have features, which are the conditional compilation of Rust. But at least you can write Rust and get a binary out of it, just with this intermediate step of a static library.

Okay, so let's get a bit more concrete. You see a Makefile now, I hope. The first two lines basically tell the build system where files are, so not too important. Then there's a normal target declaration, which just tells make what the binary is called. And then there's this nice SRC_RS variable, and this one does most of the magic: it tells BID, this is a Rust file, please handle it differently. And behind it are the source files of the application we want to build. It is quite interesting, because rustc, the compiler, only needs the first file to figure out which other files are part of the project. In a C program we would need to list all the C files of the project; here we only need the first, which is canonically called main.rs.

Okay, so Rust and a normal main function.
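The Makefile the talk describes might look roughly like this; the `SRC_RS` variable name follows the talk, while the directory variables and the included `prog.mk` follow the usual BID conventions and should be treated as an assumed sketch, not the exact file from the slide:

```make
# Sketch of a BID package Makefile for a Rust program.
PKGDIR  ?= ../..
L4DIR   ?= $(PKGDIR)/../..

# Name of the resulting binary.
TARGET  = hello-rust

# Marks this as a Rust build; only the crate root is listed, because
# rustc discovers the remaining modules from it.
SRC_RS  = main.rs

include $(L4DIR)/mk/prog.mk
```

This is the contrast to C packages, where every source file has to be listed explicitly.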
It's just fn main with a println!, as you would expect. For those who are not familiar with Rust: println! is a macro. It works roughly like printf in C, just more type safe and with braces instead of percent signs. But what we actually have to write looks like this. First of all, we have to instruct the compiler not to mangle the name of the function, which needs to be done because the type system of Rust is slightly more advanced than C's. Then we have to tell it to please use C calling conventions by writing extern "C". And then in the function definition — the declaration, actually — you see the C types we have to use: Rust has its own types, so we have to use the C types at this place. But the rest looks the same. It didn't half a year ago, but now it looks the same as the slide before, because the standard library is now ported.

Okay, this is what works at the moment. Let's have a look at what needs to be done. Obviously, the linking step with the static library in between needs to go. For this to work, we have to somehow get Cargo to integrate nicely into BID, and BID to be nice to Cargo. Then we should figure out a really efficient way to handle IPC. I mentioned earlier that C++ does most of the magic that is required for IPC: you basically pass the object you want to send as a template parameter, for instance, and the compiler takes care of generating the code for you. Rust can do similar things with proc macros and so on, and it would be really great to have the same convenience in Rust. So instead of using bindgen for these simple C bindings, we should really invest more time into properly typed IPC bindings. And that's it. So, I'm happy to take questions now.

So — the Robigalia project, for writing a Rust microkernel userland around the seL4 port? Okay, so the question is whether it's related to seL4 and the effort to basically rewrite the userland in Rust, Robigalia.
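The exported entry point just described can be sketched as follows. On L4Re the static library must expose an unmangled function with C calling conventions as the entry point; here the function is named `l4re_main` and called from a regular `main` purely so the example runs anywhere — the name and the wrapper are illustrative:

```rust
// Sketch of an unmangled, C-calling-convention entry point.
use std::os::raw::{c_char, c_int};

// #[no_mangle] keeps the symbol name intact so the linker can find it;
// extern "C" selects the C ABI; the parameters use C types from std::os::raw.
#[no_mangle]
pub extern "C" fn l4re_main(argc: c_int, _argv: *const *const c_char) -> c_int {
    println!("Hello from Rust, argc = {}", argc);
    0 // C-style success return code
}

fn main() {
    // Simulate the C runtime handing over control to the entry point:
    let ret = l4re_main(1, std::ptr::null());
    assert_eq!(ret, 0);
}
```

With the standard library ported, the body of such a function can be ordinary Rust — only the signature has to speak C.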
And no, it's not. The thing is that L4 is just a family of microkernels, so even though they have roughly similar semantics, they are completely different. Robigalia is a project which tries to rewrite the userland for that particular microkernel, whereas we try to just get efficient bindings on top of the L4Re userland which already exists.

Have you tried writing programs on the Rust side using capabilities? Do you have some experience of what working with capabilities looks like in Rust? Okay, so the question is whether I have used Rust to write programs using capabilities. Was that all? No, because I haven't seen any project yet which runs Rust on a microkernel with capabilities. For instance, I looked at Redox, but I haven't found any word about capabilities there, so I'm open to any pointers.

Do you think the Cargo project could benefit from allowing you to control how it links? So, the question is whether Cargo could benefit from giving away more freedom to make the linking process more custom — is that what you're asking? I would say so. Other build systems have similar issues. There's actually a tracking issue for this on GitHub, and it's quite large, because a lot of embedded people also want to do custom linking. I personally would think, yes, it would benefit Cargo, but I don't know where to start reporting problems there. So yeah, I think it would be beneficial in general, not just for our niche case. Okay, thank you.