It's too dark. So I welcome Robin from ARM, and he has, for me, a very interesting talk about ARMv8. So they ported Redox, which is written in Rust, to the ARM 64-bit CPU architecture. So welcome, Robin. Thank you. Thanks. So this is my first FOSDEM. I've never been to one before. And I must say, I'm really, really impressed. And I'm going to be a regular now, just so you know, right? Thanks for coming. I'm overwhelmed with this response. I thought I'd just come here with this funky new language, and there'd be two people in the room who'd be interested. But it turns out a lot of you are. OK, so the title? Louder than this? It's going to be really hard. I will try. I'm sorry, I'm very soft-spoken. So "A microkernel written in Rust" is the title. Sometime back (I'll explain the background to this) a very senior processor architect at ARM whispered in my ear about Rust. And in my spare time, I've been playing with Rust. Everything over here is basically a bit of an experiment to scratch an itch which I have: porting the Unix-like Redox operating system to ARMv8.0. I want to talk about Redox, this nice new microkernel-based stack. And I want to talk about how I got it going on ARM. I should point out, though, as I was telling Sebastian, I think it was, I'm absolutely floored by the level of detail that the Genode guys put in their slides. I've been watching their work for a while now. This is nowhere close, disclaimer. So I have to talk about Redox on ARM, but Redox is written in Rust, which is a fairly new language. It's fairly controversial as well, I would say, and I might come to some of that later. But in order to explain an operating system stack written in a new language, I should introduce the language. Now that's hard, because you can turn that into a tutorial for the language, and I don't have time for that.
So I've cherry-picked only those things which, in my subjective opinion, were interesting things in Rust for doing microkernels. So I will discuss Rust as well. So the goals are: I primarily will talk about Rust, a very lightweight intro, some of the unique features. I'll explain why at ARM we think it's interesting. And I'll try and speak a little bit about what has been done in the Rust language for the ARM architecture. And then I'll get into Redox. But all along, I want to keep it a little informal, because beyond the microkernel and the language, I want to talk about some interesting anecdotes and paradoxes that I see in the space that I work in. I'll share that. So I work in Cambridge for ARM in the open source software division. It's one of the biggest divisions, as it turns out. I work specifically in the system software architecture team. We are a bunch of four people who basically are responsible for trying to chart out the right strategy in different ways. And I could explain that, but you'll have to buy me beer. But you look at firmware, kernel, middleware, platform, all that kind of stuff. But it's all about software, really. It's kind of an interface between the software realm and the hardware guys. My specific responsibility is the so-called safety track, where we want to promote the uptake of ARM IP in safety-critical domains using open source software as a medium. This comes as a shock to many people. It came as a shock to me, because, what the hell? Open source software in a safety-critical automotive environment? Are you mad? It turns out that it's there. A lot of open source software is used, but it's not necessarily used for productization. Anybody in the business of silicon design or system design in the automotive space? I'm going to use the automotive ecosystem as a punching bag. They're all using open source software for silicon bring-up, research and development, and yada, yada, yada. Up until the point they want somebody on the hook for the product.
That's when they remove the open source stuff, and they go to proprietary service vendors. It's a very well-established thing. So, areas of interest. All of this is my own personal list. I like systems programming languages. Being in Cambridge, having access to Cambridge University is a bonus. I get to speak to people over there on interesting topics. The university works with ARM on ARM architecture extensions. I'm particularly interested in ARM architecture extensions that are for safety and security. System design, how the architecture is turned into microarchitecture and stitched together into a platform: I'm involved in those things. Software standards for ARM. It took a while for ARM to realize the value of doing open source. When I joined ARM, there were like two and a half people doing open source software. Now there are lots of people doing open source software. The reason is that we recognized the value of trying to use open source as a forcing function to standardize software. Because ARM's biggest strength is also its biggest weakness. The strength being flexibility: license the IP, do what you want with it. That's a nightmare for software. So having standards is important, and I interact with those guys. Open source communities. My primary charter is open source. There are lots of people here whom I've met, who are my friends, in this room actually, with whom I have tried to find out whether their particular style of doing a microkernel, or some related technology, is optimized for ARM or not. How can we help? Do they have a community? You don't have a community? Come and attend Linaro Connect. I run a safety track there. We can introduce you to people, that kind of stuff. And some people have actually benefited from that. So all of this goes towards my primary focus area, which is safe data fusion and perception.
Everybody who's in the autonomous space, whether it's for automotive or anything that requires perception, is in the business of trying to come up with system hardware that is generic and applicable for data fusion and perception in different markets, with some parameterization on top. And it is a really hard problem to make safe, right? This is a very, very simplified overview of the kind of systems that we reason about at ARM. You have sensor blocks with lots of arrays of sensors. You have an IO concentrator that tries to fuse as much as it can together. The data format for all the sensor information is standardized using algorithms; I can go into detail if you like. But anyway, all of that is done so that you can build a sufficiently comprehensive environmental model of the immediate vicinity of whatever it is that you're trying to make capable of perception. You have general purpose compute clusters with processors with insanely high frequencies of operation and very, very sophisticated microarchitecture implementations. That's fine. I have that effect on people. I'm sorry. Yeah, so you have an inference block, which is basically a fancy term for a subsystem that has the ability to run pre-trained neural network models which have been trained to detect lanes and signs and pedestrians and all of that stuff. And you have a synergy between the general purpose compute thing and the inference block to try and come up with a particular goal expression that needs to be solved. Like, okay, someone pressed the brake, right? You have to decide what to do next, right? So there's a goal-solving aspect to this. And then all of this goes out to these mechatronic interfaces and onto the actuators. Now, how do you make this safe?
There are lots of weaknesses in these links in terms of unpredictable operation, in terms of inability to meet deadlines, in terms of neural networks being generally very hard to put constraints on, in terms of a priori execution times and things like that, right? So that's all the stuff. And microkernels actually find use on the general purpose compute side and, interestingly, now on the inference side as well. Lots more detail there. So the processor architect said, hey, you should check out Rust, right? And when a processor architect at ARM tells you you should check something out, that usually means they want you to actually do a rigorous analysis of it. So I kind of, I wouldn't say I did a rigorous enough analysis, but I said, okay, I'll look at this a little more seriously in my spare time. I wanted something that allowed me to explore microkernel-based system software composition, because that is the most prominent design pattern in the safety space when it comes to operating system architecture, right? All the proprietary operating systems that are actually popular with vendors are microkernel-based ones. I definitely wanted to look at the ARM architecture aspect of the system design. So we've added new instructions. We've added new capability into the instruction set architecture to support safe operation. There's memory tagging, there's pointer authentication, lots of stuff going in over there. I needed to reason about that. And of course I've always wanted to look at a safety-themed systems programming language, and here was my opportunity. So I said, all right, how do I blend all of this together? I started writing my own microkernel, because I'm a huge fan of the seL4 stuff. I've been following it for a long time. I know Gernot personally. But a new language, and it became a little daunting to try and mix all of this together. So I said, okay, has anyone else tried to do a microkernel?
And it turns out that somebody had, and that was Redox, and I said, I'll use that as a bit of a scratch-an-itch exercise, right? So before we get there, here is a paradox. This paradox is not about Rust or Redox. It's just a general paradox about some of the stuff that I look at, right? You often have completely orthogonal requirements based on hardware and software, right? Very, very quickly: this is a very notional trend of the kind of compute, in terms of nominal peak single-threaded compute, required in automotive over the years, right? I'm not putting anything absolute over here, because people read too much into any slide I put up in front of an open audience with my ARM t-shirt on, right? So this is just notional stuff. You had brake control, powertrain, fuel injection. These are what I call the traditional things in the car; they came about in the 80s. Microcontroller-class cores, everything really well understood in terms of worst-case execution, yada, yada, yada. Interesting tricks like redundant execution used to get additional confidence. Then you had IVI, in-vehicle infotainment, where there was a big spurt in compute requirement, because people wanted to watch movies and play games and all of that. That was great for ARM; Cortex-A processors are kind of dominating in that space. And then in the mid-2000s, everyone went absolutely nuts with this autonomous thing. And basically the compute requirement was just going nuts, okay? We haven't seen any trend where it's converging. It just seems to be going higher and higher and higher. Why am I telling you this? Because it raises some interesting paradoxes, right? So if you look at the degree of criticality of these classes of software, of course brake control, fuel injection: very high degree of criticality. We need the highest assurance. Give us your certificates of assessment. In-vehicle entertainment: nah, you can run Linux or something on it. I'm cool with that, right? I don't care so much about the response time.
Autonomous control: nobody says this explicitly, but actually, thinking about it, it has an exceptionally high criticality requirement as well. Fine. However, there is a linear trend between the sensitivity to deterministic execution and the degree of criticality. This is again kind of an unwritten rule. If you have something that's very critical, it must have an a priori known worst-case execution time, and all of that should be analyzed to death, and you should give me proof that you are working within those bounds. Reaction time, right? So as you're becoming more and more performant, your ability to react to an asynchronous event in a bounded fashion is becoming really hard to achieve. Because to give you that performance, processors are taking on multi-issue, out-of-order, speculative characteristics, complex cache hierarchies, translation regimes. It's becoming really hard to come up with the upper bound, right? So, autonomous control has very high requirements. It has very high criticality requirements and performance requirements, but high criticality requires deterministic execution, and the more you increase the processor's performance, the harder it is to bound its reaction time, right? This is a paradox, and people are going nuts trying to solve it, right? They're making progress there. And you know about the thin line between safety and security. This is just taken from CVE Details. I think this is a trend line and a pie chart showing that some of the most common causes of security, and I would say safety-related, problems are to do with overflows, use-after-frees, privilege escalation, all of that stuff, right? Which kind of fits very well with the theme of this talk, right? But I spend a lot of time just screaming silently in a room with padded walls, because I'm uncomfortable with more complexity when the question is about a person's life, right? But yeah, something needs to give.
Autonomous functions started coming into the limelight because of cars, but now they're going into toys, drones, robots, industrial assembly lines, take your pick. Hardware engineers are trying very hard to make this sensibly safe. The software complexity is not going to come down. It's going higher and higher, right? So anything we can do to make the software safer is welcome. What have we done? Mixed-criticality hardware and software design: let's chop up the design into bits that are very critical and bits that are less critical, and come up with some guarantees at those boundaries. Do traditional quality management of software: show me your specification, show me your design. Have you written the tests? Where is your test report, yada, yada, yada? Or, oh, you know what? C is ambiguous. We'll give you a specification called MISRA that kind of removes the ambiguity. Now you have to recode everything you wrote using this and show me that it's all compiling correctly. I don't like it. I hate it, but it's there in the industry. And then there's formal verification of hardware and software, which in my opinion is the gold standard, right? Machine-checkable proofs of implementation correctness of the hardware and the software, both. I really like what the seL4 guys are doing. But as I've told them on several occasions, this is up against a wall. Formal verification, unless it gets commoditized to the extent that your friend who wrote JavaScript to add this funky spinner on his webpage, unless he can use formal verification without knowing what formal verification is, people are not going to adopt it, right? So I'm a huge advocate of formal verification, and I'm there for the seL4 guys, as they know. But I worry that unless they solve the commoditization problem, it's going to be an issue. So then, why don't we look at a language, right? There's a new language that's disruptive, but it's got safety properties. Let's check it out. I don't want to see this.
My vision of cute furry cats in safe cars, you know, just very sweet, turning into this, right? What's up? I'm not just a car. I'm actually a killing machine, right? And you can program me remotely and influence my neural network to actually pick out the particular people I want to assassinate, right? So let's try not to get there. Finally, Rust. This is the URL of the Rust language website. That's the logo. And that's Hello World. I have a couple of quotes, right? It's a great way of introducing something nice and controversial, right? Just cherry-pick some quotes from the internet, usually from Reddit, and put them over here. But some of them are actually interesting, right? So: Rust is like doing parkour while suspended on strings and wearing protective gear. Yes, sometimes it'll look really ridiculous, but you'll be able to do all sorts of cool moves without hurting yourself. Gold. I need to send this guy a message and say, dude, that was awesome. The Rust book introduction has a very profound statement, which I initially took as marketing speak, but in hindsight, I think it makes a lot of sense. It wasn't always so clear, but the Rust programming language is fundamentally about empowerment: no matter what kind of code you're writing now, Rust empowers you to reach farther, to program with confidence in a wider variety of domains than you did before. I stand by this, by the way, okay? So Rust is very expressive. I often use Rust instead of Python or Ruby. This is by a gentleman called me; that's me, right? So it is expressive. I'm using it to do stuff that I would do with bash scripts or Python or Ruby, right? I just like the syntax. It's clean, it's easy to read, it's quite intuitive. Rust's expressiveness is great for making complex system software concepts accessible; this is again my comment. The snippet here is a page table walk example from Redox, in the generic part of the code. Just read this, okay?
And then try and look at similar code in Linux or FreeBSD or anything else. This is beautiful, it's prose, right? I want to get a reference to a particular fourth-level page table. If something's missing there, please create the table right there at the index I care about and fill in the hierarchy for my translation table walk. Easy to read, gets the idea across immediately. Very subjective, but I think it's really, really expressive. Performance, right? So everyone's got a bee in their bonnet about Rust versus C and C++ and Go. I, in my lab, intend to do a far more comprehensive benchmark. For the moment, what I present before you is the very controversial Benchmarks Game, which is a suite of synthetic toy programs, written by a large community, that tries to score programming languages. So what I do find is that the comment made in the Rust literature, which is that the performance of machine code generated from idiomatic Rust is typically at par with or better than machine code generated from idiomatic C++, is often true, okay? You can't read this clearly, but you'll get the slides. The names of the programs they have: there are nucleotide-folding kinds of algorithms and a bunch of others, and they've scored all of these languages on them. All I'm trying to say here is, there is a case to be made for Rust's performance. You know, it can't be ignored. Now, I'm not going to necessarily name specific projects, but being at ARM, I get to speak to people across the ecosystem on the topic of safety and, increasingly, on the topic of Rust. I think everyone's using it. Some people are using it as a way to locate hotspots in their code in terms of performance and then replace those hotspots with a library they've written in Rust, right? So that's the safer path, where they start getting performance and scale and all of that stuff without adopting Rust the language wholesale. Other people are saying, we're going to use Rust straight off. Amazon is a great example of that.
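The Redox snippet on the slide isn't reproduced here, but the create-if-missing idiom it illustrates can be sketched with Rust's standard entry API. This is a hypothetical stand-in of my own, not the actual Redox paging code:

```rust
use std::collections::BTreeMap;

// Hypothetical stand-in for a table hierarchy: map an index to a
// 512-entry table, creating the table on first touch. The real
// Redox page-table walker is more involved, but reads much like this.
fn main() {
    let mut tables: BTreeMap<usize, Vec<u64>> = BTreeMap::new();

    // "Give me the table at this index; if it's missing, create it
    // right there" -- one line, and it reads like prose.
    let table = tables.entry(42).or_insert_with(|| vec![0u64; 512]);
    table[0] = 0xdead_beef;

    assert_eq!(tables[&42][0], 0xdead_beef);
    println!("table 42, entry 0 = {:#x}", tables[&42][0]);
}
```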
So now some random statements, which I can back up. You can't forget to explicitly initialize variables with Rust. The compiler will prevent that. It won't allow it. You can't overflow an array. The compiler can't always check that at compile time, but at runtime you will get a panic consistently, as opposed to C or C++ where you may or may not have something bad happen, right? Which is a feature. I love C. My career was based on C and C++, but facts are facts. You can't forget to free memory allocated on the heap. No way. If shared data is protected by a lock, you have to take that lock, because the lock owns the data. You can't say, I used a wrapper in a library which had an API asking me to take this lock and I forgot, or something, right? It won't compile. You can't have a dangling pointer. A double free is not possible. Use-after-free is not possible. Generally speaking, there is no undefined behavior, and they put a lot of emphasis on making sure it stays that way. Excuse me. So what's the big deal? The big deal is it's all checked at compile time, which is huge. So it's actually a combination of two languages. Let's turn up the controversy a little bit. There's safe Rust and there's unsafe Rust. The moment I say this to somebody who's a bit of a skeptic, and by the way, at this FOSDEM I've had two or three instances of heated debates on this one: it's unsafe, that's it, I'm not using it. It's unsafe. Don't talk to me now. It's supposed to be safe. It's unsafe. I'm not talking to you. That's not the point. The point is, with C and C++, the whole paradigm is unsafe by design. I should be able to create a buffer populated with random numbers, get a function pointer to the start of that buffer, and call that junk as code. Because I need that for a whole class of problems, and C and C++ give me that. You can't do that in safe Rust, for very good reason. However, if you do need to use that kind of shenanigan, you have an escape hatch called unsafe Rust.
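A tiny illustration of that escape hatch (my own toy example, not from the talk): the raw-pointer games C allows are still available, but only inside a block you must explicitly label `unsafe`:

```rust
fn main() {
    let value: u32 = 0x41;

    // Creating a raw pointer is fine in safe Rust...
    let p = &value as *const u32 as *const u8;

    // ...but dereferencing it is not, so the dodgy part must be
    // explicitly fenced off. Tools and auditors know exactly
    // where to focus.
    let first_byte = unsafe { *p };

    // On a little-endian machine the low byte comes first.
    if cfg!(target_endian = "little") {
        assert_eq!(first_byte, 0x41);
    }
    println!("first byte: {first_byte:#x}");
}
```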
The cool thing is that you are forced to annotate all the code where you want to do this. So static analyzers know explicitly where the unsafe code is. Nine times out of ten, and it's probably ten out of ten, I'm just saying nine out of ten because I'm not 100 percent sure: if code compiles with the Rust compiler, it's correct. It doesn't have any of these problems. If you're writing an operating system, like I encountered with Redox, where I was doing context switching and I had problems with some assumptions I'd made about register frames or whatever: yes, there was a bomb-out, but I knew where to look, because the scope of the search space had been reduced to just the bits that were tagged as unsafe code. I think that's superb. I think it's great that there are unsafe sections in Rust, right? Yeah, I think I said all of this. So, it's not an interpreted language. A lot of people don't know this. It compiles to native machine code. There's no garbage collector, so there's no associated runtime non-determinism, which becomes an issue in other areas. There's a scope-based scheme where allocation and deallocation is checked at compile time. So the types of all variables have to be known at compile time. The compiler will not allow you to progress otherwise. But the compiler can do type inference, which is handy if you're writing a lot of code and you don't want to repeat the types again and again. So that's a modern feature; a lot of modern languages have automatic type inference as well, right? And here's the really cool one, right? As opposed to C and C++, where if there's an error in the return value of a function, you can be a really diligent and smart programmer and check it and do follow-on actions. Often enough, like me, you will forget, because you're lazy. But if you're using Rust, you have to acknowledge the fact that this function can return multiple kinds of return values, and you have to acknowledge that you mean to perform a follow-up action.
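That obligation to acknowledge both outcomes looks like this in practice. The `parse_port` helper is hypothetical, my own example:

```rust
// Hypothetical helper: parsing a port number can fail, and the
// signature says so. There is no null, and no ignorable errno.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // The caller must acknowledge both outcomes; pattern matching
    // on Ok/Err (or the ? operator) makes the error path explicit.
    match parse_port("8080") {
        Ok(port) => assert_eq!(port, 8080),
        Err(e) => panic!("unexpected parse failure: {e}"),
    }
    assert!(parse_port("not-a-port").is_err());
    println!("ok");
}
```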
You can't ignore it. That automatically takes away a whole class of problems, right? You don't have exception handling in Rust, because you have this separation of recoverable and non-recoverable errors, and there is a very nice trick involving generics and enumerations that is used to encapsulate potential problems. Like, if there is a result, is it okay or is it not okay? There's no null pointer check to see whether the right thing happened. You have a clean, pattern-matching, functional-language paradigm to actually look at the return of some operation and check whether the right thing happened or not. That's really cool. And you have panic if there's a problem, and you get backtraces and all of the funky stuff. There's another one that people don't realize the value of, okay: all data is immutable by default in Rust. If you want to modify data, you have to declare the data to be mutable. It's immutable by design. Think about it, right? This is the root of a lot of problems in software written at scale that is being run concurrently and being worked upon by dozens or hundreds of programmers, right? So if you have data that's immutable by default, you have to think about when you want to modify the data, right? That just suddenly makes the whole thing a little safer. There is no numerical type width ambiguity. I have been personally bitten in C and C++, where I assumed that the integer size was the word size on my architecture, which is great, but the architecture had 32 bits of word size on one implementation and 64 bits on another. Simple, right? Just encode the type widths in the names of the types, right? This is what we end up doing with stdint.h and what have you in C and C++. Here it's just a part of the language specification. I think that makes it very clear. You don't have classes, but you have composite types like structures and enumerations.
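Two of the guarantees just described, immutability by default and the lock-owns-the-data rule mentioned earlier, in a minimal sketch of my own:

```rust
use std::sync::Mutex;

fn main() {
    // Bindings are immutable unless you opt in with `mut`:
    let port: u16 = 8080; // `port = 9090;` would not compile
    let mut retries = 0;  // mutation must be declared up front
    retries += 1;
    assert_eq!((port, retries), (8080, 1));

    // The lock owns the data: the only way to reach the inner
    // counter is through the guard returned by lock(), so
    // "touched the data but forgot the lock" doesn't compile.
    let counter = Mutex::new(0_i32);
    {
        let mut guard = counter.lock().unwrap();
        *guard += 1;
    } // guard dropped here, lock released
    assert_eq!(*counter.lock().unwrap(), 1);

    // Array accesses are bounds-checked: a bad index is a
    // deterministic panic (or None with checked access), never
    // silent corruption.
    let buf = [1, 2, 3];
    assert!(buf.get(buf.len() + 7).is_none());
    println!("ok");
}
```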
If you want behavior for composite types, like you have in classes in C++, you have the ability to implement that using a concept called traits, which are interfaces. So you kind of get the design pattern of C++ for composable software design, but without all of the arguably very tedious stuff that C++ has, right? You have generics. A systems programming language with generics is quite unheard of, in my opinion. I'm happy to be proven wrong. It just makes de-duplication of code a real possibility, right? You have atomics. So I can write a synchronization primitive and I can say that it is contingent upon one particular variable that is of an atomic type. And if I want to say the specific kind of memory consistency I want to associate with that atomic, I can do that in the language itself, rather than relying on inline assembly and arcane instruction selection for a particular processor. And you have everything you need to build synchronization primitives of complex types on top. So this is the bit that is really hard to explain without whiteboarding and without time. But all of this is possible. The compile-time checking of memory safety is possible because of a set of rules which are called ownership rules, and the Rust compiler enforces those rules for you. What it means is that if you have some data and you hand the data over to another scope, you can't access that data anymore, because the ownership has changed to the scope that you called. If you still want to pass data around, you have to be very clear about how many people there are who can likely mutate this data. You can have references to shared data, but everyone must use an immutable reference if there are multiple people reading that data. If there is even one person who wants to mutate it, you can't have mutable references and immutable references together. This is probably the biggest hurdle, in my opinion, to actually cross when you're taking Rust on, right? But it's not hard.
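The ownership and borrowing rules he describes fit in a few lines (again, a toy sketch of mine): any number of shared readers, or exactly one writer, never both at once, and moving data transfers ownership:

```rust
fn main() {
    let mut data = vec![1, 2, 3];

    // Any number of shared (immutable) borrows may coexist:
    let a = &data;
    let b = &data;
    assert_eq!(a.len() + b.len(), 6);

    // A mutable borrow is exclusive. Taking `&mut data` while `a`
    // or `b` were still in use would be error[E0502] at compile
    // time; here the shared borrows are finished, so it's allowed.
    let writer = &mut data;
    writer.push(4);
    assert_eq!(data, vec![1, 2, 3, 4]);

    // Handing data to another binding transfers ownership:
    let moved = data;
    // `data.len()` here would not compile: `data` was moved out.
    assert_eq!(moved.len(), 4);
    println!("ok");
}
```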
Excellent support for threads; it's all built in. I love the functional patterns like iterators, closures, generators, those kinds of things in a systems language, right? I mean, this is not Python, this is not, you know. So it has a very good standard library. You don't have to rely on standard template libraries or custom vendor libraries. All of this stuff is maintained by the community, and it's rigorously tested and performance-analyzed. You have iterators, generators, closures. You often do feel like you're doing Ruby programming with this, because it's influenced by a lot of other languages. First-class support for writing tests: you have attributes for functions, and they become tests automatically; the tools do it for you. If you want to do documentation generation, you have first-class support: you just type this one command and it opens up a browser with a rendered HTML page, with sequences of text taken from your code. It's got a really good foreign function interface. So you want to interop with C, you want C to interop with Rust: very good support for that. rustup is this one tool. With C, C++ or any language, right, you'll have a route to get similar functionality, where you want something like RubyGems or PyPI, an accumulation of crates from other people, et cetera, et cetera. Rust just makes this part of the language, right? You have a tool which you can use to compose lots of stuff, et cetera, et cetera. How much time have I got? I think I need to rush. 17 minutes. 17? Lots of time. Chill out. So yeah, installing Rust is painless. You have this rustup tool. You download it using curl, run it. It'll create a new sysroot for you and install Rust. Everything is ready. There's really good support for switching your cross-compilation target. That was really handy for me. It's bound to LLVM and Clang, I should have mentioned that. If LLVM supports a target backend, then chances are that you can just get cross compilers for it using just rustup.
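The built-in thread support and the atomics with explicit memory orderings mentioned earlier combine like this. A minimal sketch of my own, assuming nothing beyond the standard library:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    // An atomic counter with an explicit memory ordering, chosen
    // in the language itself rather than via inline assembly.
    let hits = Arc::new(AtomicUsize::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let hits = Arc::clone(&hits);
            thread::spawn(move || {
                for _ in 0..1000 {
                    // Relaxed is enough here: we only need the
                    // count, not ordering with respect to other
                    // memory accesses.
                    hits.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap(); // join() is also a synchronization point
    }
    assert_eq!(hits.load(Ordering::SeqCst), 4000);
    println!("total: {}", hits.load(Ordering::SeqCst));
}
```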
You just say rustup, install this target, and then when you compile your code, it'll generate code for that particular target, which is really easy to do, right? If you want to override what LLVM's default assumptions are for a particular set of compiler flags, you can create a JSON file and provide overrides, and the tooling will just look at that instead. There's this thing called Cargo, which is a package manager, and this is what I meant by things like RubyGems, et cetera. It's very easy to use semantically versioned lists of what are known as crates, which are like libraries, modules, that kind of stuff. And you can be guaranteed that if you pass somebody a specification file for Cargo for a particular bit of software you've written, they will be able to recreate that right down to the exact versions of all the crates involved, without you having to intervene at all. Golden. There's a central package repository called crates.io, and there's a lot of work put into trying to reduce compilation times. There's lots more work remaining over there. This is the sequence I chose to learn Rust. I'm just mentioning it here in case it helps others. There is a really nice book the community has written, called the Rust book. It's available in print form. It's also available on the internet. I read that first, then I read it again, because the first time I kept interrupting myself. So the second time I had a calm reading. Rust by Example is really good if you just want to go straight to: I have a specific problem, tell me how to solve it. The Rustonomicon provides a lot of detail on some of the internals of how unsafe works in reality. And then there's the Rust reference, right? Interestingly, there isn't any Rust language specification yet, but there's a working group being created to attack that problem, and I'll revisit that if I have time. So, was Rust genuinely useful for implementing a microkernel? I think I can say yes.
I particularly liked the benefit of unsafe, right? It helped me localize the tricky bits of code. The expressiveness: trying to explain a complex bit of system software was, in my opinion, easier. Interop with assembly code was a breeze. GCC has this naked attribute; it allows you to write functions without prologues and epilogues, and that's often very useful. And I appreciated the fact that I can just use a naked decorator, and it just worked. Synchronization code was easy to do because of the memory consistency model specification, and the module subsystem was great. Basically, the kernel in Redox is a library module called kernel, and you can write an application that links against it, and all of that is hidden by the tooling for you. So, what's next for Rust and ARM before I finally get to Redox? Thank you. I'm the rep for ARM in the Cortex-A Embedded Working Group, and we are trying to do some bare-metal crates over here that allow people to do bare-metal Rust programming for Cortex-A designs. So if you want to write a bootloader or a secure monitor or something like that, you can just use this crate, and it has abstractions for all of the instructions of the architecture and the system control bits. There's a very nice, very active Cortex-M Embedded Working Group, which has been there for a while, and they've done something similar for Cortex-M. The Rust language specification working group: I want to try and explore ways in which we can help the Rust community come up with, I won't say ISO-like, and I won't even say formal, specification, because that means something different. But what I want is: I want to go to a compiler guy, and there are compiler guys in ARM I go to, and when I tell them about Rust, they say, show me the language specification, and there isn't a language specification. There's a psychological ripple effect that kind of situation has. And then there's the RustBelt project that I want to involve myself with if I can.
That's a formal verification of an intermediate language representation that Rust has, called MIR, and if they succeed in what they're doing, you will have enhanced confidence in the correctness of Rust. Finally, okay: Redox. I hope you appreciate that I needed to seed your mind with that literature before I could come to this, right? This is Redox. It's not just a command-line thing; there's a complete suite of applications, including shells, a POSIX-compliant C library, a windowing toolkit, frame buffer drivers, yada, yada, yada. That's what it is. It's got a simple browser that often doesn't work, but they're trying to fix that, editors, all of that stuff, right? It's got a shell called Ion that was written by the community for Linux, but it runs under Redox. It's an MIT-licensed Rust microkernel with a reduced set of Unix system calls, implementing as much as it can in Rust. There's a really nice C library written in Rust called relibc. Sorry, yet another C library, but the first one, I believe, written in Rust. I don't know if I have a slide about that; I'll come to it later. Why do they call it Redox? Because rusting is oxidation, and oxidation involves this chemical reaction called redox. That's why they call it Redox, and Redox kind of sounds like Unix, okay? That's the theory, if you try really hard. So the aims were: leverage Rust; use idiomatic Rust to make complex system concepts easier for lay programmers and improve the scope of people coming into the project; leverage existing software, where basically the idea is, just rebuild your code against this other C library and it should run here, okay? And then cover a wide range of target domains. The guy who wrote it, I'll just come to that, his focus was desktop. By the time I started supporting this with a few patches, they wanted to go down the embedded way, and the long-term goal is to target servers. Will they get there? Time will tell.
So it's written by a guy called Jeremy Soller; he's become a friend. He wanted to learn how computers work, so he wrote a lot of assembly code and then basically started getting fed up with the problems there. He discovered Rust and kept writing incremental Rust code, and in the end shared it with a friend who put it on Reddit, and after that there was no looking back, because that's what happens with Reddit, right? There's been serious development ever since. So there's an EFI OS loader for x86_64 at present, there's the C library I mentioned, the library has thread support, they've written a simple file system called RedoxFS, there's a small but growing driver library, and there's a pretty significantly growing list of applications, actually; somebody did a ScummVM port the other day and played old games, it was awesome. So Google supported them in 2017 and made Redox self-hosting. In 2018 they didn't, but people had started getting interested on Patreon, and they gave Jeremy enough cash for him to create a Redox Summer of Code instead, so he said, why not? A lot of things happened over there; that's roughly when I got involved, and I was like, okay, give me a student and I'll try and help them. These are just screenshots of some of the packages that are available, some of the drivers that exist. Every week there's something new. This is the stack; it's a typical microkernel design where the kernel does very little and most of the resource management is done in user space, right? I won't go into details here. I suppose the unique thing is that they're inspired by Plan 9's everything-is-a-file philosophy, but they are doing everything as a URL, which is interesting.
There are some cases where you kind of have some interesting outcomes, right? You don't have the semantic recursion where you have a device node on your file system at /dev/sda, which represents another indirection into the file system, where you could perhaps have a /dev with another device node; it just keeps it clean, right? You have a fully qualified URL and you don't have these oddities about special files, where /dev/null is supposed to indicate null, but what is the size of this file? I mean, we know what the answer is, but it's just cleaner. So you have URLs, and this is wrong, the USB one, but just assume it's correct for the moment; basically you have these fully qualified URLs to actually access services, and there's a protocol that's used between elements of the file system, user space, the kernel, different execution contexts. They're called schemes, and they're very easy to write, actually. Written in Rust; you have primitives for all of this stuff. You have an interesting containerization scheme where basically you have this null namespace, and you can, at a fine-grained system call level, socket level, restrict the capabilities you want a particular process to have, so that's their capability model. There is SMP support, but the scheduling algorithms are very simple right now, and that's probably a good thing. You'll see. I wanted to do a lines-of-code thing, so there's a utility called loc, written in Rust, you should check it out. You point it at a code base, it does some inference and spits out some stats, but basically the upshot is that the kernel is roughly in the region of 8,000 to 9,000 lines of code, give or take, right? Virtualization and Redox: there is no support for virtualization at present. I don't think there will be support, but we'll see.
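To give a feel for how little code a scheme handler needs, here is an illustrative sketch of one serving a URL like `hello:world`. The real Redox `Scheme` trait (in the `syscall` crate) is richer and shaped differently; `HelloScheme` and these method signatures are invented for illustration.

```rust
use std::collections::HashMap;

// Invented trait for illustration; the real Redox `Scheme` trait differs.
trait Scheme {
    fn open(&mut self, path: &str) -> Result<usize, &'static str>;
    fn read(&mut self, fd: usize, buf: &mut [u8]) -> Result<usize, &'static str>;
}

struct HelloScheme {
    handles: HashMap<usize, Vec<u8>>, // per-handle pending bytes
    next: usize,                      // next handle number to hand out
}

impl Scheme for HelloScheme {
    // `open("world")` corresponds to a client opening `hello:world`.
    fn open(&mut self, path: &str) -> Result<usize, &'static str> {
        let fd = self.next;
        self.next += 1;
        self.handles.insert(fd, format!("hello, {}\n", path).into_bytes());
        Ok(fd)
    }

    // Drain pending bytes into the caller's buffer, like a read syscall.
    fn read(&mut self, fd: usize, buf: &mut [u8]) -> Result<usize, &'static str> {
        let data = self.handles.get_mut(&fd).ok_or("bad fd")?;
        let n = data.len().min(buf.len());
        buf[..n].copy_from_slice(&data[..n]);
        data.drain(..n);
        Ok(n)
    }
}
```

In the real system, a scheme daemon registers its name with the kernel and then serves calls like these over the scheme protocol; the point here is just the shape of it.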
I think what the community wants to orient around is the philosophy that you should just rebuild your software against relibc if you want to run it, and that's the virtualization play, rather than supporting running unmodified software. A lot of people are okay with that, right? If you're going to the overhead of working with a fully new language, you're probably okay with at least rebuilding software. relibc is POSIX-compliant and uses a tool called cbindgen for foreign function interfacing with C code. It targets Redox and Linux. There's a new project called RINE that enables running Redox applications under Linux using some tricks with relibc. It's API compatible with the Linux system call set, and for a given architecture, obviously, it's ABI compatible as well. And this is what makes it possible to run most programs that have been known to run under Linux or the BSDs without too much pain on Redox. The Rust compiler is built for this particular triplet, that's then associated with relibc, and that's how the toolchain starts supporting all of this stuff. Now, I don't know how I'm going to do this super quick, but I'll try. This is basically a list of everything I did to make the port happen, right? I identified what I was after in terms of which particular architecture revision and execution state to target; I wanted to keep things simple. I chose QEMU's virt machine emulation for AArch64 as a platform target; that's the configuration. I wrote down a scope, put it on the Redox GitLab, and waited for people to tell me something. No one did, so I just went ahead anyway. I started speaking with the ARM guys, because at ARM, like with most silicon manufacturers, you have to get permission to do this kind of stuff. I started playing with the Rust compiler, which is rustc plus LLVM, and looked at what was done to add support for one particular target triplet.
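The C-interop direction relibc depends on can be sketched like this: a Rust function exported with an unmangled C ABI, over which cbindgen can generate a C header. The function `byte_sum` is an invented example, not a relibc function.

```rust
// A hedged sketch of Rust-to-C interop: an exported, unmangled C-ABI
// function. Running cbindgen over a crate containing this would emit a
// C prototype along the lines of
//     uint32_t byte_sum(const uint8_t *data, size_t len);
// (`byte_sum` is an invented example, not part of relibc.)
#[no_mangle]
pub extern "C" fn byte_sum(data: *const u8, len: usize) -> u32 {
    // SAFETY: as in C, the caller must supply a valid pointer/length pair.
    let bytes = unsafe { std::slice::from_raw_parts(data, len) };
    bytes.iter().map(|&b| u32::from(b)).sum()
}
```

C code compiled against the generated header can then call straight into the Rust implementation.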
Then I said, okay, I'll write something similar for aarch64-unknown-redox; rinse, repeat, until I got what looked like ARM assembly. I ran into trouble with thread-local support, because I don't think anyone survives thread-local support the first time in any operating system implementation; it's a pain in the nether regions. You have separate instructions for doing TLS accesses at the two separate exception levels on ARM, one for the kernel, one for user space, but LLVM would only ever generate code for user space, even for TLS code that I was compiling on the kernel side, so it was using the wrong instructions. So I had to modify LLVM, which was not bad, actually. Then I came up with a debug flow involving GDB and QEMU's tracing. There's a guy at ARM who's a QEMU maintainer, and with his help I figured out how to make GDB and QEMU give me some really good stats for debugging and tracing. I created a boot flow with U-Boot, because I wanted to stay as close as possible to the experience people would have on an embedded target with QEMU. I used Ethernet and its TFTP emulation, and I used FDT to transfer environmental information from the boot environment to the kernel. I replicated the x86_64 kernel code structure, stubbed everything out, got a linker script done up, got a linkable kernel image that wouldn't run, verified that execution was reaching the kernel, started writing early init code, did lots of MMU song and dance, and in the end managed to jump to Rust code. I fleshed out a recursive paging implementation for AArch64. This was fun. It's a trick typically used on x86_64 to have the MMU help you when you want to do page table updates, rather than you walking page table hierarchies yourself. But it involves tricks with the selection of virtual addresses and the way in which you program the MMU, and nobody had done it for ARMv8, so I spoke to some of my friends in the ARM kernel team.
They said, it's probably possible, try it out. I tried it, and it kind of works, but it's fragile; I will probably replace this with proper linear paging at some point. So, mapping some MMU, yeah, yeah, yeah. I mapped in a diagnostic UART, got Hello World, and was quite pleased with myself. Then a fairly random selection of bare driver support. All the while, I was making sure GDB kept working with simple tests: stack frame unwinding, no symbol support, but at least I have a stack trace, and that's helpful. I added AArch64 support to relibc for the system calls, for all of the interop with the rest of the kernel, and context save and restore. I got init, the user-space program, to build, tried to run it, failed, tried hard to figure out what was going on, hit issues with ELF parsing; eventually it said hello, which was good. Then I fleshed out the supporting system calls and got some optimizations in there; lots more work required here. Got init scripts going, context switching code, rinse, repeat, everything's fantastic. Interrupt controller, timers, scheduler hooks, FDT drivers, changing all of the raw drivers I had written to use FDT, so it can be a little more dynamic. And then I started using the live disk. FDT is the flattened device tree; it's a way of abstracting the kernel away from platform-specific things like interrupt numbers; speak to me afterwards and I'll tell you. Then I simplified the live disk support, so you basically have this structure that's easy to build and run and does not involve using a disk controller just yet, because I haven't finished that. And then login shells worked. I got getty going, and Ion, without too much trouble, and then everything just worked. I added some CPU identification stuff, and then I drank a lot of beer, because I like beer. So, current status: there's a clean-up exercise underway, because I've broken it as of last night.
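The recursive paging trick mentioned above boils down to address arithmetic: because one top-level slot points back at the top-level table itself, every table index in a virtual address shifts down one level when you look up through that slot, and the leaf index becomes a byte offset into the table page. A sketch, assuming a 48-bit VA, 4 KiB granule, four-level tables, and recursive slot 511 (hypothetical constants, not the actual Redox kernel code):

```rust
// Hypothetical recursive-slot index in the top-level table.
const RECURSIVE_SLOT: u64 = 511;

/// Virtual address at which the leaf page-table entry mapping `va`
/// becomes visible, under the recursive mapping. Shifting `va` right
/// by 9 moves every 9-bit table index down one level; the original
/// leaf index lands in the offset field, scaled by the 8-byte entry size.
fn pte_address(va: u64) -> u64 {
    let hi = 0xFFFFu64 << 48;                        // kernel-half upper bits
    let shifted = (va >> 9) & 0x0000_007F_FFFF_FFF8; // drop page offset, keep 8-byte alignment
    hi | (RECURSIVE_SLOT << 39) | shifted
}
```

The kernel can then read or write that address directly to update a page table entry, letting the MMU do the table walk instead of walking the hierarchy in software.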
Code is continually being checked into AArch64 branches of the various repositories in the Redox GitLab, and there's a documentation rewrite. We have some people working on silicon; it's taking time, but it will happen. This is stuff that we want to do in 2019. I won't go into the details; read it, ask me if you have questions. These are the generic Redox things, and these are the things I want to do for ARM if I have the time. These are details about the Redox community. They follow the Rust code of conduct, and they stick to it; it's actually a very pleasant community to work with, I should add. The guy who wrote it boots it on literally dozens of laptop families, because he works for System76; they make Linux laptops, right? So he uses all of those as his test targets. There's a guy from my team called Kastin Heitzler. Is he here? He's not here. He told me, and sorry, this is not AArch64 and it might crash, but just hold on, that the only way to check whether an operating system is truly complete is if it runs Doom. And it runs Doom. This is a rebuild of Freedoom, or PrBoom, against relibc, and it's working. So I don't know where Redox will go. Frankly, I'd love to see it go places. It's given me a foundation for some of the architectural exploration I wanted to do, which I think is cool. But I think it's made a lot of progress in a very short amount of time compared to a lot of other microkernel stories, and there's something to be learned from that, okay? That's it.