All right, thank you. Thank you, Candice. Thank you, Shuah. Thanks to the Linux Foundation for hosting me here; it's a pleasure. So let's get started. My name is Wedson, I work at Google, and I'm here today to tell you all a little bit about how one can go about writing Linux kernel modules in Rust. I am one of the maintainers of the project. There are other people working on this too, of course. Miguel, for example, is another maintainer, and he actually presented two sessions, which are more introductory Rust sessions. If you have the time, I recommend that you go and watch those two later on. So here's the agenda for today. I'll first show you some commands that I'd like you to run on the VM to get the latest version of the code, because there have been some improvements recently. Then I'll give you a little bit of background information about the project, why we're doing this, and a little bit about kernel development and how the workflows function. Maybe some of you are already familiar with all these things, so I'll try to go quickly there. And then we're going to get to actually writing the module. At that point, I'll switch to the console where I'm connected to the VM that we shared ahead of time. For those of you who downloaded the VM and want to follow along and replicate what I'm doing, I'd be happy to try to answer questions if you run into any issues there, and in fact any questions in general. And then some conclusions and things to look forward to. So, getting started: the first thing I'd like you folks who are going to follow along to do is boot the VM and log in as a guest. That's the first thing. And then I'd like you to run these commands. The first one is really optional.
It just sets up a screen session, setting Vim as the editor for when we're committing things. The other commands are for fetching the latest version; I do a depth-1 fetch to reduce the amount of data to download. I'm going to run these commands here on my VM too. What I did was boot the very same VM that you all have, so there's no difference. Maybe I should switch now so you can see the commands as I type them. Let me switch the screen; there you go, I hope you can see my console now. Let me run those commands. So I just logged in, did the configuration, and changed into the directory where the source code is. Now we fetch the latest version, with depth 1. It's fetching. Now that we have it, I'll do a checkout of that; let's wait a bit. This can take some time, since it's in a VM and there are lots of files. After the checkout, the next commands download the latest version of the Rust compiler: between when I created this VM and now, there was a new release of the compiler which removed some unstable features that we were using, so it's good to switch to the new version. Then we run the configuration step to configure the kernel with the options that we want, and then we compile. Once we leave it compiling, we can switch back to the presentation and I'll walk through the background information. Actually, since this is taking a while, I'm going to switch back to the presentation now. While my checkout is going on, let's go back to the background; once the checkout completes, I'll continue the commands there.
So the first thing I'd like to tell you all about is this project that we call Rust for Linux. The source code is available at this link; anyone can go check out the source code and contribute if they so choose. The goal of the project is really to make Rust a first-class language for Linux kernel development. One thing that we keep being asked is why we want to do that, and there are basically two reasons, plus a third one listed here that is more of a precondition for this to possibly be accepted. The first is memory safety. Rust is a memory-safe language, and there are researchers who have actually worked on formally verifying this property of the language. What this gives us is a reduced number of memory vulnerabilities in new code. The idea is that once Rust is made an official language in the Linux kernel, new code could be written in Rust, and this new code would have fewer vulnerabilities than if it had been written in C. Not because programmers are better or worse; it's just that the language will catch things for you sooner. This also relates to productivity: the type system that Rust offers gives us the ability to catch a bunch of potential issues at compile time. Let me give you a brief example of this productivity improvement; it's very simple. Say we're calling a function in C that takes a string as an argument, a pointer of char * type. Let's say it's a registration function of some subsystem, and we'll talk a little more about what that means. In C, and in fact this applies to any sort of pointer, when you call a function, we know that for the duration of that call, whatever is passed as the argument must remain valid from the point of view of the caller. But the question that then arises is: when the function returns, what happens to that pointer? We have several options.
In this case of a string with a name, it could be that the callee, the subsystem with which we were registering, made a copy of it, which means that once the function returns, we are free to reuse the memory. For example, if it's stack-allocated memory that we did an snprintf into, it's fine for it to be freed. So that's one option: the callee makes a copy of it. The second option is that the callee holds onto the pointer but doesn't take ownership, which means that the caller is responsible for ensuring that that piece of memory remains valid for however long the registration lasts. And there's a third option, which is that the callee takes ownership of the pointer, which means that the caller is not supposed to use the pointer anymore and the callee will eventually free it when the time is right. Now in C, when you're calling a function, there is no indication from the type system of which behavior to expect from any given function. And all three options that I've told you about actually exist in the kernel; you can find functions in the kernel that behave in any of these three ways. So it's not like there's a preferred way that everybody follows by convention, at least not in the kernel; there are all sorts of different implementations. Now in Rust, you actually get this information from the type system; in fact, it enforces the rules for you. For example, in the case where the callee holds onto the pointer, if you actually try to free the memory before it's time, the compiler will say: this pointer that you're passing needs to outlive the registration that you've made. The compiler catches that at compile time. If there's a copy, then it's the easiest case: there's nothing to enforce.
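To make the three calling conventions concrete, here is a short sketch in plain user-space Rust (not kernel code; the function and type names are made up for illustration). Each convention becomes a different signature, and the compiler enforces the corresponding rule:

```rust
// Option 1: the callee copies the data; the caller may reuse its buffer.
fn register_copy(name: &str) -> String {
    name.to_string() // the "subsystem" keeps its own copy
}

// Option 2: the callee borrows; the registration cannot outlive `name`.
// Freeing (dropping) `name` while the Registration exists is a compile error.
struct Registration<'a> {
    name: &'a str,
}
fn register_borrow(name: &str) -> Registration<'_> {
    Registration { name }
}

// Option 3: the callee takes ownership; the caller cannot use `name` again.
fn register_owned(name: String) -> String {
    name // eventually dropped (freed) by whoever ends up holding it
}

fn main() {
    let copied = register_copy("scull0");
    let name = String::from("scull1");
    let reg = register_borrow(&name); // `name` must outlive `reg`
    println!("{} {}", copied, reg.name);
    let owned = register_owned(String::from("scull2"));
    // The String passed to register_owned was moved; using the original
    // variable after the call would not compile.
    println!("{}", owned);
}
```

In C all three functions would have the same `char *` signature; here the signature itself tells the caller which rule applies.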
But if you pass ownership into the callee, then the compiler also enforces that once the function returns, you're not allowed to use that piece of memory anymore. We'll actually see an example of that in our code. So this is an example of productivity, and not only at the time that you're writing the code: if you introduce a bug this way, it will manifest itself later on in unexpected ways, because you may be corrupting memory. So you also have to spend time debugging, trying to figure out where the bug came from, what caused this memory to be overwritten. This is the sort of productivity gain that we see with Rust, not just at the time you write the code, but also later on, once you start testing and debugging your code. And then the third thing is that the performance is comparable to C. All these things that we say we enforce are enforced at compile time, so compilation does take longer, but the generated code doesn't carry that cost. This is important because other languages that claim memory safety and type safety, at least the widely used ones, mostly do this with a garbage collector, which means an extra cost at runtime, and potentially freezing threads and CPUs so it can do the garbage collection. That is not the case for Rust; the enforcement is done at compile time. Miguel's sessions actually cover some of these things. So that's it for Rust for Linux and why we're doing this. Now let's talk a little bit about the differences between user and kernel space. I think most of us are accustomed to user-space processes, and the idea here is that each one of them has its own... There are a couple of questions, if you want to field them in the Q&A. I can read them out for you. Yeah, if you could.
Kindly, do you have articles or books about writing Linux kernel modules in Rust? Would that be the link to the GitHub, or do you have others? Yeah, the best we have at the moment is GitHub, and if you look at samples/rust, there are lots of examples there. There are no books or articles yet; that's something that will come in time, it's not ready yet. Okay, then I answered the question with the GitHub link that I already have. Can we debug our Rust code with in-target debugging, by using JTAG or USB? Yeah, so once the binary is compiled, there's really no difference between Rust code and C code. If you attach to some device with JTAG, you can see the assembly instructions and step through them. The symbols that are emitted are also in the same format as C. So yes, you should be able to debug with any debugger. What I usually use is GDB, and there will be a mention of this later on. The rest of the questions in the chat are more about the VM not connecting; it looks like people are helping each other. So you can continue if you don't have any questions. Yeah, I see some questions about the Google OS Config agent failing. I think those are fine; you can ignore those. What happened is that this VM was created on a Google Cloud machine and then I copied the image out of that, so maybe it's trying to talk to some Google services, but it should be fine to ignore those. Okay, so let's go back to the presentation. What I was saying was that we have these processes and they all have their own address space, meaning that if process one goes to some memory position, say address one thousand, and writes something there, and then process two reads that same memory address one thousand, it won't see what process one did, because these are virtual addresses and they point to different physical memory, except of course if they arrange for the same memory to be mapped in both.
So by default, nothing is shared; if they arrange to share memory, they can communicate that way, but by default that's not what happens. Processes then make system calls into the kernel. The kernel, and this is what I'm trying to indicate with this box, has its own address space, and inside it you have components: subsystems like file systems, networking, the scheduler. These are all in the same address space. Another thing the kernel can do is, if it's called in the context of a process, it can access that process's address space while in that context, and this is going to play a role when we get to writing our module. Then underneath that, you have hardware or a hypervisor, depending on whether you're on a physical machine or virtualized. The kernel doesn't allow user-space processes to go straight to the hardware; they have to go through the kernel. There are, of course, exceptions, ways in which the kernel can arrange for processes to go straight to the hardware, but those are exceptions. So this is the separation of user space and kernel; I think most of you are somewhat familiar with this already. Now briefly about the lifetime of a kernel module, because it's slightly different from a regular program in user space. If you think about a console program, like cat or ls, what they do is: there's a main function that gets called, they perform something, and eventually they return, and that's the end of the process. Modules are slightly different. During boot, if it's a built-in module, or when the module is loaded, if it's a loadable module, there's an init function called, which is similar to main in a user-space program. But instead of just doing whatever it wants to do, what it really does is go to some subsystem and register with that subsystem.
And then the subsystem records that registration (or the registration fails), and init completes, which is different from user space, where main runs for the duration of the program. The initialization of the module just registers with the subsystem. Then some actor, which could be hardware, or a user from user space doing some action, triggers some action on the subsystem. The subsystem figures out who needs to handle this, and if it's this module, a callback gets issued to the module; the module does whatever work it needs to do, and eventually returns to the subsystem, which returns to the actor. So basically the idea here is that at steady state, after registration, this part can actually happen several times: some action happens, the subsystem figures out that our module needs to handle it, and calls it. Modules work this way: they register, and then they get these calls occasionally. And then, if it's a loadable module and it can be unloaded, there's an exit call into the module; the module unregisters from the subsystem, and that's the end of it. So we have an init that registers, a set of callbacks while the module is loaded and running, and eventually an unload that results in unregistration. So here's what we're going to build. It's a kernel module, so it's going to live within the kernel. What we're going to do is create these virtual devices called scull: scull0, scull1, scull2, scull3. The name actually comes from a driver in the Linux Device Drivers book. So the idea is: the module loads, registers all these devices, and provides some callbacks. And then some process comes around and does, for example, a cat into scull0: an open is issued for scull0, then a write, and this write, you see I added some data here, carries some data.
Okay, so when that write comes in, the data is actually in process 1's address space, but the kernel makes a copy of it into the kernel address space and stores it, in this case in scull0. And then a close comes from the process, and the process goes away. Now note that the data stayed behind: even though the process was destroyed and there is no more process 1, the data that it wrote is still here in scull0. And then process 2 comes along and does a cat; now it wants to read from scull0. There's an open, then a read, and this read, when it completes, actually returns the data that process 1 had written before, even though process 1 is gone now. So this is what we're going to build: a module that creates these virtual devices; processes can write to these devices, whatever they write stays there, and then other processes can read it. But one thing I should say is that only the last write stays there. And this is different from a regular file system, because we're not persisting anything: if we reboot the machine, all the data we had stored here goes away, because it's just in memory; it's not persistent at all.
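The device semantics just described can be modeled in ordinary user-space Rust (this is only an illustration of the behavior, not the kernel code we will write): each device keeps only its last write, in memory, independent of the lifetime of whichever "process" wrote it.

```rust
use std::collections::HashMap;

// A toy in-memory model of the scull devices: device name -> last data written.
struct ScullModel {
    devices: HashMap<String, Vec<u8>>,
}

impl ScullModel {
    fn new() -> ScullModel {
        ScullModel { devices: HashMap::new() }
    }

    // A write replaces the stored contents: only the last write survives.
    fn write(&mut self, dev: &str, data: &[u8]) {
        self.devices.insert(dev.to_string(), data.to_vec());
    }

    // A read returns whatever the last writer left behind, if anything.
    fn read(&self, dev: &str) -> Option<&[u8]> {
        self.devices.get(dev).map(|v| v.as_slice())
    }
}

fn main() {
    let mut scull = ScullModel::new();
    scull.write("scull0", b"hello from process 1");
    // "Process 1" exits here; the data stays until reboot (or drop, in this model).
    scull.write("scull0", b"second write wins");
    assert_eq!(scull.read("scull0"), Some(&b"second write wins"[..]));
    assert_eq!(scull.read("scull1"), None); // nothing written yet
}
```

The real module stores the data in kernel memory and exposes it through the character-device callbacks, but the observable behavior is the same as this model.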
Briefly, the development workflow: we built a BusyBox image, which you can think of as a tiny distribution, and then (this is the icon for Neovim) we edit the code, make some changes, compile it, and we get a Linux kernel image out of that. Then we use QEMU: we feed QEMU the Linux kernel image and the BusyBox image that we built, and this allows the code to run. And then, if we have bugs and we need to debug something, we can use GDB (this is the logo for GDB, actually) to attach to QEMU. We don't really attach to QEMU itself; rather, the emulator has a GDB stub that allows us to attach to the VM, so we can pause the machine, set breakpoints and watchpoints, step through, get call stacks, and things like that. And then we come back to modifying the code, compile it, and run QEMU again. So this is the flow. It's slightly different from user space, where once you compile your change you can just run it directly; here we can't really run it directly, so we need an emulator, similar to when you're developing for a physical device. And this is what we're going to do. Actually, the checkout that I had started completed some time back, so I'd like to start building the code, because soon we're going to need it. Let me switch back to the VM and issue the commands. We're back on the VM; let me find the commands again. Actually, in the source tree we can go to the documentation and copy the commands from there; this is the step to set up the compiler. Now it's downloading the latest version of Rust; it shouldn't take long. But we have a couple of questions, if you'd like to field them. Sure, while I'm waiting for this, yeah.
Okay, the first one is: I'm a senior-year student, I'd like to build something related to the Linux kernel; is there something you can suggest that's beginner-friendly? Beginner-friendly... one thing I suggest to that person is to join our Zulip chat server, where people can come speak to us, the other maintainers in the project, and other people involved, and we can discuss their options. They can also go to the project page, where we have issues and bugs filed, and some of them are tagged as good first issues, so that's actually a good source of work for beginners; then of course they can do more complicated stuff after that. Next question: the commands you're typing there, are they part of the cheat sheet in Google Drive? No, these commands are actually at the start of this slide deck, which was shared in the beginning; if you go there, you will find them. The second question we have is about productivity: following Rust's rules makes code bigger than the respective C code, which I agree is needed to achieve memory safety, but wouldn't this bring productivity down, at least initially? No, actually. I don't have any C code in the slides, but one thing that I can tell you in terms of code size is that Rust actually condenses it: the code is much smaller. One thing that makes it smaller is the error handling, and we'll see that very vividly. There are two aspects of error handling that are important in Rust and make things much smaller. In C, what you have to do is assign the return value of a function to some variable, and then check if it's less than zero; if it is, you do something, and that something may be some cleanup and then a return, or maybe a goto to some label that does the cleanup and then exits. So this is the first aspect: there's this assignment, and then this if condition.
In Rust, we actually have this syntactic sugar, the question mark, which does that check for us. If there's a question mark at the end of an expression, the expression is expected to be something that may return an error, and if it is an error, the function returns right away. So we already have the elimination of those if statements, at least from the code itself. The other aspect is that the Drop trait in Rust allows us to clean up automatically. So when we hit a question mark and return right away, everything that we had done that needs to be undone is automatically undone. All these error paths that we see a lot of in the kernel, where we have a label, clean something up, another label, clean another thing up, another label, clean another thing up, and then return some error code: that pattern means three things were initialized, and on the error path they need to be uninitialized in reverse order. So as developers we have to do this in reverse order, be careful about it, and make sure we never forget to undo anything. Rust does away with all of this: we don't have to do any of this stacking and unstacking of state; it's automatic. So the code is actually quite a bit smaller, as you'll see, and of course there are things people need to get used to, but I think that's easy enough to get to. In my experience, the code is actually smaller. One thing that does happen is that when we create the abstractions, those are a bit more complicated. But the idea is that we build the abstractions once, and then people just use the abstractions; the implementation of these abstractions is not something you touch most of the time. Okay, somebody pasted this in the comments here, and they said thank you; thank you for that. Chris, are there any other questions?
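The two points above, the `?` operator replacing `if (ret < 0) goto err_...;` chains, and `Drop` unwinding state automatically in reverse order, can be sketched in user-space Rust (the `Resource` type and error code are made up for illustration):

```rust
// A resource that announces its own cleanup, standing in for anything a
// kernel driver would have to undo on an error path.
struct Resource(&'static str);

impl Resource {
    fn acquire(name: &'static str, ok: bool) -> Result<Resource, i32> {
        if ok { Ok(Resource(name)) } else { Err(-12) } // -12 ~ -ENOMEM, for flavor
    }
}

impl Drop for Resource {
    fn drop(&mut self) {
        println!("released {}", self.0); // runs automatically, reverse order
    }
}

fn init(fail_third: bool) -> Result<(), i32> {
    let _a = Resource::acquire("a", true)?;
    let _b = Resource::acquire("b", true)?;
    // If this fails, `?` returns early and `_b` then `_a` are dropped
    // automatically: no goto labels, no manual unwinding in reverse order.
    let _c = Resource::acquire("c", !fail_third)?;
    Ok(())
}

fn main() {
    assert_eq!(init(true), Err(-12)); // prints "released b", "released a"
    assert!(init(false).is_ok());
}
```

The equivalent C would need three labels and careful ordering of the cleanup calls; here the compiler derives both from the declaration order.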
Yes, there are a couple more questions, and I don't know if this is relevant to this presentation, but I'll read it out anyway. Kindly, can you give us more code about writing kernels, including MPI, message passing, and parallel processing? More code... I mean, I'm going to show you some code, I'm going to write some code here, and I'm not sure there will be anything specific to what the person asked, but join Zulip or send us an email on the mailing list, and we'll try to find you something. I'm sorry if I don't have anything now. That's it, thank you. And there is another one: can I build Rust for Linux using a Rust compiler as shipped by a modern distro? Depending on the latest and greatest compiler from rustup seems odd. Yeah. So the problem at the moment is that we actually rely on a bunch of features that are still unstable, and by unstable, what it means is that they are subject to change in the future. So we actually have to track a specific version, and the idea is that once we get rid of all these unstable features, either by stabilizing them with the Rust folks or by finding different ways of doing things, then we can stay pinned to one version. And when we reach that point, we can just use whatever that version is, and those will also be available in distributions. So at the moment we can't use a distro compiler, but in the future we will be able to. Okay, so let me... was there anything else, Shuah? Oh, looks like there is one more. How do I join the Zulip server, for beginners? Oh, yes, and I posted the links to Miguel's two sessions in the chat, or you can also check them on our event webinar site, so go ahead and do that. Somebody is also asking for the link to the Zulip chat; I think we can supply that later. Let me see what other questions we have here. Looking at drivers/char/hw_random, the BCM driver, wondering if there is a new coding style for Rust.
Okay, so I'm guessing the question is: just like the C coding style guide we have, is there a Rust coding style guide for the kernel? For Rust code itself, we just follow the rustfmt style; the style itself is the same, there's nothing specific to the kernel. We do have some conventions that we follow. For example, if we have unsafe functions, then we require that the documentation block of the function contains a section that describes what the safety requirements are, in the sense of: what are the preconditions the caller must satisfy to safely use that function. And then when people actually call these functions, we require an annotation there that matches the requirements and explains why the caller satisfies them. So we have conventions like this, and those are described in the Rust documents in the tree, and Miguel actually covered those too in previous sessions. And of course, we have design guidelines, where we say we don't want people to go straight to C functions, because those are unsafe. What we'd like to see is zero-cost abstractions built around the C functionality that provide a safe way for people to call into that functionality. We've done that for a few subsystems, and in fact what we want to do over time is to work with the maintainers of each subsystem for them to build the Rust abstractions for their subsystem as well, so people can write Rust code for those subsystems. I'm going to switch to the presentation. Yes, there are no other questions, Wedson; go ahead. Let me switch back to the presentation. There are just a few more slides and then we'll go into the code; we're already running short on time. All right, so this is where we were; briefly.
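The safety-documentation convention just described looks roughly like this (a minimal user-space sketch; the function is hypothetical, but the `# Safety` section and `// SAFETY:` comment mirror the convention used in the kernel's Rust code):

```rust
/// Reads the `i32` that `ptr` points to.
///
/// # Safety
///
/// `ptr` must be non-null, properly aligned, and point to a valid,
/// initialized `i32` for the duration of the call.
unsafe fn read_value(ptr: *const i32) -> i32 {
    // SAFETY: the caller guarantees `ptr` satisfies the contract documented
    // in the `# Safety` section above.
    *ptr
}

fn main() {
    let x: i32 = 42;
    // SAFETY: `&x` is a valid, aligned pointer to an initialized i32 that
    // lives for the whole call.
    let v = unsafe { read_value(&x as *const i32) };
    println!("{}", v);
}
```

The point of the convention is that every `unsafe` block carries a written argument for why it is sound, which reviewers can check against the callee's documented requirements.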
So the kernel actually has lots and lots of configuration options, at least hundreds, maybe thousands. And what we want is to be able to boot quickly and build quickly. So what we've done was write a minimal configuration needed to build and run with QEMU and a BusyBox image. When we configure the kernel, we use allnoconfig, which says no to everything, together with this file that enables just what we need to boot with QEMU and BusyBox, plus Rust. Part of what we're going to do today is add another module, the scull module, to this, and we'll have to enable it; we'll see that in a second. And then the last slide before we go into the code has some information about the setup. I have tmux installed, and I have Neovim; these are optional. The reason I use tmux is that I want to have different windows, and you'll see how, because this is actually my setup; this is how I work on a daily basis. (Somebody unmuted and wanted to ask a question.) The reason I use Neovim instead of Vim or something else is that it has LSP support, which allows us to use rust-analyzer, which gives us completion and documentation of functions as we write code. So let's get into writing the kernel module now. We're going to do this in 16 steps. I'm not sure we'll have time for all of them, and that's okay, but I'll try to go through these steps with you here. And I've actually shared a link at the end of the slides to these steps already implemented, so you can look at that offline later on. So let's do one last switch back to the console and we'll get started. Okay, here we have the console; let me run tmux. It gives us the ability to have several windows: I have one window here and another window here.
So I have two windows, and the star here down at the bottom tells us where we are; I'll keep switching between these windows. The first thing we'll do: as I said, we have several config options in the kernel, so we're going to add a new one for the sample that we're going to write. The place we're going to go is samples/rust, which is the directory that I mentioned before. So let's go to the Kconfig file, which is the file that holds the configuration entries. I'll just copy one of these entries and paste it down here, and we'll rename it to SAMPLE_RUST_SCULL, and this is a string that describes it; let's say it's a scull module, a first scull module sample, and stop. Okay, so it's copied, pasted, and I changed some strings in it and saved it. Now we open the Makefile. That first Kconfig change just added an option which appears in the menu; now we change the Makefile to say: if this rust scull option is enabled, please include rust_scull.o in the build. Okay, and once we do that, this is what we have. And if we try to build the kernel from this... actually, it didn't fail, because the option isn't enabled yet. So let's enable it. make menuconfig brings up a menu that has the hierarchy of options for the kernel. If you press slash, you get a search, and if you type "scull" in it, it finds the sample rust scull entry and the description that we typed; it's number one here, and if you press 1, it tries to take you there, but it can't go there directly because sample kernel code isn't enabled. So we enable sample kernel code and go into it, then Rust samples, enable that and go into it, scroll down, and here we have the scull module that we've just added. We select it, then exit, exit, exit, and save. So now we've enabled it.
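For reference, the two edits described above look roughly like this, following how the samples/rust tree is laid out (the SCULL names follow the talk; the exact wording of the existing entries may differ in your checkout):

```
# samples/rust/Kconfig -- new entry, copied and renamed from an existing one
config SAMPLE_RUST_SCULL
	tristate "Scull"
	help
	  This option builds the Rust scull module sample,
	  a first scull module sample.

# samples/rust/Makefile -- include the object when the option is enabled
obj-$(CONFIG_SAMPLE_RUST_SCULL) += rust_scull.o
```

As the talk notes, nothing here is Rust-specific yet: a C sample would add the same two lines, just pointing at a `.c` file's object.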
And if we try to build it now, it's going to fail, because it can't find rust_scull.o; there's no source for it. In fact, in the steps we've taken so far, nothing is different from C; C would be exactly the same. So what we're going to do is create an empty file, and this is the one difference from C: in C you would have used a .c file here, whereas here it's a .rs file. But now we compile. See, there's a warning complaining that we're missing documentation; otherwise, the kernel compiles. So this was the first step in writing the sample module in Rust, and I'm going to commit it as step one. Okay, it's complaining that I haven't set up my email and name, so I'll do that. Okay, so now we have our first step. It's of course empty, so it doesn't do anything; we could boot this, but it's not interesting. So let's actually start writing the actual Rust code. Before we do that, one thing I'd like to show is that we have this make rust-analyzer target which, when it runs, creates this JSON file here, rust-project.json, which is used by rust-analyzer to find things: it has the source code locations and all the configs for everything. So I'll go to this other window, start Neovim, and open that file we created, rust_scull.rs, which is empty. The first thing we're going to do is similar to an #include in C. And if you see, there's already an improvement here, I feel, because when I typed the colons, it's already giving me completion options for things I can type. What I'm going to do is use the prelude: the prelude is basically a set of things that most drivers use. And then I start typing module!, which is how I'm going to define the module, and if you look here on the side, there's even an example of how to declare a module. So what I'm actually going to do is copy from that.
If I manage to copy from it, fine; otherwise I'll just type it. So what I do is specify a type, a name, and so on. If I try to compile this, it's going to complain; I'll run it here for you to see. It complains that there is no type called Scull, so I actually need to define the type. I'll just declare an empty struct called Scull for now. When we try to compile again, it complains about something different: it says the type doesn't implement the module trait. So let's do that. For traits, and as an introduction to Rust in general, I again refer you to Miguel's sessions, but basically the idea in Rust, and this is a pattern you'll see throughout kernel code written in Rust, is that usually we declare some type and then implement some traits on it. This is similar to what we see in C, where we have these operation tables, which are structs with function pointers in them; file_operations is one example. So there's a similar theme here in Rust, except the ergonomics are slightly different, more condensed and, I feel, easier to implement. So let's do this: we say we're going to implement the kernel module trait for Scull. And one thing we can do after that is invoke the code action, which among the options down here offers "implement missing members", and the skeleton appears automatically for us, which is neat. I'm going to remove this bit here because it's not needed. And if I try to compile this, it should compile. Yes: there are warnings about unused names and still that missing documentation. For the unused names, the convention in Rust, and it's not specific to the kernel, is the following:
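At this point the file looks roughly like the sketch below. The field names and the `KernelModule` trait follow the Rust-for-Linux `kernel` crate of that era and have changed in later versions, so treat this as an illustration rather than code that builds against any particular tree:

```rust
// SPDX-License-Identifier: GPL-2.0
//! Scull module sample (sketch against the kernel crate of the time).
use kernel::prelude::*;

module! {
    type: Scull,
    name: b"scull",
    author: b"Rust for Linux Contributors",
    description: b"Scull module sample",
    license: b"GPL v2",
}

struct Scull;

impl KernelModule for Scull {
    fn init() -> Result<Self> {
        Ok(Scull)
    }
}
```

The `module!` macro plays the role that `module_init`/`MODULE_LICENSE` and friends play in C, and the trait implementation is the Rust counterpart of the init callback.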
You can prefix them with an underscore, which says: yes, I know they're not being used. The other warning is about missing documentation, so we add a doc comment here for this module; say, it's a scull sample. OK, so if we try to build this, it now compiles, with no warnings, and in fact we can try to run this kernel. And if we run it... oh, what's going on here? It panicked with "not yet implemented" on line 15. The thing is, when we asked the editor to implement the missing members, it doesn't know what the implementation needs to be, so it inserts this todo!(), which is a macro in Rust that type-checks as whatever your result needs to be but panics at runtime. So what we need to do here is return Ok with a Scull value, and Scull is just our empty struct. So let's cancel this, make again, and try to run it. Now, this time I expect it to run, but we won't really be able to see anything different. It was good that the todo!() was there initially, because we saw that our code actually ran and that it panicked because an implementation was missing. And now the kernel booted and nothing happened there. So let's power off, and I'm out of that. So that was step two; let's commit it. The next step is actually quite a simple one: we'd just like to print a hello world when the module is loading, right? And the way to do that is this pr_info! macro; let's print "hello world". So if we build that and run it, when we boot the kernel this time around, we're going to get a hello world message there. Let me run it again for you. When it runs, if we scroll up a bit, we'll find it here, prefixed by "scull", which is the name of the module, and then the hello world message that we wrote. So now we know that our module is running and that we can print something to the kernel log.
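As an aside, todo!() behaves the same way in userspace Rust, so the panic we saw at boot can be reproduced outside the kernel; the function name below is mine:

```rust
// todo!() evaluates to any required type, so this compiles...
fn init_not_done() -> i32 {
    todo!()
}

fn main() {
    // ...but reaching it panics at runtime, which we catch here.
    let result = std::panic::catch_unwind(init_not_done);
    assert!(result.is_err());
    println!("ok");
}
```

This is why the generated skeleton compiled cleanly but panicked the first time the module's init actually ran.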
One thing I should mention briefly: this is similar to the C version, but this one is actually quite safe. In the C version, since printk is printf-style, you can write %p or %s and pass the wrong pointer or something, and that may potentially crash the kernel or cause other problems. In Rust you can't really do that. You don't specify the type; you could do something like this, see, something like this, OK? This would print "hello 32 world". We don't specify the type here, but this is a macro: it gets expanded to something where everything is checked statically, at compile time. OK, so this is step three; let's save it here. Now, the next thing we really want to do: in our driver, as I showed you in the background earlier, we wanted to have some scull devices and to be able to do things when users open, read, and write them. So what we want to do is come here and say `use kernel::file_operations`, and, as I said, there's a theme here: we want to implement FileOperations for Scull. Now, one thing that I said I liked about rust-analyzer and the editor is that you can actually ask for help. If I hover here on top of FileOperations and press shift-K, I actually get a description of what that is; in essence, it corresponds to struct file_operations. And if I press Ctrl-], it takes me to the definition of that thing. All right, and in the definition I'm going to copy this open function, which is the function we have to implement, fix up the types, and here we're going to fill in the body. OK. That was open.
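The compile-time checking of format arguments is plain Rust, not something the kernel adds; a minimal userspace illustration using format! in place of pr_info!:

```rust
fn main() {
    // The formatting macro infers and checks argument types at expansion time:
    let msg = format!("hello {} world", 32);
    assert_eq!(msg, "hello 32 world");
    // A missing argument, or a C-style %s/%p type mismatch,
    // simply would not compile.
    println!("{msg}");
}
```

So the %p-with-a-bad-pointer class of printk bugs is ruled out before the kernel ever boots.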
So we're implementing FileOperations, and we're saying that when open gets called, we want to print a message to the log saying a file was opened. So let's try to compile that and see if I messed something up. Oh yes, I did: you actually need to add an annotation here saying that you're going to build a vtable, a file_operations table, out of this. And now it compiles, and it also warns about an unused argument, so I'm going to do the same thing we did before and prefix it with an underscore. Then if we build again, it compiles with no warnings. Now, if you try to run this, and I'm not going to run it in the interest of time, there's actually no file for us to open, right? I'll save this as step four. But if we actually ran this, there would be no way for this open to get called; for example, if I look at /dev, there's no scull there. Recall, from what I said about the lifetime of a module, what happens: init gets called, we register with some subsystem, and then we get called back later on. Now, here we actually have the implementation of open, but we didn't register with any subsystem. So let's register with one. We're going to register with misc devices, which is basically a simple software device on the C side. So what we're going to do here is say we have a Registration, for Scull, and we need to give it a name, "scull", and some data, and the data is going to be nothing. I'm going to try to compile this, just to make sure I have everything. Yes. And this is actually the first appearance of something I mentioned before: this question mark, right?
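Putting the open handler and the registration together, the module now looks roughly like this. The attribute macro, the open signature, and the `new_pinned` arguments all follow the kernel crate of that era and have changed across versions, so this is a sketch, not code for any specific tree:

```rust
use kernel::prelude::*;
use kernel::{cstr, file::File, file_operations::FileOperations, miscdev};

struct Scull;

impl FileOperations for Scull {
    // Tells the macro which operations are implemented, so the generated
    // C file_operations vtable only fills in those entries.
    kernel::declare_file_operations!(open);

    fn open(_ctx: &(), _file: &File) -> Result<Self::Wrapper> {
        pr_info!("File was opened\n");
        Ok(Box::try_new(Scull)?)
    }
}

impl KernelModule for Scull {
    fn init() -> Result<Self> {
        // `?` propagates a registration failure straight out of init.
        let _reg =
            miscdev::Registration::new_pinned::<Scull>(cstr!("scull"), None, ())?;
        Ok(Scull)
    }
}
```

The bug shown next in the talk is visible here already: `_reg` is a local, so it is dropped, and therefore unregistered, as soon as init returns.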
So what I'm saying with this question mark is: this Registration::new_pinned call may actually fail, and if it does fail, then I just want to return right away and fail my init. However, if it succeeds, then I want the success part of the Result to be stored in the variable. OK. Now, if I really didn't want to do that, one thing I could do is check explicitly: take the error branch of this Result, do some error handling here, and return; something like an `if let Err`, with my error handling there. And you'd have to do something like that in C, right? You actually have to check and return the error. But in Rust, let me show you something interesting: if we don't leave the question mark here and we don't check for the error, it actually complains that we never used the Result. I'll come back to this; there's another warning it gives that I'll show you later on. But anyway, let's put the question mark back. So here, we've registered with the subsystem. And if I make this, there's still going to be a warning, but it's going to be OK. Now, if I try to run it, it's still not going to give me the file, and the reason is this. Remember that I said that most of these types, and especially things like these registrations, have a Drop implementation that undoes whatever needs to be undone, for cleanup. So the idea here is that when init returns, that's the end of the scope for the registration, so the registration is dropped, and it unregisters. We are both registering and unregistering during our init. What we really need to do is hold on to this registration and only unregister when the module is unloaded.
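The `?` shorthand is ordinary Rust, nothing kernel-specific; a small userspace illustration of both forms (the function names are mine):

```rust
use std::num::ParseIntError;

// With `?`: on Err, return the error immediately; on Ok, take the value.
fn doubled(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?;
    Ok(n * 2)
}

// The same logic spelled out, closer to how the check reads in C.
fn doubled_explicit(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = match s.parse() {
        Ok(v) => v,
        Err(e) => return Err(e),
    };
    Ok(n * 2)
}

fn main() {
    assert_eq!(doubled("21"), Ok(42));
    assert_eq!(doubled_explicit("21"), Ok(42));
    assert!(doubled("oops").is_err());
    println!("ok");
}
```

The unused-Result warning mentioned above comes from the same machinery: Result is marked must-use, so silently ignoring a fallible call doesn't pass the build cleanly.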
So what we're going to do is save this registration here. But there are a few questions that are probably relevant now, for what's happening in your presentation. First, somebody is requesting that you speak a bit slower; they think it's fast. I suffer from the same thing, so I totally get that; I do speak fast. So that's one request. Somebody was asking if these commands, I'm assuming the code you are writing right now, are going to be uploaded somewhere; I'm guessing, since you are actually doing live coding. Yes, yes, I'm doing it live. But one thing I did do was prepare this ahead of time, and I've committed a very similar version of it. There's a link to it in the slide deck, towards the end, and people can go there; it's on GitHub. OK, so there is a link in the slide deck that you can refer to when you are listening to this again; I'm pretty sure you'll be listening to this. I'll be listening to it a couple of times myself, so I can get this right while playing with it. So yes, there is one. And let me see what other questions there are. Can we just build the module, for load and unload, and not build the whole kernel all the time? We can, yes. And in fact, this is the last step. The reason I'm not doing it this way is because our image is fixed. Is it a static module? Yeah, we can build the module as a .ko, but then we have to copy this .ko into our image and rebuild the image. So the idea is basically that when you go to QEMU, this new code needs to make it into QEMU somehow, either through the kernel image or through the busybox image, one of the two, and I didn't want to be rebuilding the busybox image all the time.
And this rebuilding of the kernel, if you look at it, is really just relinking, because the pieces that were compiled ahead of time stay compiled and we're just reusing them. The thing we have to keep redoing is really just building that one file that we are changing, and then linking the entire image. Yeah, that's right. Yes, the linker links the whole thing; that's why it takes a little bit more time. But yeah, if we have time, and I don't think we will, especially with people asking me to slow down, the sixteenth step is to compile this as a module instead of built-in, which generates a .ko file that we put into our image, and then we can insmod and rmmod it. And in fact, we could see devices appearing when we insmod and disappearing when we rmmod. So yeah, it's possible, and it's something we're going to do later on if we have time; even if we don't have time, it's in the slides. There is one more question: is it possible to make an out-of-tree module with Rust? I would think the answer is yes, right? Yes, it is possible. If you go to the link for the Rust for Linux project, and from linux.git you go one level above, there's another project in there which is an out-of-tree example that you can look at. The one thing about the out-of-tree example, and this is true mostly for C as well, is that you can't just build the thing against any kernel. You actually have to build a kernel once, with Rust support, and then you can build out-of-tree modules for that kernel. So yes, it's possible, but you have to be careful to use a kernel that was built with Rust support enabled when you're building out-of-tree.
Okay, so I'll start with the questions related to what you're presenting now; there are some general questions in the Q&A that we can get to later. Yeah, go ahead. So where was I? I was trying to save the registration that I had here into my struct. Actually, before I go there, one thing I'd like to show is this: I've added a field to the struct, but here I'm not initializing that field at all; I'm just constructing it as if it were an empty struct. If I try to compile that, and this is something that's different between C and Rust, the compiler complains that we're not initializing _reg. One thing Rust enforces is that if we have a struct, all fields of the struct must be initialized. There is a way, MaybeUninit, to say that certain things may be uninitialized, but that means that when you want to use the value there's no way for the compiler to know whether it was initialized, so it becomes unsafe. If you just want to write safe code, you have to initialize everything. So what we're going to do here, and I actually need to do it on one line, is say that the field is initialized to this registration. Let's try to compile that. And now it compiles and there are no warnings. What it means now is that when we initialize this module, it registers with misc devices and stays registered. Oh, one more thing I should say: since I'm using the field here, and the compiler knows its type is the same as the one specified in the struct, which already mentions Scull, I can actually remove this type annotation on the call, because the compiler can infer it. So I save this and make again. But there is one question, probably relevant here: can you please explain why new_pinned is used? I know how to use a misc device in C, but pinning is new to me; I guess it's a Rust memory pin.
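The all-fields-initialized rule is core Rust and easy to see in userspace; the struct below is a stand-in, not the real module type:

```rust
struct Scull {
    _reg: Option<i32>, // stand-in for the stored registration
}

fn main() {
    // let s = Scull {};          // rejected at compile time: missing `_reg`
    let s = Scull { _reg: None }; // every field must be given a value
    assert!(s._reg.is_none());
    println!("ok");
}
```

In C, forgetting to initialize a struct member compiles fine and yields garbage at runtime; here the incomplete constructor never compiles.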
Yes. So there's actually a whole presentation where I spend maybe half an hour talking about this, and I'll post a link to it so you can look into all of it, but I'll try to be brief here. The idea is that in Rust, all values are movable by default, which means that if you declare something, say a miscdev registration, on the stack, you can initialize it and then move it to some other location, some other stack position. Moving in Rust means copying bit by bit; all types are movable by default. Now, of course, a miscdev doesn't allow that: once it's initialized, there's a list head in it that points to itself, so it's self-referential. If we have something self-referential, some memory pointing into itself, and we take the bits of this memory and move them elsewhere, those pointers are still pointing back to where the value was initialized. So what the pin here means is that once the misc registration is initialized, it cannot move; the pin just means it cannot be moved. And Box means that we are allocating memory for it; that's the way we pin it. I'll share a link to the talk where I discuss this, including new_pinned. There's a different way to do this that doesn't involve memory allocation, but it involves unsafe; what I'm trying to do in that presentation is actually talk to the language and library people about providing a way to do it without having to be unsafe. There's a related question in the chat: should we pin all pointers in the kernel, and how does Rust react while compiling if this contract is not met? Oh, so if you mark something as pinned and you don't meet the requirements, then it doesn't compile; the compiler actually tracks that.
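A small userspace illustration of why Box::pin is enough: moving the Box moves only the pointer, so the heap value it pins never changes address:

```rust
fn main() {
    let pinned = Box::pin(42_i32);
    let before = &*pinned as *const i32;

    let moved = pinned; // moving the Box copies the pointer, not the value
    let after = &*moved as *const i32;

    // The pinned value stayed put on the heap, so self-referential
    // pointers into it (like a list head in a misc registration)
    // would remain valid.
    assert_eq!(before, after);
    println!("ok");
}
```

A stack value, by contrast, really does change address when moved, which is exactly the bit-by-bit copy that would break the self-referential list head.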
The compiler is able to do that, which means you can never get a direct mutable reference to the pinned thing, so you are not able to move it, because there's no way to get hold of it in a form that allows moving. The only way to get hold of it in a movable form is through an unsafe operation, and that unsafe operation has as a safety requirement that you may change things, but in the process of changing things you must not move anything. You have to honor this requirement; otherwise your code is unsound and may break at runtime. So yeah, the compiler will enforce it. The thing is, the compiler will enforce more than it really needs to, which means that even for some legitimate uses you need unsafe. So it looks like there's a follow-up question. OK, let's say I create my own buffer... So, actually: what if you create your own buffer and then you want to put a misc registration in it, is that the question? Michael, you can unmute and ask the question if you like. Yeah. OK, thank you. Let's say I create a buffer to store some data, then I need to pass this data down some pipeline of functions. What if I just create this as a Box pointer, and I don't pin the pointer? Can I get any bad consequences at runtime, or does it just rely on the fact that the APIs sooner or later require a pinned pointer, so I have to convert it to Pin before passing it? Yeah. So the idea is that if the API you're going to call requires pinning, then in the API signature there will be a Pin there, which means that if you try to call it without pinning, it's not going to compile. OK.
Now, if the downstream thing you're passing to doesn't require pinning, then it's not going to be part of its signature, and you can just allocate the data and pass it there. As an example, if I go to the definition of Registration here on my screen and look at the register function, here it is: you see there's a Pin on self. So the API, which I happen to have implemented, says it needs to be pinned; it doesn't matter whether it's a Box, a stack variable, or a reference-counted thing, but it needs to be pinned, and the implication, of course, is that this API requires it. But if the API didn't, it could just take self and accept anything, including pinned things. Does that make sense, Michael? Yeah, he said thanks, got it. One more question: the lifetime of this on the heap, is it until unpinned or the end of the program? I'm guessing they want to know the lifetime of the heap allocation. So the lifetime of the Box itself is whatever its lifetime is: when it goes out of scope, it gets freed. The lifetime of the pin is the lifetime of the outer thing. The requirement for pinning is that you don't move the value before you drop it, right? And drop is called when the value is about to be destroyed; once drop has been called, it's not pinned anymore, and you can do whatever you want with it. In the case of Registration, for example, the drop of a registration is to unregister, which means those self-referential pointers are unlinked, and it doesn't really matter if you move things afterwards, because they're not initialized anymore. So it's pinned up to the drop point, when you stop using the thing.
And then the pin goes away and you can do whatever you want with it. Does that make sense? I hope it does; if it doesn't, I'm happy to try again right now. Yes, it looks like it made sense to the person who asked the question. And there is another one: how can I enable this scull module in make menuconfig; is it part of the config? Right, so if you come to make menuconfig, it would be the same way you would enable any other kernel module. Yes, it's no different; it's exactly the same as C. I can search for the scull module here, and you can toggle it to a star or not. OK, sorry, I think we'll take the rest of the questions later; let's continue. Looks like we have about 15 minutes left. We only have 15 minutes? Yes. Let's see: we did four steps out of 16. So, OK, let's try to compile this. OK, looks like it's all right. So if we run QEMU now, we have a /dev/scull, because now the registration went through and stayed alive, since we stored it away in our state after initialization. And if you try to do something with it, try a cat: the message that we printed, "file was open", appears, because the file gets opened; remember, when there's a cat, there's an open, then a read, then a close. The open was called, but then the read failed. And if you try to write something to it, again the file was opened, but the write call failed. And the reason for that is that we didn't implement read or write; we only implemented open. OK, so let's save this; let me see, this is step five. Next, let's try to implement read and write, quickly. Back to the code.
So one thing I can do here is go to the FileOperations definition, copy the read and write signatures from there, change some types to simplify things, come back to this, and add the namespace here. And then we have these IoBufferWriter and IoBufferReader types, which aren't used or imported yet, so we import them. OK. So again, we have the FileOperations block: it starts here and ends down here, and initially we only had open. What I did is declare read and write. Right now they're empty, and if I try to compile, it's not going to compile, because I need to return something. So what I'm going to do is a pr_info here saying the file was read, and something similar here saying the file was written. And what we have to return, in the read case, is how many bytes were read. I'm going to say zero bytes were read, meaning there's nothing to read in the file. In the write case, instead of saying zero, because then the callers would just keep calling write to see if they can make progress, we're going to lie and say we've written everything they asked us to write. One brief thing here, a small difference between Rust and C: for the last statement in a Rust block, if you don't add a semicolon to it, its value is the return value of that block. OK, so this is equivalent to doing something like, let me come back here, this, with an explicit return and a semicolon. We can remove the semicolon and remove the return, and that's it. Let's compile this and see if it compiles. Yep, it compiles. I'll try to run it, and what I expect to see is "file was open" followed by "file was read" in the case of reading, and "file was written" in the case of writing.
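The expression-as-return-value rule mentioned above is plain Rust; a tiny illustration:

```rust
fn with_return(n: usize) -> usize {
    return n; // explicit return with a semicolon, C style
}

fn with_expression(n: usize) -> usize {
    n // no semicolon: the block's final expression is its value
}

fn main() {
    assert_eq!(with_return(7), with_expression(7));
    println!("ok");
}
```

This is why the read handler can end with a bare `Ok(0)` rather than `return Ok(0);`.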
Let's see if it's there. If you look at it, you see we get "file was open" and "file was read". And if we try to write something to it, "file was open" and "file was written". So this is good; I think that was two steps. There is one last step, a very interesting one, that I want to show you before we stop, since we don't have much time, because this is actually quite a difference between Rust and C: it's a case in which you'll see that Rust actually saves us from a potential use-after-free. So let me first save this state; that's steps six and seven. OK. So what we really wanted, if you remember the diagram, was for each device to have its own attached data that we could return on read, and that write would update, and then on read we'd return it. So what we'd like to have is something like this: some device state, say a device number, which is a usize, and some contents attached to it, which is a vector of bytes. This is the state we wanted to have. OK. And really what we want is that when open gets called, we get a pointer to this device state that we've allocated; an instance of this device state is attached to the registration. The way we do that in FileOperations is through this OpenData associated type, which at the moment is defined as nothing. We're going to say it's a Box of Device, some memory allocated for the device state. And you see there's no pinning here, meaning it's not a requirement that it be pinned. Now, when I do this and my open gets called, the type of the open data is Box<Device>, so I can access, say, the device number from it.
If I try to compile this, it's going to fail, because when I register, I actually have to specify what it is that I want attached as data to my registration. So here I need to construct a Device: let's set number to one and contents to a new, empty vector. OK. And note that I have a question mark here, saying this may fail; there's an allocation involved. If the allocation fails, I return right away; there's nothing to clean up. Then there's also a question mark here, which means that if this fails, I return right away, but there actually is cleanup to do: it will free the allocation I had just done. There is no such thing in C, right? In C, what you'd have to do here is, if this failed, free that memory explicitly before returning. And I may forget, and then I have a leak. So let's compile and see if I missed something. There's a warning saying contents is never used, which is fine. OK. Now, that was one step, and now this is the part where you'll see one big difference between C and Rust. Our open is being called, and when open is being called, we know the registration is active and we know the context is available. But during the other calls, we actually don't have access to this open data; all we have access to is the data that is returned during open. And we want them to be the same thing, so we can say the Data type attached to an open file is a Box<Device> as well. OK. And this means, to make the types match, that this is now a reference to a Device, and this one is a reference to a Device as well. All good. Now, when I return, I have to return the context here, so let's say I try to do that. OK.
And let's see if I got the syntax right. OK, my syntax is fine. Now this is an error that Rust gives us that C wouldn't. Here's what's happening: my open function has been called, and I've been given a reference to a context, and I'm trying to return that and store it away for later use in read and write. But what Rust is telling me is that the return type expects to own a context, and I gave it just a reference to one, meaning we borrowed a context. So this is not allowed; it's complaining about the type. You might say, oh, if it's because it's a reference, let's try to dereference it: we could put a star here and compile again. And then what happens here? We're losing the message because the screen is messed up; I did a reset, and now we get to see it. It's saying that the Box<Device> does not implement the Copy trait, meaning you can't just make a copy of it. If this were, for example, an integer, we could just copy the integer and this would have worked fine. But this is an allocation with a pointer to it, so you can't just make a copy of that pointer. So this doesn't work. And this actually saves us, because this is a big source of bugs in the current kernel. What happens is that a misc registration can actually go away, meaning no new open calls can be made; however, files that were already open can remain open. Which means that if we were allowed to just keep a copy of the context here, files that were opened and are then read or written would be accessing freed data, a freed pointer.
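The same refusal can be reproduced in userspace Rust; the struct and variable names below are stand-ins for the ones in the module:

```rust
struct Device {
    number: usize,
}

fn main() {
    let context: Box<Device> = Box::new(Device { number: 1 });
    let borrowed: &Box<Device> = &context;

    // let stored: Box<Device> = *borrowed;
    // ^ does not compile: `Box<Device>` is not `Copy`, so we cannot
    // duplicate the owning pointer out of a borrow. That duplicated
    // pointer is exactly what becomes a use-after-free in C.

    assert_eq!(borrowed.number, 1);
    println!("ok");
}
```

Reading through the borrow is fine; what Rust forbids is smuggling a second owner out of it.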
So this is something that Rust protects us from; it doesn't allow us to do it. And this is actually an existing problem in C code in the kernel. The solution here really is, instead of using just a Box, a simple allocation of the device, to make it reference-counted. And in fact, Rust actually makes reference counting easier. In C, you can see what you have to do: you have to embed some state within the struct that makes it ref-counted, and then it's always ref-counted. That's not the case in Rust: we have this type, it's called Arc in regular Rust, and in the kernel crate it's Ref, which manages the ref counts for us. You're saying it can do that automatically, that Rust kind of helps you? Yes. Because in C we handle some of that freeing with the devm resources, so we don't have to explicitly free things in some cases. But devm comes with its downsides too. Yes, exactly; when it gets released can be problematic for this particular case that you're talking about. That's exactly it: we use devm on the C side and say, OK, we've solved the problem, but during unregistration things have been freed while file pointers can still be around, so even our attempt to fix this in C didn't really cover it. Yes, the lifetimes; it's a complex model to manage, and it comes down to Rust's lifetimes. Yes, exactly; we're on the same page there. So let me just finish this, it's quick, and then we can open for questions in whatever few minutes we have, if there are questions. We just have two minutes, actually a minute, but let's wrap up the presentation. And the questions, I think, are mostly generic questions asking about the status of Rust code in the kernel; they can contact you on the chat. Yes.
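In userspace the equivalent type is Arc (the kernel crate's counterpart was called Ref at the time); a sketch of why refcounting fixes the lifetime problem, with stand-in names:

```rust
use std::sync::Arc;

struct Device {
    contents: Vec<u8>,
}

fn main() {
    let state = Arc::new(Device { contents: vec![1, 2, 3] });

    // Each open file clones the Arc, bumping the reference count.
    let open_file = Arc::clone(&state);

    // "Unregistration": the registration drops its reference...
    drop(state);

    // ...but the open file still keeps the device state alive,
    // so later reads cannot touch freed memory.
    assert_eq!(open_file.contents, vec![1, 2, 3]);
    println!("ok");
}
```

The allocation is freed only when the last reference, registration or open file, goes away, which is precisely the guarantee the C side tries, and sometimes fails, to uphold by hand.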
Yeah, I'm happy to try to answer questions there. I don't think we got as far as I wanted in the code lab, but I did get to show an example of a place where Rust saves us. There were a few others later on, but people can see them in the presentation. The latest changes I made make the device ref-counted: I just replaced a bunch of Boxes with Arcs. Thanks, everyone; if you have questions, send me messages and I'll try to answer them. Were you able to cover everything you wanted, Wedson, or should we continue a few more minutes? Candice, how are we doing on time? Okay, let me switch back to the presentation then. So, stepping over all of this: what we would have done is build the driver I described in the beginning, and the code is available at the link I shared earlier, which contains all the steps implemented. If you want to see them completed, that's where the code is. Another thing I wanted to mention is that we have more sessions coming up. One is on setting up the development environment: for this session I provided you with a VM with everything already set up, so in the next one we'll set it up from scratch, and if we have time I can show how to contribute to the project and perhaps how to debug things with GDB. And then we have a session on writing async code in the kernel, which is basically a way to write linear code: you write the code linearly, but it runs on workqueues,
and you never block a thread when you pause. And that's it; that's all I had for today. Thank you. Thank you, Wedson. Yes, Wedson is presenting two more sessions, as he mentioned. We're planning to schedule them in the September-October timeframe, both before the end of this year. Please look out for them on the site and join those sessions as well. Thank you, Wedson, this is great. Thanks, everyone; happy to try to answer questions. Yes, please reach out to Wedson with questions. I know there are lots of questions we couldn't get to, and part of the reason is that they didn't feel relevant to what we were doing; they were more generic questions about the status of Rust. So I made the call not to interrupt Wedson; that's on me. Thanks, everybody. Thank you, everyone. Thank you, Wedson and Shua, for your time today, and thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation YouTube page later today. We hope you are able to join us for future mentorship sessions. Have a wonderful day.