All right, well, good morning, everybody. I guess it's about time to get started, if you've had your coffee. My name is John, and I'm here to talk about the kernel. This is the first in-person talk I've done in almost three years, which, if you know what my schedule used to look like, is a pretty strange thing. I can't imagine a better place to start. It's also the first time I've given this talk at all in over a year, so a little bit has happened since I last did this. The last time I did a talk like this was in August of last year. We've merged a good 84,000 patches since then, coming from nearly 5,000 developers. We've added about 3.6 million lines of code to the kernel, much of which is AMD graphics boilerplate, but there's other real stuff as well. We've put out six kernel releases, with another one coming at the beginning of October. The point that I would make here is that there's really no way I can cover all of that, so if you're looking for a comprehensive summary of what's going on, you're going to be disappointed.
In fact, I haven't really tried to do that for a while. What I intend to do is to focus on a rather smaller number of what I think of as transformational changes: things that are fundamentally changing how we will either make use of the kernel or develop it in the kernel community. That's my theme, but let's get a few of the details over with first. This is what we've done over the last year in the kernel development community. As you can see, there have been six kernel releases, coming out on the usual nine-or-ten-week release cadence; you can almost set your clock by the kernel release schedule these days. Each one of these is a major release, with somewhere between 12,000 and 15,000 commits added to it, and each one involving the work of about 2,000 developers. This is really pretty normal for the kernel these days. The only thing that I might point out is that the nearly 2,100 developers contributing to 5.19 were in fact the highest number we've ever seen. So we do continue to grow the kernel community. We have an ability to bring people in that I think a lot of projects would really love to be able to match. There are a lot of people out there wanting to work on the kernel for one reason or another, and it helps to keep the community healthy. This, of course, is the mainline kernel as released by Linus Torvalds. But almost none of us run mainline kernels these days; only the more adventurous among us do that. What we run are what are called the stable updates, or something derived from them, usually by our distributor. This is the current status of the stable updates: there are six of them being maintained now, and they've been kept around for a period of up to six years. 5.19 is also being maintained, but that will last only for about another month. The thing that I would point out here is that these kernels have seen a lot of change, a lot of stuff going into them. 4.14 at this point (and this, by the way, is a little bit old, but almost current) has received
over 24,000 changes since the allegedly stable 4.14 kernel release was made. That is over a full development cycle's worth of changes; in fact, it's getting closer to two development cycles' worth. So there's clearly a lot of stuff left that we still have to fix after we put out a nominally stable kernel release; it takes a long time to really find all the bugs and get them fixed. In fact, I was kind of curious about just how long it takes, so I wanted to do a little bit of an inquiry into where these bugs are coming from, or actually something that's a little bit easier to answer, which is when these bugs are coming from: how old are they? And the nice thing is that kernel developers helped me in this project. This is a typical commit message; it's just a bug fix that I found in the 5.19 series. There's nothing particularly special about it, just one of many. But one of the things that developers do when they're putting in a bug fix, besides describing the bug and what's being fixed and all that, is they add this Fixes line.
That line indicates which commit introduced the bug in the first place. This is useful for the stable kernel maintainers to know how far back the fix needs to be ported and such, and it tells you how old the bug was, how long it has been in the kernel. Now, in this particular case the developer also helpfully noted that it was introduced in 4.14, but they don't have to do that, because the Fixes line alone is sufficient to tell us how old the bug was. So I looked at 5.19 and all the things that it fixed, saw where the bugs came from, and got a nice illegible plot that looks like that. But if you zoom in on that, you start to see a few things. Up there, the top line is 5.18: the 5.19 kernel fixed 268 bugs that were introduced in 5.18, so that many were carried over from the previous kernel release. It also, by the way (I didn't indicate it here), fixed something like 700 bugs that were introduced in 5.19 itself, bugs that never actually appeared in a released kernel. So you can see that. But remember, that other bug I was looking at came from 4.14; there were in fact 17 bugs from 4.14, which was released almost exactly five years ago, fixed in the 5.19 kernel, which was released last month. So these bugs hang around for a long time.
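For those who haven't seen one, a Fixes tag is just a trailer line in the commit message pointing at the offending commit; the hash and subject in this sketch are invented for illustration:

```
Fixes: 1a2b3c4d5e6f ("subsys: handle the frobnication corner case")
Cc: stable@vger.kernel.org
```

The stable maintainers' tooling can look up that hash to see which releases contain the buggy commit, which is what makes the kind of analysis described here possible.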
In fact, you can see this long tail of bugs. If you go to the other end of the plot, you see a quite long tail, and we are still fixing bugs that were introduced in the 2.6 kernels. In fact, if you look at the kernel release history since we went over to git, the entire known history, there are only three releases that did not introduce bugs that were fixed in 5.19. So these bugs hang around for a while. There's an interesting singularity there at 2.6.12, and one might think that 2.6.12 was a really terrible release that put in all those bugs. But 2.6.12, of course, is the beginning of the git era; that's when we started using the git source-code management system. So those bugs are actually bugs that were introduced in the kernel any time from the beginning of Linux through to 2.6.12 in 2005. That's why there's a whole bunch of them that pile up there. But it is an indication that some of our bugs are indeed very old. A summary of this is that we're going to be fixing bugs for an awfully long time. We are hopefully fixing them faster than we are introducing them; I intend to do some analysis to try to figure out if that's really true, but in any case, even at that rate, bugs stick around for a long time. We have a lot of old bugs in our kernel. So, moving on to some of the big changes that I wanted to talk about; I have a handful of them here. The one I want to start with has certainly been a big topic at the Linux Plumbers Conference and other such venues, which is the Rust programming language, which is, with any luck, coming to the kernel sometime soon. People might ask: why are we going to
try to introduce a new programming language for kernel development, and why Rust in particular? The answer to that comes down to a few things, but one place to start is all those bugs that we were just talking about. The Rust language is designed to make it a lot harder to introduce many types of bugs. It allows you to define types that can enforce all sorts of rules, such as locking rules. In the kernel, to access a shared resource you have to take a lock; typically, later on, you have to release the lock. But there is really very little that can enforce that in the C programming language. So we have bugs where data structures are accessed in racy ways; we have bugs where somebody fails to drop a lock in the right place, that sort of thing. Rust can enforce most of this at compile time, so those sorts of bugs just vanish. The same is true for, say, memory allocation and memory safety, that sort of stuff. The C language is plagued by this concept of undefined behavior: places where the standard says that if you do this, then anything can happen, and the compiler is free to just crash your program or go off and drink all your beer or whatever. The problem is that undefined behavior is almost impossible to avoid in an actual real-world C program. It happens all the time, and then strange things can happen. Rust is intended to eliminate that, to have behavior be defined in all situations, and to get rid of this whole big bear trap that lives in the C language and continues to bother us. The other thing is that, while we have a lot of accomplished C programmers in the kernel community, a lot of the developers who are coming into the world now are less than thrilled about working in C.
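To make the locking point concrete, here is a small user-space Rust sketch (not kernel code; the kernel's own Rust abstractions differ) showing how the type system can enforce lock discipline: the data lives inside the mutex, so it cannot be touched without holding the lock, and the lock is released automatically when the guard goes out of scope.

```rust
use std::sync::Mutex;

// The protected data lives *inside* the Mutex: there is no way to
// reach the i32 without first taking the lock.
fn increment(counter: &Mutex<i32>) -> i32 {
    let mut guard = counter.lock().unwrap(); // take the lock
    *guard += 1;
    *guard
    // `guard` is dropped here, releasing the lock automatically;
    // "forgot to unlock" is simply not expressible.
}

fn main() {
    let counter = Mutex::new(0);
    assert_eq!(increment(&counter), 1);
    assert_eq!(increment(&counter), 2);
    println!("final value: {}", *counter.lock().unwrap());
}
```

Accessing the counter without calling lock() is a compile-time error, which is exactly the class of racy-access bug described above.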
It's not, you know... it's something their grandparents used. They would rather have a language that helps them get programs right from the beginning, that sort of thing. So a lot of those developers are currently perhaps deterred from getting into the kernel community for a number of reasons, C being one of them. I've already seen a fair amount of interest from people saying that if they could work in Rust in the kernel, then they would like to do that. I think that bringing in a language like Rust is a key to bringing in a new generation of developers, a topic I'll come back to a bit later on, and I think that's really important. So, given all this, one might wonder: what's the holdup? This has been worked on for a few years; why isn't it there already? The answer comes in a few forms. One of them is that Rust is not the easiest language to learn, whether you're learning it from scratch or coming to it from a C background; it doesn't really look much like C. I just went into the kernel crate that's being proposed and picked out a simple function. This is a function that will panic the kernel, and if you know Rust you can read this and you know what all of that stuff does. If you can't... well, I just saw a comment from a kernel developer this morning saying that Rust is programming by smiley, programming by emoji. It really can look that way to a lot of people if you don't understand what the lifetime markers and all that stuff actually mean. It's all stuff you can learn, but it is new, it is different. And this is a problem for the thousands of people who work in the kernel community now, who know C very well, who know the code very well, and who are going to have to learn this if Rust comes into the kernel, even if they themselves do not intend to write any code in Rust. They have to be able to maintain the code that is submitted by others; they have to decide what goes in; they have to be able to fix it. So this is going to impose a huge
learning burden on a large community of developers, and that naturally is going to create a certain amount of resistance; there are people who are not pleased with this idea at all. The people promoting Rust in the kernel are signing up for a whole lot of hand-holding and developer support, and the good news is they seem to understand this, but it's going to be an interesting process to get people up to speed. Another problem with Rust is that the language is still evolving. C is pretty static these days; Rust is still changing from one release to the next. There are a whole lot of features that have not been stabilized, and the kernel needs a number of them, so we don't really have a stable version of the compiler to work with; we're having to use features of nightly releases and all that, and that makes people nervous when you're talking about building a production kernel. We've gone out of our way to ensure that the kernel can be built with a whole variety of C compilers of varying ages, so that anybody can do it. With Rust you're going to have to use the current development version of the compiler, and that makes people nervous, again with good reason. And finally, there are some things that are simply hard to do in the Rust language due to the way it works. A classic example is the doubly linked lists that kernel C code uses heavily throughout. Due to the way Rust works, you really cannot easily make a doubly linked list, because you're trying to have ownership in both directions, and Rust wants one owner for each data structure, and so on; it gets very complicated very quickly. There was a whole discussion at the Rust conference last week about pinning, and about how you safely initialize a self-referential data structure so that it doesn't get wrecked when Rust moves it around, and so on. So there are some hard problems to be solved, because some of the things we do are just hard to do. And then I would point out that
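To give a feel for the doubly-linked-list problem, here is a minimal user-space sketch (again, not how the kernel would do it) of the contortions safe Rust demands: the forward pointer owns the next node, but the back pointer has to be a non-owning Weak reference, because two owning pointers forming a cycle is exactly what the ownership model forbids.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Each node is shared via Rc; the back-pointer must be Weak, because
// a cycle of owning (strong) pointers would never be freed -- Rust
// will not let both directions own the node.
struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,
    prev: Option<Weak<RefCell<Node>>>,
}

fn two_node_list(a: i32, b: i32) -> (Rc<RefCell<Node>>, Rc<RefCell<Node>>) {
    let first = Rc::new(RefCell::new(Node { value: a, next: None, prev: None }));
    let second = Rc::new(RefCell::new(Node { value: b, next: None, prev: None }));
    first.borrow_mut().next = Some(Rc::clone(&second));       // owning link
    second.borrow_mut().prev = Some(Rc::downgrade(&first));   // non-owning link
    (first, second)
}

fn main() {
    let (first, second) = two_node_list(1, 2);
    // Walk forward, then back again through the weak pointer.
    let forward = first.borrow().next.as_ref().unwrap().borrow().value;
    let back = second.borrow().prev.as_ref().unwrap()
        .upgrade().unwrap().borrow().value;
    assert_eq!((forward, back), (2, 1));
}
```

All of the Rc, RefCell, and Weak machinery here replaces what would be two plain pointers in C, which is why kernel developers find this particular pattern painful.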
kernel developers can be fairly conservative sorts of folks at times, again with good reason. If you make a mistake in the kernel, you can create problems for incredible numbers of users, and these problems can show up only years down the line, when that kernel appears in some enterprise distribution. So kernel developers have learned to be very careful. But this works both ways, and it can also lead to resistance to bringing in new technologies that we really need to have; we're certainly running into that. This is a quote from a kernel developer, whose name I'm withholding for their own protection, from a few months ago. This developer was simply flat-out insulted by the idea that his skills were not up to the task of writing safe code in C, and that we needed to use a language that made writing safe code easier. You run into this sort of thing in any community, and certainly in the kernel community, and it is going to take some work to overcome. We're doing it; I think we're going to get there, and I think it's pretty clear that the Rust support will be merged on an experimental basis. It could happen as soon as 6.1; I think it's going to take just a little bit longer than that. It will be a subject of discussion at the Maintainers Summit tomorrow, and we might have a little bit more insight then. But it's coming soon; if it doesn't come in 6.1, it will probably come sometime early in 2023, and then the show will begin and we will see how well it really works for developing kernel code. So, the next thing I want to talk about: how many of you know what io_uring is at this point? Probably less than half of the audience. io_uring takes a little bit of explanation. This is a new API in the kernel.
Let's start by talking about the traditional Linux or Unix system-call API. If you want to read data into a buffer, you call read(); you pass it an open file descriptor, a buffer, and a length, and the kernel's job is to fill that buffer with that many bytes of data from that file descriptor. Pretty simple. But this system call has some limitations that have bugged people for many, many years. It is single-threaded and synchronous: you call read(), and nothing happens until that read has completed. Everything stops in that thread; really nothing else can go on, which doesn't work well in the modern world of large numbers of processors, where you're trying to do a lot of things in parallel. read() is also simply a system call, and one thing that Unix developers learned many years ago is that system calls will slow your program down; one of the best ways toward better performance is the avoidance of system calls. Linux system calls are faster than most others, but they are still an impediment: you've got a context switch, there's stuff that has to happen. So you want to avoid system calls whenever you can, and if you do a lot of reads, you have to do a lot of system calls. So what's another way that you could do this?
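The traditional pattern looks like this in a small user-space Rust sketch (the path here is illustrative and assumes a writable /tmp); every read blocks the calling thread and costs a system call:

```rust
use std::fs::File;
use std::io::{Read, Write};

// One blocking read: the calling thread does nothing else until the
// kernel has filled the buffer (or hit end-of-file).
fn read_some(path: &str, buf: &mut [u8]) -> std::io::Result<usize> {
    let mut f = File::open(path)?; // one system call (open)
    f.read(buf)                    // another system call (read)
}

fn main() -> std::io::Result<()> {
    // Create a small file to read back.
    let path = "/tmp/read_demo.txt";
    File::create(path)?.write_all(b"hello")?;

    let mut buf = [0u8; 16];
    let n = read_some(path, &mut buf)?;
    assert_eq!(&buf[..n], b"hello");
    Ok(())
}
```

Reading a thousand chunks this way means a thousand trips into the kernel, each one stalling the thread, which is precisely the overhead io_uring is designed to avoid.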
If you use io_uring, you end up setting up a shared memory area. This memory is shared between the kernel and user space; both sides can access it directly as memory. It's organized as a circular buffer, so you put things in at one end and pull things out at the other. In this case, user space is inserting commands, where a command is something like "read this many bytes from that file", and the kernel is consuming them at the other end of the circular buffer and executing them. These two things happen in parallel, asynchronously from each other: user space can put a whole bunch of commands into this buffer without waiting for any of them, so you can stack up a lot of stuff to do. The results then come back in what's called the completion queue, which is another shared memory area, again organized as a circular buffer, but this time the kernel is the producer, writing results into the buffer, and user space is consuming those results to find out what happened to the requests it put into the submission queue. The results, of course, can show up in a different order than they were put into the submission queue, because it's all asynchronous; everything is completed when it's completed. This brings some real advantages that people have been looking for. It allows a process to do operations asynchronously; we've had some support for that since the 2.5 days, really, but it has never worked all that well and has only supported certain use cases. This is generic asynchronous I/O that can work with pretty much any I/O device out there, and files, and so on. So we finally have a true asynchronous interface to the kernel. But the other thing is that, as long as those buffers do not empty out, user space can continue submitting operations and the kernel can continue putting out results with no system calls happening at all. Each side just checks the memory buffer, deals with whatever has been put in there, and executes it from there. So, no system calls at
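The circular-buffer mechanics can be sketched in a few lines. This is a toy single-threaded model of the idea, not the real io_uring memory layout: the producer only ever advances a tail index, the consumer only ever advances a head index, and neither side needs to call into the other.

```rust
// A toy model of one io_uring-style circular buffer: the producer
// advances `tail` as it inserts entries, the consumer advances `head`
// as it removes them. (Single-threaded sketch of the concept only.)
struct Ring {
    entries: Vec<Option<String>>, // slots for queued commands
    head: usize,                  // consumer position
    tail: usize,                  // producer position
}

impl Ring {
    fn new(size: usize) -> Ring {
        Ring { entries: vec![None; size], head: 0, tail: 0 }
    }

    // Producer side: user space queues a command, no system call needed.
    fn submit(&mut self, cmd: &str) -> bool {
        if self.tail - self.head == self.entries.len() {
            return false; // ring is full
        }
        let slot = self.tail % self.entries.len();
        self.entries[slot] = Some(cmd.to_string());
        self.tail += 1;
        true
    }

    // Consumer side: the kernel drains whatever has been queued.
    fn consume(&mut self) -> Option<String> {
        if self.head == self.tail {
            return None; // ring is empty
        }
        let slot = self.head % self.entries.len();
        self.head += 1;
        self.entries[slot].take()
    }
}

fn main() {
    let mut sq = Ring::new(4);
    assert!(sq.submit("read fd=3 len=4096"));
    assert!(sq.submit("write fd=5 len=512"));
    // The "kernel" side consumes both without any call from user space.
    assert_eq!(sq.consume().as_deref(), Some("read fd=3 len=4096"));
    assert_eq!(sq.consume().as_deref(), Some("write fd=5 len=512"));
    assert_eq!(sq.consume(), None);
}
```

In the real interface there are two such rings, submission and completion, mapped into both address spaces, with memory barriers in place of this sketch's single-threaded simplicity.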
all. This has resulted in some pretty amazing I/O speed benchmarks, if you're into that sort of thing; it has enabled a level of I/O performance that we've never really had before. And so this has a lot of people excited, because if you're trying to write, you know, a busy network server, or any of a whole lot of other things that do a lot of I/O, this really helps. But there's actually more to it than just the asynchronous nature of it. One thing worth mentioning is what are called registered files and buffers, and to explain that we need to go back to that read() system call. The read system call has to do the work of copying the data into the buffer, but there's actually a whole lot more that has to be done. The kernel has to look at that file descriptor, verify that it's valid and that the process is able to read from it, and then it has to lock down the file descriptor so it does not get closed while the read operation is taking place. Then it has to examine the buffer and make sure it is actually a valid buffer in that process's memory area; if that buffer is not actually resident in RAM, it has to page it all in; it has to lock it all down so it will stay there; and then, finally, it can put some data into that buffer. For many operations, if for example the data being read is already cached in the kernel, which may well be the case for file I/O, this setup overhead is the biggest cost of the whole operation by far. A registered file allows the kernel to do that setup work for the file descriptor, and lock it down, once; a registered buffer works similarly. You register both with the io_uring subsystem, it remembers them, and then you can do operations on both without incurring that overhead going forward. So that takes out much of the overhead of doing I/O and, again, helps to increase I/O speed significantly. There's a whole wide range of commands you can use, not just things like read and write: you can open files and accept network connections
and send messages, and there's work going on to add things like creating a new process and other operations that are not really I/O-related at all. I think in the end we're going to be able to do almost anything you can do with a system call by way of io_uring as well. So you can fill up this thing with a lot of operations, and these operations can be chained; you can put them in series. So if you put in, say, an open, a bunch of reads, then a bunch of writes, say to a different file descriptor, and a close, all of that stuff can be put in the buffer. It can all be chained so that each operation begins automatically when the previous one successfully completes, and the kernel handles all of that; user space just has to put the whole series in and forget about it until it's done. So it allows you to do some pretty complex things by way of the ring. It is, in a sense, becoming a separate API to the kernel, alongside system calls, that allows you to offload a simple sort of program to the kernel, have it all executed asynchronously, and just get the results back when it's done.
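The chaining semantics can be sketched abstractly: each linked step runs only if the previous one succeeded, and on the first failure the remainder of the chain is abandoned. This is a user-space model of the behavior, with closures standing in for queued operations; none of the names here come from the real API.

```rust
// A sketch of io_uring-style linked operations: each step runs only
// if the previous one succeeded; on the first failure the rest of
// the chain is abandoned, and the caller inspects results afterward.
fn run_chain(
    ops: Vec<Box<dyn Fn() -> Result<String, String>>>,
) -> Vec<Result<String, String>> {
    let mut results = Vec::new();
    for op in ops {
        let r = op();
        let failed = r.is_err();
        results.push(r);
        if failed {
            break; // remaining linked operations are never attempted
        }
    }
    results
}

fn main() {
    let chain: Vec<Box<dyn Fn() -> Result<String, String>>> = vec![
        Box::new(|| Ok("open".to_string())),
        Box::new(|| Ok("read".to_string())),
        Box::new(|| Err("write failed".to_string())),
        Box::new(|| Ok("close".to_string())), // never runs
    ];
    let results = run_chain(chain);
    assert_eq!(results.len(), 3); // the close was skipped
}
```

In the real interface the chain is expressed by flagging submission-queue entries as linked, and the kernel performs this sequencing itself, asynchronously, without user space in the loop.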
It's a very different approach to programming on Unix-type systems, and people are starting to run with it and do interesting things. One of those is a thing called ublk, a user-space block driver mechanism. Block drivers, being drivers for disk drives or similar devices, normally live entirely within the kernel; this allows you to move one into user space, with the communications with the kernel happening, again, through an io_uring buffer set. I wrote about this; they've done some things like a loopback block device and a network block device, and are reporting performance results that are actually better than you get with the in-kernel devices, which is a very interesting result. I think this is interesting for a number of reasons. People have said for years that microkernels are the architecture of the future: they're more robust, more secure, that sort of thing. There have been a lot of reasons why this has never taken off, one of which being that the communications overhead between the various components of a microkernel really kills the performance of the system. io_uring seems to have found a way to eliminate that problem: we now have a very high-bandwidth communication path with almost no overhead at all. So we may be entering an era where things like block drivers and other components will move out of the kernel into user-space processes, because they can be made more robust there, perhaps run in an unprivileged mode, without the performance cost that you used to see. And this is part of a bigger trend that I've been observing for a little while. This is a picture that I ripped off from Red Hat's website, because it was easy to find, of the traditional idea of how Unix systems work: you've got one very well-defined box, which is user space, that has processes in it, and another well-defined box, which is kernel space, and this very narrow little pipe that
is the system-call interface between them, and that's really the only way to communicate. I would put forth the idea that this model is going away: the boundary between the kernel and user space is becoming increasingly porous, and we're seeing strange things being done on both sides. Examples would include io_uring, which I just talked about, where you can stick I/O programs into the kernel and move block drivers out of it. I haven't even talked about BPF, which is another fundamental piece of this, but everybody's heard all about BPF, so I left it out of this talk, except for one thing I wanted to point out. If you think that BPF is a cool technology but that you're not really using it: I thought so too, and then I asked my computer how many BPF programs were actually loaded and running on it, and found 17 of them, mostly put there by systemd, but there are other users as well. BPF is here; it's taking over an awful lot of things and will continue to do so. And again, it's making holes in that boundary, because now you're loading real programs into the kernel and running them in the kernel context, in a way that is hopefully safe, able to do interesting work, and increasing the flexibility of the system as a whole. There's a subsystem out there called DAMON, and an associated thing called DAMOS; this is a mechanism that allows a lot of memory-management decisions, such as which pages you push out to swap and which ones you keep, to be put under user-space control.
There's a whole lot of mechanisms that have been added there; you can seek those out if you're interested. userfaultfd pushes that even further, of course, by deferring page-fault handling in general to a user-space process in certain cases. seccomp does a similar thing with security decisions: you can now defer decisions on security policy to a user-space process, something that was previously entirely the province of the kernel. XDP is a networking subsystem that allows the handling of a lot of network protocols and such in user space, while still taking advantage of kernel drivers, that sort of thing. The people who are really looking for high performance in networking have gone over to using XDP; it's done with BPF, of course, and that sort of stuff. So the thing that I would point out here is that, in my mind, Linux systems are going to look an awful lot different in the future, as all of these technologies really set in and take hold, and the way we program our systems changes. The system-call interface is not going away, but I think a lot of the code that we work on is going to use those interfaces only to gain access to these other ways of working with the kernel, which offer more flexibility, better performance, and ways to make the kernel really work the way you want it to. And that's going to lead to, I wouldn't say fragmentation, but it is going to lead to a world where the kernels that we are running are not all the same anymore, because depending on which BPF programs you've loaded into your system, your kernel is going to behave very differently than the kernel on some other system out there. So it's a more flexible world, and that's going to have, I think, both ups and downs, but mostly ups.
That's why we're pushing in that direction. So the last thing I was going to talk about is a subject that I refer to as generational change, talking about the development community now. This is a picture that I took in 2001 at the very first kernel development summit. There are some familiar faces there; if you go out, you will find some of these people wandering the halls, and if you go over to the Linux Plumbers Conference, you'll find an awful lot more of them. A lot of these people are still around; some of them, I think, are still wearing the same clothes. But a lot of these people are our top-level maintainers, who are, in a lot of ways, in charge of how our development community works and what we do. There are some very good things about this. This development community represents a really unparalleled depth of skills and experience. We have people who have been working on the kernel for 30 years and know it from one end to the other. They know not only how the kernel works, but also how our development process works; they know how things are done and, just as importantly, why they are done that way. I think there are very few software projects out there that have had this many people stick with them for this long. If you look at most projects within companies, people come, people go, and the people working on something now may have very little to do with the people who started it years ago, even if the software itself is as old as the kernel. So this is a really good thing; it is not something that we want to lose anytime soon. But at the same time, it brings some problems with it, including a resistance to change that I already alluded to when talking about bringing Rust into the kernel. We have people who have been working on the kernel for 30 years in C, and who don't necessarily see a reason why they should do it any other way.
It has worked very well for them until now, and so they resist bringing new things in. You will find that in a lot of areas: whenever somebody tries to bring new technologies into the kernel, we often run into that sort of resistance. We also run into it in the development process itself. The kernel project famously still uses email as its primary communication and management mechanism. There are a lot of good reasons for that; I really still don't think that, you know, the modern web-based forge systems and all that really scale very well to a project the size of the kernel, where you have thousands of developers and all this stuff going on at once. It's very hard. But to a great extent we're not really even trying. Again, email has worked very well for all of us. You know, we figured out how to configure our email systems to work well for us back in the 1990s sometimes, and we haven't really changed them much since, and don't really want to change now. So there's not a whole lot of impetus to move to a new way of developing; again, it works for the existing community. It tends to be an impediment for new developers, though, who have to figure out how to set up their email to actually deal with that sort of volume, who have to figure out that they shouldn't actually subscribe to linux-kernel because they will really regret it, who have to figure out how to send a patch out of their corporate email system and not have it be corrupted on the way out, all this sort of thing. You see people struggling with this, and I believe it turns away a lot of developers, especially those who would fix one thing and then move on, who are important for the community to have as well; a lot of these people just don't want to deal with it. The other thing I would point out, if you look at this picture: this is not the most diverse crowd that you've ever seen. If you take a picture now, it is a little bit better, but not that much better. Really, not that much
better at all. It hasn't improved that much in 30 years, and sometimes I worry that we have kind of given up trying, that the things we have tried don't seem to have worked, that sort of thing. We really need, I think, to turn over some of our community, because we're leaving a lot of talent on the table. We're missing out on the contributions an awful lot of people could be bringing, by having a community that is this undiverse. So I would really like to see that fixed; I would really like to see a new generation of developers come in who can change that situation and make our development community look more like our user community, and the world as a whole. I think we need that to be successful going forward and to last for another 30 years. And finally, the kernel's maintainers are an increasingly tired and increasingly grumpy set of single points of failure. You have people who are the sole choke points for various subsystems. They didn't generally want it to be that way; I am arguably one of them, and there are many of them out there. But they're the only people maintaining a particular subsystem, so you have to go through them, and they tend to be overwhelmed, partly because, while companies are happy to employ kernel developers, they tend to be a little bit less thrilled about paying for kernel maintainers. So people are often trying to squeeze maintainership duties into spare corners of their time, or doing it in their free time, and so on, and so they get tired; they drop out at times. It's a bit of a problem, a bit of a vulnerability in our development process, that I would really like to see addressed somehow. So, what can we do?
How can we change? Because this is going to change: these developers who've been working on the kernel for 30 years are eventually going to find something else to do, one way or another. We are going to go through a change here. It may not be this year or next year, but it's coming. So one of the key things is getting away from this single-maintainer model that we still have throughout much of the kernel. A developer I know once described the kernel as being hundreds of independent little fiefdoms, and it tends to be that way, but it doesn't have to be. There are some subsystems that have gone over to groups of maintainers who can all handle the maintenance duties, and those subsystems tend to be a lot happier and less grumpy in general. We're making progress there, and it really showed its value last year, when one of our most senior and most important kernel maintainers had a health crisis and had to drop out for quite a long time. The maintenance of that subsystem, one of the busiest in the kernel, continued at a level such that almost nobody even noticed the outage, which was really pretty amazing. A few years ago,
it would not have been that way; it would have been a serious problem. We need more of that, because these things will happen.

Documentation, of course. I can't do a talk without yelling at people about documentation, because it always kind of falls by the wayside. The developers in our community carry in their heads a lot of what people call tribal knowledge: how things work, why they work this way, why we've done things the way we have. When they go away, that knowledge is going to go with them. We really need to set down an awful lot more of it in our documentation, so that the people who come in can pick it up and not have to learn these lessons the hard way. You really see this now: you see a lot of people who submit patches that go against the way something is done in some subsystem, and they don't understand why, and they have to be corrected. We lose a lot of energy that way. In the future, if we lose the knowledge needed to review those patches and correct people, then we're going to have worse problems. So we need to increase the energy we put into documentation.
This is another thing that companies tend not to want to pay for, and so it doesn't happen. It's a problem throughout the open-source ecosystem, and very much a problem in the kernel.

And finally, I believe the kernel community has long underinvested in its development tools. This is the community, after all, that did not use a source-code management system for its first ten years of operation. That says something. But it's also the community that, once it did decide to use a source-code management system, created one that transformed how everybody develops software. The scale at which we do things tends to mean we put unique demands on tools, and if those demands can be met, the solutions solve a lot of problems for a lot of people. So the point I would make is that investment in tools has always paid off very well for the kernel community; we just don't do enough of it. We have seen some of it recently, especially coming out of the Linux Foundation, in the creation of the lore archive and tools like b4. Once again, I think these have repaid the energy that went into them over the course of months at most. We need to do more of that. We need better tools for our whole development process, so that developers coming in will have an easier time of it and can come up to speed, and stay up to speed, much more quickly.

And then finally, just to close: we need a new generation of developers, people who can shape the kernel's next generation, because things are going to change. We need to keep bringing people in. Maybe some of those new developers are some of you folks out there. I would encourage you to join our community and be a part of it; it is a fun and exciting and interesting place. And with that, I am done.
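The b4 workflow alluded to above can be sketched briefly. This is a minimal example, assuming b4 is installed; the message-id shown is hypothetical, made up for illustration:

```shell
# Hypothetical message-id of a patch series posted to a kernel mailing list
msgid="20230915120000.1234-1-dev@example.org"

if command -v b4 >/dev/null 2>&1; then
    # "b4 am" fetches the full thread from the lore.kernel.org archive
    # and writes out an mbox file ready to be applied with "git am"
    b4 am "$msgid"
else
    # b4 is not installed here; just show what would be run
    echo "would run: b4 am $msgid"
fi
```

The point of tools like this is that a maintainer never has to have been subscribed to the list: the lore archive holds the whole thread, and b4 reconstructs the series, including any tags sent in follow-up messages.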
It looks like I have a couple of minutes for questions, if anybody has any.

Is there any sort of how-to on doing that, on the documentation side? I don't remember seeing one; it took me forever to learn how to read it all.

All right. Steven asks: is there some sort of a how-to in the documentation about how to use email with the kernel community? We do have one document, called email-clients I believe, that is really focused on how to make your email client send patches without destroying them, which is an important thing to do. We refer people to that often, because you have to do it, and what you read there about some particular clients is really hopeless: if you're looking at Outlook, you know, do something else, that sort of thing. The bigger task of setting up a proper email environment, so that you can get some subset of email from the kernel community without getting the entire linux-kernel firehose, and cope with it... there's nothing there for that, and people just have to kind of experiment. Some of the tools I just mentioned include ways of setting up a more web-like interface to the linux-kernel archive, so that you can subscribe to particular discussions and that sort of thing, and make it look a little bit more forum-like for people who want that. I've written about that some on LWN. That's still at an early stage, but I think it will help some people as well.

Yes? Will we have to use a development-grade Rust compiler for the first experiments?
It depends on what you mean by development-grade, but the answer is yes. The plan, as I understand it, is that you'll have to use the current release of the Rust compiler, but you will have to invoke the magic switch that turns on the nightly features that are not actually stabilized and are not normally available in a released Rust compiler. You will have to do that for a while, because the number of unstable features that the kernel requires is fairly large, and it's going to be a while before they are all actually made into official stable Rust features.

Okay, way in the back. You're going to have to ask loudly; I'm having a hard time hearing you, sorry. If you could come up here... do we have a mic somewhere? It looks like there's a mic there. All right. One, two, three.

So the ublk user-space block driver is not just slightly faster; it's probably about the same speed or slightly faster than the in-kernel device, but it also has many more features now. And it only took about a week to write. So ublk is actually a lot more amazing than I think you even said in your slides; it's quite a game changer.

Yeah, I agree. I think that ublk, and other interfaces like ublk for other sorts of subsystems, are indeed fundamental changes. As you say, once you've moved it out of the kernel, you can write it quickly, you can add features to it, you can do things like that. It's going to speed things up development-wise as well as, perhaps, performance-wise, and I think the development-wise part is the more important part.

All right, so we have a couple of online questions. I can't see them, so let's hear them. When do you expect to see drivers written in Rust in a kernel release? When will we see drivers written in Rust in a kernel release? I don't know. When Rust itself is merged, it comes with a couple of drivers.
There's an NVMe driver that was written as part of the Rust-for-Linux patch set. It supplements the existing NVMe driver, but in fact, as the developer was saying just a couple of days ago at the Rust conference, it performs just about as well as a very highly tuned in-kernel driver. So we'll have those. When we will have drivers for devices that are not currently supported by an existing C driver, that I don't know. I think it may be a while before people are willing to trust that Rust is going to stay around and commit to having a driver in that mode. But I could be wrong on that; we'll see.

Any view on when the next LTS kernel comes out? When will the next LTS kernel come out? That is, by convention at this point, the final stable kernel release of the year, so it will almost certainly be 6.1. That's an easy one.

All right, I'm out of time. I think I'm done; I should get out of the way and let Torsten set up for his talk. I thank you all very much for your attention.