OK, welcome everyone. I'm about to start, and I thank you very much for being here. What I want to talk about is, again, one of the experiences I've had during my life hacking the kernel and being a Linux kernel maintainer. This should not be a classical lecture, so I'm open the whole time for comments, for criticism, or other opinions. But I gathered some facts I wanted to share with you, and also some impressions. And yeah, you will see that I learned gnuplot in the last few days, so we will have some graphs. So I could finally fake my own statistics. And I want to give some recommendations, and I'd also like to hear some recommendations or some ideas about a scaling problem. So let's start with the first graph already, which is not surprising. My statistics usually run from Linux kernel 3.0 to 3.10, and the number of patches has significantly increased. We have to deal with a lot more patches entering the kernel, which is totally good. I'm totally happy that people, especially companies, are working way more with upstream than they used to. This is what we long aimed for, but now it's about time to take the next step, I think. So, pretty easy, and I don't think it's surprising to you: we have an increased number of patches. What I think is a bit more interesting, though so far also not very surprising: we also have more contributors. The patches must come from someone, after all. It's not only huge patch series, which we also see; we simply have an increased number of patch authors. This is the red line. For the green line, well, I took all the data from the git repository, so you can check for yourself; I just grepped for the committer field. That is the number of committers, those people who actually take care that a patch enters some tree, so it will finally end up in the mainline kernel tree. Well, it has increased a little, but not at the same scale. And as you can also see, there's a huge gap already.
I mean, the number of new contributors is nearly as large as the total number of committers we have. We have roughly 200 new authors per release, and we have these days roughly 200 people who are in charge of committing patches to some tree. That's a huge gap. And I took the freedom to lay a spline through all these data points to see the slope, how it's increasing. The gap is getting bigger. And this is the scaling problem, the basic outline of the scaling problem. And I think if we want to maintain the high quality of Linux we all adore, we have to pay attention. I don't want to paint a picture like the Titanic, tomorrow there's an iceberg and we are all gone. But I think it really pays off to take this into account, take the next step, and put some measures in place. So the next step would be: what can we do to help maintainers deal with that amount of patches and that amount of new people? One thing to remember when looking at these graphs is that they are only based on accepted patches. That will also be true for the later graphs. For every accepted patch, there's usually additional work to be done. There are superseded patches which came before the final patch was accepted. A maintainer has to deal with patches which are bogus; you have to reject them, sometimes with lengthy explanations of why you reject a patch. This is all not covered in these graphs. And especially with all these new authors coming in, there's a lot of teaching going on. That is somehow inherent to the process, but it's still worth remembering that this is also part of the maintenance job these days: teach the new authors, introduce them to the subsystem, and that takes quite some time. And since the graphs start at version 3.0, don't assume that the situation at version 3.0 was ideal. It was not. It was already challenging at that time. And from that challenging situation, it got worse. And I expect it to get even worse, or, to put it more positively, even more of a challenge, however you want to put it.
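As an aside, the author and committer counts described above can be reproduced from any kernel git checkout. A minimal sketch (the function name is my own; it assumes standard POSIX tools and counts distinct mail addresses, which slightly miscounts people who use several addresses):

```shell
# Count distinct patch authors vs. committers in a revision range,
# e.g. count_contributors v3.9..v3.10 inside a mainline kernel tree.
count_contributors() {
    range=$1
    # %ae = author email, %ce = committer email; merge commits are skipped
    authors=$(git log --no-merges --format='%ae' "$range" | sort -u | wc -l | tr -d ' ')
    committers=$(git log --no-merges --format='%ce' "$range" | sort -u | wc -l | tr -d ' ')
    echo "authors: $authors, committers: $committers"
}
```

Run once per release tag pair and you get the data points for the red and green lines.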
The question is, how can we support maintainers? Can we offload some of the work from them, other than trying to attract new maintainers, which is of course the number one idea for solving that problem? And I remember an email Myklebust once posted when there was a discussion on the Kernel Summit 2012 mailing list about the role of a maintainer. He mentioned these roles. I don't want to read them out; you can read them on your own. A kernel maintainer is really a special set of jobs done in one person, which is OK. But I think it's still worth remembering that it's this collection of tasks a maintainer is doing, and it's OK to do it as a collection. It's not easy to rip out one task and say: OK, this one person is solely responsible for being the patch reviewer, and this other person is responsible for being the committer. So the number one rule is: get more maintainers who like to do all these jobs at once. One way of supporting them with reviewing has been tried, and it's not bad at all: the invention of the Reviewed-by and Tested-by tags, which are really, really helpful. I'll come to them later. First thing to notice: with all these qualities, it's a rare bunch of people who like doing all these things at once. But the good thing is, if you've found such a person, they usually really enjoy doing it. So it's worth supporting them. And really, if you're a developer who sometimes faces difficulties with a maintainer, I'm not saying you should obey him or her all the time, but keep in mind that there's a bunch of roles this person has to play. It's not only the role of a developer like you. It's also sometimes the role of a software architect, who has to foresee things for the next 10 years. And in that role, you sometimes have to see other things as important than the developer does. So I then wondered how many people donate Reviewed-by and Tested-by tags, which are really helpful to a maintainer.
Because for me, it makes a difference whether it's just a patch and I don't know if anybody is using it, whether it's solving a generic problem or just a problem for this one person. So the two new lines are the number of people donating Reviewed-by or Tested-by tags without being a committer themselves. So, people who just give Reviewed-by and Tested-by tags. And you see, sadly, it's about the same range of people as the committers. It's just about 200 people during the last kernel revisions donating these tags, which is a pity, because a number of people think that every patch entering the kernel should have at least one of those tags. But we're far from it. So if you're a developer, this might be another chance to help the situation. I also made the slope graphs here. Not too surprising: there is some increase, but all in all, it doesn't match the increase in the number of new authors, who, as I said before, have to be trained, have to be educated. And there's still a gap. These people are doing great work, but we need more of them. Next, a simple system on chip; probably most of you know this already. It doesn't even have a fancy GPU or something. I just wanted to bring back to your mind how many subsystems are involved, because if you want complete support for a complete SoC, you will have lots of subsystems involved: Ethernet, UART, SPI, SD/MMC, the processor itself, the flash, and whatnot. And I wanted to check how the different subsystems involved in supporting an SoC deal with the increased amount of patches, and, in general, how they are doing. It's also interesting for me, as the I2C subsystem maintainer, how I am doing. So I looked at a subsystem where I had the feeling it is working very fast in applying patches, and that was the net subsystem, for the Ethernet drivers. So what does this graph tell you? OK, it's a cumulative graph. So it says... where's my laser pointer here?
It counts all commits from version 3.0 to 3.10, which was an amazing amount of over 5,000 commits. And 85% of all the patches accepted had been on the list for a maximum of 28 days. So after a patch has been posted, after 28 days, there was an 85% chance that it had been accepted. And I think this graph is pretty much what most people want, from the developer's point of view and also from the maintainer's. Usually, you want to push out a patch and get a prompt response. And I think the Ethernet drivers are very good at this if you look: even after two weeks, you're at 80%, and after one week, you're somewhere around 70%. So prompt response, and that ensures fluid development. Then I took another subsystem. I'm not bashing; I know all these maintainers are doing great work. It just depends on whether they're paid to do this work, how many resources they have, what else they have to do. And there was a subsystem where I had the feeling, OK, things get a bit slow there. And that was the MTD subsystem. And this graph looks like this. So here, after 28 days, just a bit more than 50% of the patches had been applied. Of course, it might be the case that there are more complicated patches there, or not. But I think we can assume that there's always a set of trivial patches and a set of more complicated patches. And this is four weeks, eight weeks, 12 weeks; let's say this is roughly a kernel cycle. So after one kernel cycle, we are at 82%. So there are nearly 20% of patches which need more than a complete kernel cycle to enter the MTD subsystem, according to the stats from 3.0 on. And this is, of course, not so good for fluid development, and I think this is really something which needs improvement. Again, I'm not bashing the MTD maintainers. They do what they can. But they also have to deal with quite an amount of patches. There's a question? Yeah.
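The latency curves above come from mailing list data, which is hard to script. A crude proxy that needs only the git tree is the gap between a commit's author date and its commit date; it understates the real latency (resent patches get fresh author dates), but it shows a similar trend. A sketch, with a made-up function name:

```shell
# Print, in days, the author-date-to-commit-date gap of each non-merge
# commit touching a path, e.g. patch_latency_days v3.0..v3.10 drivers/mtd
patch_latency_days() {
    range=$1; path=$2
    # %at = author timestamp, %ct = committer timestamp (seconds since epoch)
    git log --no-merges --format='%at %ct' "$range" -- "$path" |
        awk '{ print int(($2 - $1) / 86400) }' | sort -n
}
```

Piping the output through a percentile calculation gives the "85% within 28 days" style of number per subsystem directory.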
The net subsystem is also one of the subsystems where a lot of patches enter unreviewed, and a lot of reworks are needed after the fact, after they have entered the mainline kernel. Because the maintainer doesn't have the bandwidth to review all those patches, but he's applying them really, really fast, without other people having the chance to review them. Yeah, that's true. I wanted to talk about it later, but I can do it right now. Did everybody hear that, should I repeat? No complaints? OK. Interestingly, the driver section of the network subsystem is unmaintained. If you look in the MAINTAINERS file, it's marked as "odd fixes", so just throwing the random fix in. There's no dedicated maintainer for it. And their approach to solving that is: if there's a patch on the list and it looks somewhat reasonable and there's no complaint, it will be applied. So this is one way, maybe not for the CAN subsystem, but one way of pushing the pressure, which is usually on the maintainer, back to the developers. Because if you're active in the network subsystem and you have doubts about a patch, you know you have to react fast. Otherwise it will be accepted and you'll have to clean up afterwards. This is one way to deal with it. I still haven't made up my mind whether it's a proper solution for every subsystem. But yeah, that's one thing to be aware of. There are, in fact, some subsystems within networking, like wireless or CAN, or in the USB stuff, which have their own sub-maintainers, so they handle the patches and then send them on to the people in charge of the network subsystem. Exactly. And taking over maintainership of a single driver is a good way to start gaining experience with maintaining. That would be a good start. So where am I? Surprisingly, I'm in the middle. Actually, I thought it would be worse. But given that I do all that in my free time, I'm quite satisfied.
Maybe some developers are not, because after four weeks, I still have only about 80% applied. But for me, as a person working on that privately, I think this is a pretty good scale. For somebody who has a deadline because an SoC needs to be pushed mainline, this might not be good enough. And there we have a bit of conflicting interests. Well, that's how the situation currently is. And now, yeah. Does that mean you reply slower? Once again, please. Slower? Yeah, and then people can say, hey, this can be wrong, please look at your patches again. Yeah. I usually wait for a few days to see if somebody shows up. I mean, I have more than 70 drivers for I2C controllers on different SoCs. I usually don't have the hardware anyhow, and most of the patches deal with hardware-specific issues. So I just wait to see if some user shows up and says yes, no, perhaps. That's one thing. Another thing is, I stopped monitoring the lists daily. I just can't afford that at the moment. I'd love to, because there are some patches that would really benefit from an immediate reply. But that would mean lots of context switches for me during the day, so I couldn't focus on other things, and my current situation doesn't allow that. So I glance through the mailing list every day to see if there's something urgent, and then usually I take one weekend or so and work on patches in bulk. Urgent means I need to apply it to the current Linux tree. I have two branches: for-current, for the current -rc series, and for-next, which goes into the next cycle. And urgent is stuff which needs to go into for-current. So the patch description should make clear that we're dealing with a bug here which needs to be fixed right now, and also mention if it should go to stable; that I call urgent. Or there's another case: if it's part of a cross-subsystem cleanup or subsystem-wide work where they need my ack, and otherwise a whole bunch of work would stall, I call this also urgent. Does that answer your question?
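The for-current/for-next split described above can be sketched in a couple of shell functions. The branch names follow the description in the talk; treat this as an illustration of the workflow, not anyone's actual scripts:

```shell
# Create the two queues off a base commit: for-current for urgent fixes
# aimed at the running -rc series, for-next for the coming merge window.
setup_queues() {
    git branch i2c/for-current "$1"
    git branch i2c/for-next "$1"
}

# Apply an urgent patch (an mbox file) to for-current;
# 'git am -s' adds the maintainer's Signed-off-by while applying.
apply_urgent() {
    git checkout -q i2c/for-current && git am -q -s "$1"
}
```

The same `git am` call against `i2c/for-next` would handle the non-urgent bulk work.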
OK, I collected some more subsystems, and this gets a little bit messy. The nice thing is that most subsystems, here again is the one-release-cycle border, most subsystems manage to get 95 or more percent of their patches into their trees within one release cycle. That's pretty good. Yet I wanted to focus... oh, yeah, another question? This thing. I could imagine that after one merge window has closed and you've worked through the merge window, you still have some free time to prepare the next one already. That's my assumption. Another idea might be that they send a pull request at the beginning. That would be two weeks: they send a pull request at the beginning of the merge window and then fix things up with another pull request at the end. But still, also keep in mind that three months is a very loose approximation of the release cycle. It could be something else. Coming back here, what I was interested in a bit more is the quick response time. So I concentrated on the first four weeks to make that more visible. And yeah, most subsystems manage to have half of their patches accepted within the first two weeks. I don't want to comment too much on it; it's just a matter of fact these days. I do wonder a bit about the rule of thumb some people are telling new authors: well, if you didn't get a response, ping after a week. Maybe we should increase that a little bit. Because we are still talking only about accepted patches. There's a lot more to it: there are rejected patches, there are superseded patches, there are architectural decisions about the subsystem as a whole. So the dream would be to be more like this, without accepting junk. The fact is, in general, we are here. And there are some subsystems having problems here. And my impression is that the tendency is that it's getting worse. Yeah, I said most of that already. That's also what I found out, sadly, again, for the MTD subsystem: in the process, patches are lost.
I found some totally valid bug fixes, which would cause oopses if not fixed, which were simply overlooked. And that's also a sad story. But I couldn't find a way to measure that. It's really hard, for me at least, to scan mailing lists and make assumptions about which mail is related to which, what is a superseded patch, and whether it has been accepted. I couldn't find a measure. But it's no surprise, it's easily understandable, that if you are overloaded, it's also easy to miss patches which really fix bugs. So, my weather forecast: if not much changes, the situation will get worse. I expect either increased latency in the subsystems or questionable patches going in. We had that a little bit, especially with device trees. When maintainers were too overloaded and too surprised by what they should do with all those device tree bindings (what bindings? what are bindings?), some device tree bindings entered the kernel that we now have problems with. So what can we do? I'm targeting a few groups here. The first one I called users, not users of the kernel directly, but people who maintain BSPs, for example, and do kernel updates for customers without really writing code themselves. I know these people are out there. And to those, I'd say: it's really helpful if you give comments about patches if you use them. Say, hey, we have this product and we're using this patch set because the driver is missing this feature. Or: this patch is working great, but it still misses catching this situation. All that kind of information is useful, because for a maintainer, I think it's very difficult if you have a patch somebody wants to get in, and there's no reaction to it, and I have to make the decision all alone. Well, I'm trained to do this, but it really helps if other people give comments on it.
And if you really tested it and it works fine, consider giving a Tested-by tag. Check other mails to see how to use them; they're getting more and more common. This increases my trust that the patch is worth applying, and I don't have to spend so much time reviewing its usefulness. And if you're using a patch over and over, if you're updating the kernel and still use a patch which seems valid to you, please also consider resending it. It might have gotten lost. Check the old mail threads to see if there are comments which have not been addressed. Most of the time it's easy stuff, like the formal style of patches, which can be fixed easily, so don't hesitate to resend them. Then they're not lost anymore, the issues can be fixed, and people will know what users want from the drivers. That would be a good start. For developers: I'd like to keep the amount of superseded patches low. One thing that achieves that is to always give your best shot. There's nothing wrong if you don't know the subsystem and don't know how to do it and just try things out. That's what the maintainer is also there for. Maybe the list will help, or maybe the maintainer. There's nothing wrong with not knowing things. I think there is something wrong with being sloppy. With just: oh well, that's good enough; I know the optimal solution would be something else, but I'm too late, it's too much to type, or whatever. This is really annoying, and convincing people to do it properly sometimes costs more time than doing it properly right from the start. So my wish here is really: always give your best shot. Always ask yourself, OK, is this really the best solution for now? Yes, I know "best" is a subjective term. And the benefit you gain is that it will make you a better developer, after all. And if you know that your solution is not the optimal one, be honest about it.
I don't like it if people try to sell me cheap things and praise them like they would be the best thing. There are, I do understand, cases where suboptimal solutions are good enough. If there's an ancient driver which is hardly used anymore and it's in the way of a cleanup, no, I won't force you to rewrite the driver; then the less intrusive solution will be good enough. Say I'm cleaning up my subsystem and it touches some old PowerPC driver: I do understand that. But as I said, be honest about it. I usually will understand, but don't try to sell me something so that I have to find out later that something else is going on. That just costs time. And of course, as you have insight into the hardware and obviously work with the driver or with the core, I'd love it if developers took part in quality assurance. That means I'd be really happy if they could review other patches. I'd be really happy if, before they send out their own patches, they would review their own patches as if they came from someone else. That catches a lot of things. But also, reviewing other people's code, well, that's how the community works. You get to know other people, you become a better programmer, you learn new techniques; it pays off. Take part in discussions, especially when it comes to subsystem architectural things, because for me it's really important to know what people want from my subsystem, so I need feedback on that. It's really hard to do that quietly, with no one stating opinions. And also, don't be shy about taking up cleanup and consolidation work, which is welcomed all over the kernel. It's not only the responsibility of the maintainers to do that. And if all that set you on fire, try being a maintainer. As I said, if it's for you and you feel challenged by this, take a driver or try out being a co-maintainer for some subsystem and check what's involved; if it feels good to you, chances are that it really is. What can maintainers do?
Do we have maintainers in the room here? Yay, hooray, welcome. That's good, because, I mean, the Kernel Summit is taking place, so I expected all the maintainers, or a lot of them, to be over there, probably. Happy to have you here. My first advice is: work harder. Or not. From my experience, most maintainers are quite busy already, and I know it feels bad if you get lots of patches coming in and you want to do those people, who are writing drivers and who finally took part in contributing to Linux, a favor and accept their patches. But don't let this be carried out on your back. Nobody gains anything if you have a burnout. And we have to face it: there are lots of vendors coming into the game, and they release so many SoCs that I do wonder why so many are needed, but that's their decision. And as Linus Walleij often says, they're constantly pressing the fast-forward button. I agree, but you don't have to accept it. If it's too much for one person, then it's too much for one person, period. Then we have to find other ways of dealing with all the issues. "Work harder" is an argument you could always make, but I don't think it helps. Of course, if you're really lazy, then working harder might help, but I have yet to see a really, really lazy maintainer. And keep your amount of manual work to a minimum. Like with every good craftsmanship, have your tools ready. There's no standard way of being a maintainer; I found that out when I became one. There's no ready-made set of tools, because it's about workflow, and everyone has their own workflow. So you need to make up your own workflow, but do chat with other maintainers: what tools do they use, how do they solve a problem? This or that bit might be useful for you. And my top three recommendations: sometimes simple things, but they made life a lot easier. First is keyboard shortcuts. I'm surprised to see maintainers typing Signed-off-by or Acked-by by hand. I have two keyboard macros for that to make it short.
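For illustration, in vim such a shortcut can be a simple abbreviation (the name and address below are placeholders, of course):

```vim
" Expand short abbreviations into the tag lines typed many times a day.
iabbrev sob Signed-off-by: Jane Maintainer <jane@example.org>
iabbrev ackb Acked-by: Jane Maintainer <jane@example.org>
iabbrev revb Reviewed-by: Jane Maintainer <jane@example.org>
```

Most mail clients and editors have an equivalent macro or abbreviation mechanism.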
I even have complete emails on a keyboard shortcut, because the same issues come up over and over again and I don't want to rephrase them every time when teaching people. That really helps already. Next, of course, something most people use: git hooks, or whatever kind of hooks, are pretty useful. Do some automatic testing before you apply patches to your trees. Do some testing before you send things out. This can save you a lot of work, and especially a lot of work which you would otherwise forget. That's obvious, but still good. And I like patchwork, which is a web-based service that tracks patches on mailing lists. I will give you a short example. Due to the low resolution, it looks a bit... not so nice. So it scans the i2c mailing list. Everything with a patch or pull request in it will be tracked. So that prevents things from getting lost, as long as my mailing list was on CC. And when I go to a patch directly, I have some metadata at the beginning, and I have the complete mail thread and the patch at the end. So no matter where I am, I have all the information I need to review patches, and I can set the status: acknowledged, accepted, rejected, superseded. And my i2c list is set up so that it will automatically notify users when I change the state. And what I really love are all these small details. There's a command line utility to get the patches from the database. So if we go back... OK, I can't really see. It has this magic number. And maybe you haven't seen it, but in the mail thread there were people who had added a Reviewed-by tag. And now I have a command line client with which I can fetch the patch in mbox format, so I can immediately apply it to git. And, see here, it collected all the tags for me. This is awesome. I love that. Yeah, have your tools ready. Look at what's out there. That can save you a bunch of time, so you can concentrate on the real tasks. (A question about Gerrit.) No, I haven't. I know that people are using it happily.
I haven't felt the need for it yet. It's obviously not my tool, because I didn't feel a need for it, but I know people are using it. So I think it's worth checking out, though. I would need to find out that it solves a problem I have. If I find out, OK, I have a problem with my current tools, and somebody points me to it, or I find out, hey, Gerrit has a solution for that, I would use it. So far this has not happened, which is no criticism of Gerrit. Maybe it has a solution for a problem I don't know I have yet. Can't tell. Yeah, I agree. But I haven't really looked into it; I think patchwork might be the right approach. I think we can build something on top of that. Although I once in a while try updating patchwork or adding features, it's not my program. I can't simply change everything myself, but the mailing list is pretty open to feature requests. I love that. Another thing, which is pretty similar: optimize tasks. Be aware of that. Be aware that, hey, I'm doing that again, I do this too often, can I optimize it? It usually pays off. And the key to that is, of course, to have your tools ready. Organizations can do something, too. I'm talking here about companies like chip vendors, or organizations like Linaro or the Linux Foundation. I think they also have a role to play here. First, if they have developers, it would be great if they allowed them to take part in quality assurance during their work time. Not only to be concentrated on their driver and their deadlines, but also to have some free time to actually work on generic things; that will improve their skills. And I think there are enough people who would actually enjoy doing this if they were allowed to. I know some who are afraid of doing it because they don't know if their manager would accept it, but from their point of view, they would love to, and I think it would be good to just let them.
It would even be a win-win-win situation: the kernel gains, the developer gains, the company gains. I will also be more strict about internal education. I mean, sure, I'm responsible for my subsystem, and if there's a developer not knowing my subsystem, I will teach the things needed. But I will be more strict about teaching basic kernel submission guidelines. This is not about private people: if there's somebody just trying to do something on their own, OK, I'll be there. But if I know there are one or more people at a company who already know all this stuff, I will more aggressively point newcomers from that company to get this education in-house. Because I think that's too much of a burden for a maintainer, and it would be nice if organizations, and some do already, would do this more in-house. I will pay attention to that. Speaking of the fast-forward button again: we all understand that it would be a nice thing to have drivers in as soon as possible, but given the current workload and overload, things might take time. I think a number of companies are coming to understand that, but of course the best solution would be to increase the manpower, the bandwidth, so we get the best of both worlds. And what I would really, really love is for being a maintainer to be a job of its own. I know a number of people who are doing this, like I do, in their free time; there are people who are doing it kind of aside from their regular job; and only very, very few people are directly paid to be a maintainer. And taking into account the importance of this job and what it does for the Linux ecosystem, I think it would be great to have it as a job of its own. So if there's a talented person who is really good at maintaining and plays a key role, I think it would be nice to really hire that person as a maintainer.
It would be great if that could be done by an independent organization, like Linaro or the Linux Foundation, as it's done with Linus Torvalds and Greg Kroah-Hartman. Those are good examples. Then you avoid the question of: ah, OK, this maintainer is paid by that company, that sounds political. But on the other hand, we're developers, and I trust most people who are maintainers to have enough credibility that they will take care of their subsystem first. In general, I think this is the key item for improving the situation: make maintainer a full-time job. Not literally putting it on your web page: "We're looking for a maintainer for the MTD subsystem." Of course, you should approach people who are able to do this, who have gathered enough merit to do this. But it could be a way to ask them: OK, we're funding the work if you look after the subsystem. I tried to update the diagram from before a little bit. It's a bit vague. What it means is that, from what I know, which is limited, green fields are the parts where I think people are more or less paid to maintain that part. Then there is some, can you see it? No, not very well. This is something between yellow and green: they're asked to look after that part of the Linux kernel during their work time. So it's not fully, directly paid, but somehow expected. Yellow means people, or subsystems, where the employers are OK with them doing a certain amount of work on the subsystem aside from their regular work. And red means all the maintaining is done unpaid, as a freelance effort. That doesn't have to be directly related to the latencies we saw before, because here Ethernet is red and it's still not the worst subsystem of all. But I think if we look at that, there should be a lot more yellow and green. Here, flash, the MTD subsystem: that's a crucial part for lots of boards and devices out there, and I think it's quite risky to have that red, as with some other subsystems.
I mean, Watchdog. That should also be a well-maintained subsystem. Again, no complaint against the maintainer. He's doing a hell of a job with what he can, but his resources are limited. So that's what I wanted to say: we need more manpower, more bandwidth there. Part of it is to recruit more people. Part of it is to make maintaining a job of its own, where people are allowed to concentrate on it. Those are my conclusions and assumptions, and we have two or three minutes for questions. Thank you for your attention so far, and let's go for it. I think it's the next step, and I think we'll be able to do it. Thank you. Oh, yes. Yes. You have a big driver and want to get it upstream as soon as possible; why is it so tough? Try to get the reviews on your own. Maybe find someone else who's experienced in that subsystem and ask that person if he or she is willing to do a review. Try to get users: say, OK, here's my driver, please test it. As I said about these tags, Acked-by, Reviewed-by, Tested-by: try to get people motivated to donate them for your driver, so that the maintainer gets an idea: OK, people are using this themselves, at least some people have had a glimpse at it already; that raises the level of trust. And if it's really urgent, you can hire people to do that. Would it make sense to create standard testing modules, just to decrease the number of bugs, by doing testing alongside review, standardized testing to decrease the number of bugs? Two questions, actually, which I hear in that. The first is: how many patches are dealing with bugs in drivers? And that is pretty hard to find out, actually. I mean, there are the patch count statistics on LWN, and people pay attention to them somehow. They also criticize them somehow, because the sheer amount doesn't tell much; it could be a large rewrite series. But still, it's commonly accepted that finding out whether a patch is useful is incredibly hard to determine.
Of course, everyone wants to have a graph of who did the most useful patches. But yeah, so you're saying if we did more testing, we would have to deal with fewer bugs. What we were talking about the day before was something like that: every subsystem should have a test module, so that if you wrote a driver, you load this module on top and it tests some basic set of functionality. Yeah, for I2C, for example, you would definitely need a hardware setup; you would have to have certain slaves with a special configuration to figure things out. But I think it's worth considering. I'd need to see, I can only speak for I2C, what a good setup would be and what I would need to expect from people. I'll put it on my to-do list. That's good. So, how many bug-fix patches there are, that was the other question. Yeah, exactly; that's the architectural side of being a maintainer, right? You need a setup of sub-maintainers and committers, that's the difference. As a rule of thumb, I'd say go down to driver level: you can be a maintainer for a driver. If it's a huge, large driver, and network drivers can be really huge, maybe that can be split up as well into subparts. But as a rule of thumb, in I2C, I'd be happy if there's somebody saying, OK, I will take care of that driver. That's OK, and that's absolutely worth an entry in the MAINTAINERS file, which is important, because then the get_maintainer.pl script will put you on CC and I know, OK, I won't have to deal with that patch alone. Another option: for small subsystems, it might pay off to become a co-maintainer, like for Watchdog, for example. It's not rocket science. It just has to be done. Yeah. After you gain some trust. I'm not too hungry, so I will answer more questions, but if you're all hungry, I'll let you go. Yeah, that's it, and thank you very much.