Okay, fine. Okay, welcome to this second session here, by Keith Packard. This one, as you see up on the title slide, is "Changing the X Server Development Process for Fun and Profit." Sure. Yeah, so give Keith a big hand, a big welcome.

Thanks. Thanks for dragging yourselves back from lunch early. Okay, this is a talk I've been wanting to put together for quite a long time, and I think it's about time to try to give it. We have enough data collected now that I think it's a reasonable time to review what we've done in the last three or four years with the X server development process. We've gone through a pretty radical transformation, trying to get to a more reliable release process and a better quality of code being put into the X server. There were some people at the beginning of the change who decried it, and there were people who were very excited about it. I want to see if the fears were realized and if the hopes were attained.

So back in the XFree86 days and in the early X.Org days, we had what we'll call the old, old development model, where all of the people with commit access would just commit directly to master. At some point the release manager would say, please, please only commit stuff that you really want to have in the release. And then at some point the release manager would say, okay, I'm just going to cut the release today, and they would ship the top of master and tag it. This obviously required a lot of cooperation and collaboration. It worked great in the XFree86 days, when only four people were allowed to commit code to the X server at all. When we changed to the X.Org environment, lots of people could contribute to the X server, and we had a lot of people with direct commit access to the repository, so there were some scalability issues. Oftentimes, inappropriate patches would slip in at the last minute, and we had some brown-bag releases. But it obviously imposed fairly low overhead from the individual maintainer's perspective: they just committed what they wanted, and it got shipped in the next release.

Then we decided this was a little too much like anarchy and we wanted a little more control. The same process was followed for a long time during the integration phase of the release, where everybody would commit directly to master. At some point the release manager would tag a release branch. People would continue to commit to master, and then list the commits they wanted cherry-picked onto the release branch on a wiki page. So you'd have these git SHA IDs on a wiki page, and the release manager would carefully cut and paste those into git and do a git cherry-pick. Which was great, because that way the release branch had never been tested by anybody. What could possibly go wrong? And there was this small matter of overhead for the release manager, who was constantly cutting SHAs from a wiki page and pasting them into the shell. So there was a bunch of overhead. Still, this worked pretty well, and we did it for four or five releases, all the way through 1.7; I think it was 1.3 through 1.7. It worked surprisingly well. I did a couple of releases in this particular period and we were pretty happy with it. But the problem, again, was that the release branch was completely untested by anybody. There was absolutely nobody reviewing the cherry-picks going in, and there was no review of the code.
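A minimal sketch of what that wiki-driven flow amounted to; the branch name and commit ID here are hypothetical stand-ins, not taken from the talk:

    # The release manager copied each SHA listed on the wiki page
    # and replayed it onto the release branch by hand.
    git checkout server-1.6-branch
    git cherry-pick 1a2b3c4d   # ID pasted from the wiki page
    make                       # hope the untested branch still builds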
Because, again, anybody could commit to master. So one of the things we wanted to do was increase the level of review of the patches. We wanted to have some actual discussion before the code was put into the master branch. And we wanted to have people testing what was going to be released, which is to say that the master branch was going to be what was released, and people were testing it.

So Peter Hutterer came up with the brilliant idea of stealing from successful projects, and suggested that perhaps we should steal from the Linux kernel, which has a fairly reliable release process. It's fairly predictable, and releases are in pretty good shape. That release process is as follows. Only one person commits to the master branch: it's Linus' tree, and only one person can commit to it. In the X.Org model we don't have a single person, but we do have a single tree. The X.Org master branch lives in a well-known place, so we don't have this vague anybody's-tree situation. Everybody knows where the master of the Linux kernel lives: it lives in Linus' repository. We acknowledge the fact that there is one central locus of development, and it's the master branch of the X.Org repository.

Lots of people have started publishing their own trees, however, and this has been a tremendous feature. It means that we can look at other people's trees and review code before it gets merged into the main tree, and people can test their own stuff and do development in their own repositories before it gets merged into master. Obviously this was made possible because we switched to git. The old models were all based on CVS, where you could only do linear commits, because branching in CVS is a disaster. Some people publish their own trees, but if you just want to publish a patch or two, you just post the patch to the xorg-devel mailing list. You don't have to publish your own tree. This means that people don't need any special privileges to publish patches. If they want to fix just one thing, all they need to do is post a patch to the mailing list. They don't have to have an account on freedesktop.org, and they don't have to have a public git repository where they can push their patches. So the hope was that this would allow minor developers, people who just wanted to scratch a small itch, to join the community fairly easily. The other requirement was that all patches had to be reviewed before they'd get merged. That was the plan.

I think Peter proposed this first just for the 1.8 release, and we've done two releases with this model. It's not quite the Linux model. In the Linux model, Linus really does have the last say. He's really the arbiter of style and quality and feature sets; if he doesn't like something, it's just not going to make it in. That, in a lot of ways, is because the Linux kernel has a much better history of cooperation and collaboration than the X environment. We don't have a strong single leader in the X environment whom everybody trusts implicitly, like the Linux kernel does. We have a slightly different model: general agreement is sufficient to have code merged. If a patch has been published and a lot of people are saying, yeah, that looks pretty good, then the release manager is obligated to incorporate it into the tree unless there's some obvious horrible brokenness about it. That means the release management process is largely mechanical. The release manager isn't doing any serious vetting of code style.
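For illustration, a minimal sketch of that no-special-privileges path, assuming the xorg-devel list address and a single patch sitting at the tip of a local branch:

    # Turn the latest commit into a mailable patch file...
    git format-patch -1 HEAD
    # ...and post it to the development list for review.
    git send-email --to=xorg-devel@lists.x.org 0001-*.patch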
He's not complaining about whitespace. Really, it's just checking to make sure that the patches have sufficient review. He's making sure that the patches that are posted merge cleanly, because if they don't merge cleanly, trying to do fix-ups manually as the release manager is almost never a good idea. I think we had a nice Linux kernel merge recently that completely broke the i915 driver by adding a three-second delay to suspend/resume, because somebody merged, the merge wasn't clean, somebody patched it up, and the patch-up was wrong. The other important thing is that the release manager must test the build after applying every patch or merging every branch, to make sure that at least the build doesn't break.

Now, one of the things the release manager is not expected to do is test the release thoroughly after every branch. The release manager is not expected to have every kind of video hardware and run every operating system. In particular, oddly, with the fundamental locus of X development being on Linux and the XFree86 driver model, the release manager is mostly building the XF86 back end. That's not necessarily a requirement, and in fact our stable release manager for the 1.8 tree is actually the Macintosh lead developer. That's working out pretty well. But by and large, as long as the build doesn't break after each patch, you can mostly recover. If the build breaks, then all kinds of tinderbox things get very upset. So we try not to do that.

So obviously the question is, why did we make this change? Why change something that seemed to be a fairly long-standing tradition? We were making releases. The code was apparently working. However, we didn't ever make our release dates. You can read articles on various external news sites about how releases were months or years late. The X server 1.4.1 minor release was, what, six months late or something? Thanks, Daniel. Nine months. It was awesome. The other great part was that git master was often unusable, which meant that our fine user base was unable to test the leading development stuff. We had all this fine development going into master and nobody could actually run it, because it would crash on everybody's machine. And in fact, people would merge stuff for the Windows branch or the Macintosh branch and the Linux branch wouldn't build, or vice versa. So it was just kind of a disaster.

The big problem, though, was that for large, major changes in the architecture of the server, there was very little discussion. I'll actually demonstrate some numbers about that in a little while. And of course, because master was being committed to by the lead developers, and that's where they were building and testing their stuff before having it cherry-picked over to the release branch, the release branch got almost no testing. Did Fedora even put it in Rawhide? Yeah, exactly. I'm sure it wasn't in Debian experimental. I don't think any distribution was pushing out the release branch for testing on a regular basis. So we weren't getting any testing of the release branch, and we couldn't get any testing of the master branch because it wouldn't even build. So we were having a lot of trouble with stability and the development process. You can look at this lovely chart, which shows how much each release slipped. Starting with 1.3, which slipped a few months. 1.4 came out pretty much on time. But 1.5 was, you know, almost a year late.
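A minimal sketch of that build-after-every-commit rule, assuming a plain make-based tree; the branch names are hypothetical:

    # Walk each commit a candidate branch would add and verify it builds,
    # stopping at the first breakage.
    for rev in $(git rev-list --reverse master..candidate); do
        git checkout --quiet "$rev"
        make -s || { echo "build broke at $rev"; break; }
    done
    git checkout --quiet master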
And I remember reading articles that were like, you know, "the mythical 1.5 X server release" or something. It was kind of embarrassing. 1.6 came out nearly on time, but 1.7, not so much. So, you know, we obviously had some release timing issues. Now, it's okay if you don't promise to release on a specific date. But unfortunately, the X server is kind of a key component in most Linux distributions, and one of the things Linux distributions like to be able to do is schedule what packages are going to go into their next release. They like to be able to count on each of those packages coming out in a predictable fashion. So we wanted to go to a more regular release schedule that actually happened on time. And you can see with the 1.8 and 1.9 releases: 1.8 slipped by four days, and 1.9 came out on the scheduled day. 1.8 slipped over a weekend; I don't remember what happened there. But yeah, sorry about that, four days. And 1.9 came out right on time. We're hoping to continue this.

I had originally thought that we'd want to do a tighter, shorter release schedule of three months. But when you look at the graphs and the following information, I don't know if we need to do that. Six months seems to be pretty good for the distributions. I did pull the 1.9 schedule back a little bit to align it with Ubuntu and Fedora and MeeGo, my corporate master, so we're happy when I align the release for that particular distribution. So we're now aligned pretty well: our release happens a couple of months before the distribution releases, which is when the distributions need the code. That seems to be working pretty well. If other people want to change when the release happens, we can negotiate that. But we do have a nice, steady... hey, we've done it twice. Maybe we can do it three times. It'll be great.

So for this talk, what I wanted to do was put together a bunch of experiments, collect a bunch of data, and kind of do a scientific experiment. My hypothesis was that the new X server development model would show increases in developer participation and improvement in our release schedule tracking, without impacting the speed of X server development. That was my hypothesis. So I went and collected a bunch of data. I used our notmuch mail tool to track email messages on the development list, and I used git to find out when, and what, commits were merged into the repository. And that was pretty fun.

It's important, of course, to remember the context of the various releases. In particular, the 1.5 through 1.7 period, which showed that huge variability in release dates, was also a tremendously active time in X server development. Mostly what was happening in that time was that huge amounts of code were getting removed. ajax removed half a million lines from the X server. Go ajax. I swear we should count removing code double to adding code in terms of quality of patches. That's awesome. And of course there were some significant new infrastructure additions: XI2 was added, the multi-pointer code was added, the generic event extension code was added, just a huge pile of changes. So 1.5 through 1.7 was an extremely active period, and we haven't seen that level of activity since then. It's important to remember that context when you start looking at the patch volume in the next couple of graphs.
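A rough sketch of the kind of mining involved, assuming the stock xorg-server release tags and a local notmuch index; the output file name is made up:

    # Commit timestamps since 1.2, numbered cumulatively for plotting.
    git log --reverse --format='%at' xorg-server-1.2.0..master |
        awk '{ print $1, NR }' > cumulative-commits.dat
    # Total list traffic, counted from the notmuch index.
    notmuch count to:xorg-devel@lists.x.org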
So here's the cumulative commit count since 1.2, which was about four years ago. You can see it's a pretty straight line, but you'll note that after 1.7 it kind of tapers off. That's a little worrisome. I do really like the graph between 1.8 and 1.9, though. If you look there, you're actually starting to see the more traditional shape that you want, with a bunch of commits being added early in the development cycle and then a slope-off as you do a stabilization period. We're seeing something similar after 1.9: a bunch of commits added in the integration phase, then slowly tapering off as we enter the stabilization phase. So things are actually doing pretty well. I'm liking that. With the 1.8 release we didn't see that particular development pattern; I'm hoping people are getting used to the new model and we'll start seeing it more in the future.

That graph was just one count per commit, so every commit pushed the line up by one. I wanted to describe what that graph was. Here we have lines and commits per day over the various releases. This was to try to normalize out the length of the development period, so the longer development periods didn't show artificially high numbers. It is kind of interesting. If you look at the lines added per day, lines removed per day, and commits per day, you can see that 1.5, 1.6, and 1.7, and even 1.4 to some extent, were very busy times, and 1.8 and 1.9 were not. So the question is, why is that happening? Are we crushing our developers with this onerous process? Are the commits getting better because they're smaller? I don't know. We're certainly getting a lot more review. This was done by diffing the entire release, from the beginning of the release process to the end. This isn't incremental additions over the release; this is the total amount of code changed for the release. So even if a patch in the 1.5 or 1.6 period was reviewed and modified several times, and the same code had a bunch of changes, those intermediate versions weren't counted. So if anything, that biases the numbers against the early development process.

Here are some interesting numbers over those releases: the aggregate amount of code changed per release, the same basic statistics just presented in stacked-bar form. You can see that in 1.5 and 1.6, whacking great chunks of the X server were thrown on the floor, never to be heard from again. And in 1.8 and 1.9... 1.8 actually saw more code added than removed, which is a little frightening, but 1.9 went back to our traditional pattern of deleting code from the X server. Eventually the X server will be perfect and be zero lines of code. Exactly. Exactly. Well, it's provably correct. You know it has no bugs. It's awesome. That'd be great.

Of course, 1.5 and 1.6 saw just aggressive code deletion. Two entire sets of dumb frame buffer rendering code were pulled out of the X server. Almost all of the kdrive back ends for PC chipsets were removed, so we don't have the kdrive Trident driver or the kdrive Mach64 driver; all of those were gone by this time. There was just no particular reason for them. Nobody was maintaining them, nobody was using them, and the Xorg drivers were far better for those chipsets and were getting a lot more active love, fortunately. Oh, the other thing. Ha ha ha. The best part. We removed Xprint. No sweeter patch was ever committed. Sorry that my notes are on slides.
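The whole-release diff described here can be approximated with something like the following, assuming the stock release tags as the endpoints of a development cycle:

    # Net lines added and removed across an entire release,
    # ignoring intermediate churn within the cycle.
    git diff --shortstat xorg-server-1.5.0..xorg-server-1.6.0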
I meant to put them on note cards, but operating OpenOffice is not my forte. Okay. So this graph shows how many patches we were getting per release, and how many of those patches were reviewed, or even tested or acknowledged, by another developer. This was done by looking at all of the commits and then looking for a Reviewed-by, Acked-by, or Tested-by tag in the commit message. You'll note that in 1.3 and 1.4 and those releases, not so much with the review. It was surprising to me to find eight patches in 1.7 that had been reviewed. I don't know who those reviewers were. They are clearly, you know, studly beyond measure, because they actually had to catch the person pushing to master before he managed to get the code in. Hey, put a tag in. In 1.8 and 1.9, obviously, reviews were required. Yeah, reviews being required doesn't always mean review tags get included. That's obviously largely to blame on the release manager, not me at this point, because a lot of the patches that aren't reviewed are for the Macintosh and PC back ends, which have separate subsystem maintainers, and frankly I trust them to do their jobs; if they don't want to review patches before they ask me to pull their changes, that's their call. I would like to have them review it. Good point. I don't know. I'll have to go back and look at my shell script. There weren't that many merge commits, actually. Most of those were fast-forwards, but yeah, I should check. Yeah, Adam's question was, does this also count merge commits? And I just don't know. I don't remember how I ran the script to figure it out.

So obviously, with our new process, we're actually getting people to look at code, or at least we're getting people to add Reviewed-by tags. I do know that for some major infrastructure work that went into the 1.9 release, both NVIDIA and I wanted to have major changes put into the server, and we were getting very little review from outside, so we agreed to review each other's code to get it into the server, which was kind of a nice tit-for-tat process. You scratch my back, I'll scratch yours. I'm hoping to see more of that. I understand that the kernel often works like that, where you offer to review somebody else's code in exchange for them reviewing yours.

Obviously, you need to find people who are competent at reviewing. So one of the release manager's tacit jobs is to look at Reviewed-by tags and ask, is that credible for this patch? You start to collect knowledge of who's reviewing patches and whether they're competent to review patches in a particular area, and you kind of hold back stuff that hasn't been reviewed by somebody who might actually understand what the change is doing. Also, there's the question: what about patches from the same company? If you have one committer from a company, and a peer at that same company reviewing the patches, is that okay? I don't care where they work. The question is, are they credible and do they do a good job of reviewing? So for instance, the Apple patches that have been reviewed were all reviewed by people inside Apple, of course; those are the people who understand the code and can make it work. I don't have any trouble with that. I think it's perfectly acceptable. In fact, some companies may have a policy of having code reviewed before it leaves their doors, and I would love to see that as a corporate policy. Yeah?
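A sketch of the tag counting described above, again assuming the stock release tags; whether merge commits should be included is exactly the wrinkle Adam's question raises:

    # Count commits in a release range whose messages carry a review-ish tag.
    git log --oneline --extended-regexp \
        --grep='^(Reviewed-by|Acked-by|Tested-by):' \
        xorg-server-1.8.0..xorg-server-1.9.0 | wc -l
    # Add --no-merges to leave merge commits out of the count.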
Something we see every now and then in the kernel, and I wonder if you see something similar, is that we get a patch from a random embedded company with about 25 sign-offs from people at that same company, and you have absolutely no idea whether it was actually reviewed, or whether it's the management chain putting sign-offs on, that sort of stuff. Well, I don't count sign-offs as review. I really look for the Reviewed-by tag. And again, what I'm looking for is people who have done reviews in the past or who have committed code in the past. So it really is the release manager's job to look at the Reviewed-by tag and the person doing the review and make sure it looks credible, right? You start to gather a history. I'm sure Linus does a lot of the same thing: oh, this was reviewed by somebody who's never worked in this area before, how could they possibly know? So there's a little more fuzz there; it's not totally automatic. There's that general-agreement rule, that if the patch is generally agreed to be a good thing, then it should go in. But yeah, obviously, if you just get some anonymous patch with no history from that company and no history from the reviewers, then it needs additional scrutiny. It's not going to be prevented from going in, but oftentimes if I see something like that, I'll personally review the patch. And I have spent a lot of time reviewing patches, which has been great. I mean, my day job no longer involves a lot of coding, so at least I get to participate in X development in some way now.

Here's a chart of the number of contributors, that is, the number of distinct email addresses. It's not the committer; it's the author line. That's right. So this is the number of authors of patches in the various X server releases. Obviously, we had... 1.3, I don't know quite what happened there, and I also don't know why gnuplot moved the numbers over. Yeah, I clearly have issues with that tool. 1.3, I don't understand why the number of contributors is so small. Because it's just one. Oh, OK. Yeah, one. OK. So it was just... that could have been me. OK. OK. Thank you. And then 1.4 through 1.7 had 80 to 100 contributors, and then there's 1.8 and 1.9. Oh yeah, thanks gnuplot. Yeah. OK. That makes me feel a little better. Yeah, I obviously should have tuned the graphs a little better. I hate non-zero-based bar charts; that's just inappropriate. I apologize for the presentation of the data.

So this is a little worrisome. We need to make sure we aren't excluding people. Again, obviously the 1.8 and 1.9 releases were much smaller, so we should expect fewer patches, but I'm not quite sure why there would be fewer contributors, because we should still be seeing the one or two patches coming in from external people who have a particular bug to fix. Yeah, Tim? Have I tried dividing this by days? When you divide it by days, it actually normalizes pretty nicely. But I'm not quite sure I understand why the number of contributors would scale with the number of days. Although maybe, you know. Oh, so X developers were moving jobs, and so we counted them twice under different addresses? Yeah, that could be true. I obviously didn't do any cleanup like Greg K-H and Jon do with the kernel contributors list, to try to merge email addresses for the same people. Are you taking questions? Oh, sure.
Is there a chance that some of this is down to companies contributing through a single channel? Or, with the way you're pulling the commits from git and the information you have, can you tell that within a given organization there are multiple contributors? I am using the author tag in git. So even if you have somebody integrating patches from multiple people, unless they're lying in their git merging, we shouldn't see a change in this. So this may need some more research to figure out why exactly it's happening. It'd be nice to find out which addresses weren't included in later releases and ask them: did they not have anything to contribute, or what? And perhaps the other side of this is that we're seeing that level of consolidation in the graphics space: you're not seeing contributions from all those random developers of older graphics cards anymore, because that's stable. It's important to remember this is just the core server, not the video drivers. We haven't changed the development process in the drivers; those are still managed by individual teams. Yeah, it may be that the X server is just getting kind of done and needs fewer changes. Yeah, it would be really nice. Yeah.

Now, this is my favorite chart, because it shows the success that we have had with the change in development process. We hoped to increase discussion by requiring that patches appear on the email list and get review before they get committed into the tree. Our hopes were realized way beyond that. Not only did we see a huge, huge increase in the number of email messages and threads about patches going into the tree, but the other, non-patch-related email skyrocketed as well. This is, you know, a success disaster. Fortunately, I use the notmuch mail program, so lots of email is not a horrible thing for me, but look at the email we got. Yes. Yeah. This is all mail, all mail to xorg-devel. When did the xorg list kind of die down? Was it...? Okay. Okay. Yeah, I tried to get both. Just, you know, a tremendous increase in mail, especially mail looking at patches. So a very successful increase in the volume of email.

So the research method that I tried to apply was to figure out what graphs I was going to generate before I collected any of the data, because I didn't want to pick the results I was going to present based on the results I had already seen. Obviously, I have a strong interest in the new development model. I think it's working well for me, but I wanted to collect data without biasing it in any particular way. And then I presented the data that I collected without any bias, or tried to.

So is our new development model a success? Obviously, we've had a dramatic reduction in the amount of code going into the X server. Is that because of the new development model, or is it just because the X server doesn't need to change as much? Or is everybody just busy reading the emails? Yeah, busy reading the emails. What about the code that's there? There's half as much code there as there used to be, so you might expect a dramatic reduction in the number of patches going in just from that. I could have parsed out which patches affected the code that was deleted in the earlier releases and figured that out. Then the question is, have I tried to factor in kernel development? Again, this is just the core of the X server, not the video drivers. So there's no impact here from the shift of code from the drivers to the kernel.
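A sketch of the author counting described here, using git's author field rather than the committer; the release tags are assumptions:

    # Distinct patch authors in a release range (author line, not committer).
    git log --format='%ae' xorg-server-1.8.0..xorg-server-1.9.0 | sort -u | wc -l
    # Commits per author, showing how many people contribute just a patch or two.
    git shortlog -ns xorg-server-1.8.0..xorg-server-1.9.0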
There's been no change in the core of the X server to support the new kernel device driver model, so this shouldn't be affected by that at all. We're hitting releases on time. We're getting more discussion about patches. I'm pretty happy with it. Releases seem pretty stable. The distributions aren't screaming at me for missing release dates anymore, so that seems like a feature. I don't know. I think we certainly met Peter's original objectives, and I hope we're able to keep going. So with that, I'd like to open it up for questions and comments. Here we go, come up.

The first question is about reviews in general. First part: are there specific criteria? Did you spell out what you are and are not looking for in reviews, what types of things to try to catch, or did you just let that happen between the individuals? Peter, did you have some specific requirements for review? Yeah, I mean, we tried to have people understand what the Linux review process was. We wanted to have the code reviewed for style and correctness, and for whether the patch was a good idea. So we wanted to make sure that the code was something that should be in the X server, as well as being correct code. So, as Peter says, we have an official statement of what "reviewed" means.

And then, one thing that happens at various companies that require reviews: occasionally you might get an individual who puts code out, has it reviewed, but doesn't heed anything. That is, they'll put the reviewer's name on it but not incorporate any of the feedback, which doesn't sound like something you have to deal with in this circumstance. I haven't seen that. I think the reviewer would pretty rapidly discover that that had happened. Yeah? I can think of one or two cases where that kind of thing has happened, where somebody has put a patch on the list, gotten some review, re-sent it to the list making one of the ten changes that we suggested, and eventually what happened with at least one of these patches in particular was that we just studiously ignored that person, and somebody re-wrote it correctly and put their name on it.

But I mean, most of the review is, you know: is there an obvious allocation error here? Is this a use-after-free? Basic C kind of stuff. Is this the kind of thing that's easy to get wrong? Because a lot of the higher-level stuff you had to have already known in order to write the patch at all. That's my experience, anyway. Yeah. There's another thing to it: if you come in as an outsider and you send a patch to the list, it usually still goes through someone else. Anything input-related usually goes through me, so even though I might not always have reviewed it, I do look at what I merge before I send it to Keith to pull. So, you know, there's more review going on than is historically visible. Well, oftentimes a Signed-off-by line from a subsystem maintainer also implies some level of review. And that level of review is going to vary based on the person who authored the patch. If both the author and the reviewer are well known to develop correct and useful code, then their code may get less review from the subsystem maintainer than code from somebody new to the community. Which I think is appropriate. I don't think that presents a high bar or any kind of bias towards older developers.
But certainly, as you get more of a history with the community, you can go either up or down in terms of how much review your patches get. There are some contributors in our community whose patches still get heavy review even though they've been contributing for years and years, just because we've had issues with their patches in the past. Yep. You mentioned that the release maintainer has to make sure that what's in the main branch, the main release, actually builds; do you have anything around automating that process, to trigger builds and sanity checks? I certainly type make after every merge or commit on my machine. Fortunately, the X server is now small enough that that doesn't take very long. And we also have a tinderbox running on multiple architectures that checks the build more thoroughly, across a bunch of different configurations, so breakage there is caught even more assiduously. So yeah, we have both: the maintainer is doing a build, and the tinderbox is catching things. Oh, do we have the tinderbox running the X Test Suite as well? Yeah, I didn't think so. And Tim has a question as well, right behind you.

Have you tried looking at the number of commits each unique contributor makes? Because that would give you a better idea of the number of people who are just contributing, say, one or two patches. I didn't do that, because I didn't think I would find anything interesting. I should have, just to make sure it was the usual distribution that we see from projects of this nature, where most of the patches are by a few people and a few patches are by a lot of people. I expect to see the same tail-off. It would also tell you whether there are new contributors, right? Because you can see the names and compare between releases. Yeah. You know, one of the nice things about having open mailing lists and open archives at this point is that you can collect all kinds of data. I collected a bunch of stuff that I thought would be interesting, but I didn't do that one in particular because I kind of knew what it would show, probably the usual distribution from any open source project. But no, I had not.

Have you tried anything to encourage more test cases to come along with new code or modified code? Yeah, that would be lovely. We have a test suite that tests the old core protocol extensively, which applications don't use at all. And we have all these new extensions that applications use extensively and which are tested not at all. The problem with the test suite is that it's really, really, really hard to add new test cases to the old test suite, and nobody has bothered to write a new test suite. So... Oh, yeah. The old core test suite doesn't crash the server anymore, at least, thank goodness. Awesome. Such a... Yes, the other thing Peter's been adding is internal checks, to take some of the input code and test it in situ without having to run it on a machine. That's been very helpful for checking the correctness of the input subsystem. Yeah, I've been writing a lot of storage engines underneath the database to do different things to test the upper layers. Yeah, exactly. Time-consuming, but worth it once you refactor the code to not make your eyes bleed. One of the problems, of course, is that testing the X server takes video cards, and it takes input devices, and it takes every combination. Yeah.
And so you really need a giant pile of machines, and a lot of the correctness is whether it's actually displaying the right output on the screen. So it's actually fairly difficult to automatically test a graphics system. For my Intel video driver work, I actually have a QA team who sits and runs applications. Tedious, you know, but when you pay people, you can get them to do that. Has anyone tried digital capture of the output and...? That would be awesome.

So, who's our next release manager? I'm willing to keep doing it. As I said, one of the main concerns was that the release manager would try to become, you know, the Linus of X. And it's really not the same job at all. Linus really is much more of a style manager and a content manager than the X release manager is. The X release manager is all about seeing that there's consensus and mechanically merging stuff together. It's much more of a secretarial role than a managerial role. So, you know, I keep offering to do it because I frankly enjoy it, it gives me some visibility into what's going on, and I think I have a longer history with the code than most people. So when I need to review code that nobody else can review, I'm kind of the reviewer of last resort, and at least there's somebody around who's going to make sure that all the code is reviewed before it's put into the server. So if people aren't frightened by having the continual, you know, Keith Packard X server release, I'm keen to keep doing it.

How many different people send you pull requests? How many? That's a good question. That's actually pretty easy to tell: I can see it just by looking at how many remotes I have in my X server repository right now. I wasn't volunteering for the release manager job; I was happy to let you keep having it. I was just wondering if it was a burnout thing for you, the way it had been for me in, like, 1.5. Okay, so this is a list of people who've ever sent me a pull request. 34 people. Oh, sorry. Yeah. I don't know where this screen is. Yeah. So I just did git remote. Origin doesn't count. Yeah, origin doesn't count. But you can see how many people have sent me pull requests in the last couple of years. It's quite a long list, apparently 34 different people. And oftentimes that's just for one or two pulls, not a long history of pulls. But, you know, frankly, I'm willing to merge from pulls or patches; it doesn't matter to me. At least one of those is not a human. "transform" is not a person, I'm sure. I think that was... but most of those, yeah, those look like names I recognize. I think it's a reasonable approximation of the list of people who have asked for pulls. Okay. That's it, is it? Thank you very much, Keith. I do have, and you already have one of these. So,