Before I start, one announcement: the desktop search talk will be after this talk in the KDE and GNOME room. It has moved from the 11 o'clock slot, so if you're interested in desktop search and that sort of stuff, it's in the KDE and GNOME room after this, OK? The interesting thing to talk about is how we get a stable kernel, how I try to get a stable kernel from releases like this one. What happened with the 2.6 kernel is that Linus decided, rather than having a stable kernel series with only small changes, he was going to keep merging large numbers of changes each release, and Andrew Morton would maintain a separate kernel tree where new ideas are tested. So things go into Andrew Morton's -mm kernel tree; some things go in and get fixed up, some things come back out because they don't work, and other things go through into the stable kernel. So each 2.6 kernel, unlike 2.4, is merging something like 10 megabytes of code per release. Now, everyone here who has ever written 10 megabytes of code will know that 10 megabytes of code always has bugs in it; 10 lines of code usually has bugs in it. Linus starts off by merging the big changes, often from Andrew Morton's kernel tree. Then, in order to drag more people into testing it, he calls it a release candidate, and that gets smaller changes added, or things which you don't want to change except where necessary to make fixes. And after four or five release candidates, Linus releases 2.6.8, 2.6.9, 2.6.10, whatever, and this is where the problem starts. Because lots of people download the new 2.6 kernel, and with a large block of changes we know what's going to happen: most people don't test the release candidates or development code. That's what other people are for; you test your software on other people's data. The release candidates will pick out a lot of the really stupid bugs, and the things that were just plain bad ideas, but that's not the worst of it.
And Linus is very good at figuring out, oh, we should go in this direction, this is the right long-term solution for a problem. He is terrible, and I don't think he would even disagree with that statement, at all the attention to detail that makes the difference between creating a piece of really great software and creating a piece of really reliable software. All the tedious cross-checking, finding out which little details have been missed, beta testing: not his particular hobby. So as soon as the release comes out, within two or three days the mailing list will have large numbers of emails, most of which are about the same small number of bugs. A release will go out, and two or three days after that there will be 150 emails saying, I have a VIA such-and-such chipset and my computer no longer works, or, I have an iPod Mini, and every time I plug it into my machine my iPod Mini crashes. (Well, we'll blame that one on the Apple software.) So these things pop up. Not all of them pop up because of bugs in the kernel; some of them, as in the iPod example, pop up because a new piece of hardware comes on the market and we do something which triggers a bug in the hardware. And people keep finding security holes. The good thing is that most of the security holes nowadays are found by verification tools; they are not found because somebody has rooted a machine. But as soon as a security hole is found, and particularly once it becomes public, you need to fix it. So we need something between the releases, something to actually pick up these critical fixes and the security stuff. You have to do this work anyway. We do it for Fedora; a lot of the -ac kernel work really overlaps what we do with Fedora for the users of Fedora. SuSE do it for their customers, and Debian do the same. Everybody is doing the same work. The early problems you get, the ones you see particularly on the mailing lists, are normally very easy to fix.
Because as soon as the release comes out, all of these bug reports start to appear, at which point the author of the change that broke things responds almost immediately, and almost every time it is, whoops, that would be my mistake, and posts a fix. Ten minutes later the fix is in the development tree. But it is only in the development tree. These early things are often easy, because you take the base tree, you take 2.6.10, you look at the first 2.6.11 development release, and you say, well, apply that patch, apply that patch. You put a small number of changes, a very small number of changes, on the standard kernel, and all of a sudden you have something that is a lot more stable, a lot more reliable, because the bugs everybody hits almost always get found and fixed very early. So those are relatively easy to deal with. There are some problems where you think, this is perfect, move this fix into my stable kernel tree, and the fix is followed four hours later by a week-long discussion about the correct way to fix the problem. And some problems are genuinely hard to fix. During the 2.6.10 release, somebody noticed that certain file systems do strange things on 64-bit machines when you do read or write system calls and the length is longer than 2 gigabytes. Yes, people occasionally do this. One or two file systems had used int where they wanted long. It never mattered on a 32-bit machine. And fixing that isn't a case of applying a small patch; there is significant work to do in the kernel to do the job properly. For some of those you cheat: you apply a very minimal fix to the stable tree, in the happy knowledge that someone will fix it properly before the next release. You also get messages in bugzilla along the lines of, I've just found this huge, great security hole, as if a public bug reporting system were clearly the right place to put it.
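The int-versus-long problem described above is easy to show in miniature. This is a hypothetical user-space sketch, not the actual file system code: storing a 64-bit byte count in an int silently corrupts any request over 2 gigabytes, and the damage only ever shows on a 64-bit machine.

```c
#include <stdint.h>

/* Hypothetical illustration of the truncation bug described above.
 * The real case was 'long' vs 'int' on LP64 Linux; 'long long' keeps
 * this sketch 64-bit even when built on a 32-bit host. */
long long broken_count(long long count)
{
    int len = (int)count;   /* int stays 32 bits: 3GB wraps negative */
    return len;
}

long long fixed_count(long long count)
{
    long long len = count;  /* keep the full 64-bit width */
    return len;
}
```

A 3GB request comes back negative from the broken version, which is exactly the sort of thing that was invisible for years on 32-bit machines where such a count could never occur.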
Several bugzillas now, when you mark a bug as security, automatically mark it as private as well. So some of these reports come from people just reading code. A lot of things in bugzilla actually come from students, because undergraduates get set exercises like, read the following part of the Linux kernel and explain how it works. And they read it, and they decide it doesn't. Presumably the other 20 or 25 people in the class all thought it did, but some of them dig in: well, it does this, it does that, oh, that's interesting. Some of it comes from people who are looking specifically for security errors. There are people who are very interested in security, and they may see a security report against one operating system or another program and go looking for the same pattern. For example, somebody was looking at graphics programs and noticed there were a lot of graphics programs which took a user input value, multiplied it by a constant, and then did a memory allocation and copied data into it. If the input claims to be 2 gigabytes long and you multiply it by two, the 32-bit size wraps around to almost nothing, and then the program copies 4 gigabytes of data into that tiny buffer. Someone saw that and thought, that's interesting. And then you start to get reports from people saying, I've been looking through the kernel for bugs similar to this, I found this, and a long list of things starts to appear. Because with certain security bugs, everybody makes the same mistake, and it is only when somebody first really thinks about it that you see the problem is in every single piece of code. I'm also on a thing called vendor-sec, which is the multi-vendor security list. It is a way of privately discussing security holes between vendors so that we can coordinate releases, so that everybody has an up-to-date kernel before we tell the rest of the world, here is an interesting way of completely breaking it. That matters less with minor security holes or local security holes.
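The multiply-then-allocate pattern described above can be sketched like this; the function names and the factor of two are illustrative, not taken from any real program.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sketch of the overflow described above.
 * 'count' comes from untrusted input, e.g. an image header. */
void *broken_alloc(uint32_t count)
{
    /* 0x80000001 * 2 wraps to 2 in 32-bit arithmetic, so we hand
     * back a 2-byte buffer while the caller copies ~4GB into it. */
    return malloc(count * 2u);
}

void *safe_alloc(uint32_t count)
{
    if (count > UINT32_MAX / 2u)   /* reject sizes that would wrap */
        return NULL;
    return malloc((size_t)count * 2u);
}
```

The fix is the same everywhere the pattern appears: check the multiplication cannot wrap before doing it, which is why one report tends to turn into a long list of near-identical patches.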
But if, for example, you've got a remote way of crashing machines, then you want to fix that before you tell the bad guys about it, because the chances are they haven't worked it out for themselves yet. Vendor-sec is also a source of security fixes. Sometimes people say, I found a security hole. Sometimes you get private emails from people saying, I've been looking at this code and I'm trying to understand it, and the way it doesn't check that worries me, and quite frequently those are worth following up. There is another source of security reports we're starting to see now: there are one or two people starting up companies specializing in verification tools, people like Coverity, and they have been doing a lot of testing on Linux, because the source code is out there. So we get reports from people like that saying, our verification tool says this is wrong. There is also a tool called Sparse, which is something Linus and others have been working on. Security errors are mostly easy to fix. Most of the security errors which turn up are of the form, you don't check that something. Or, very often, it is the error cases: if a particular error occurs, you run down an error path that was never tested. So you find code which gets some memory, and if it didn't get the memory, it returns without giving the lock back or doing all the other cleanup it should have done. And nobody notices, because that case rarely happens in real life, until somebody says, well, I can force this to happen deliberately, and now it is a security issue. It is good that most of these are easy to fix, because obviously security fixes are things you want to get into a stable kernel very fast. So when I'm doing -ac kernels, I try to make sure there is a new -ac kernel as soon as we are sure about the fix for a security hole, so that it is out there for people who want to run it. For Linus it is different: if you're working on a development kernel and some piece of the kernel is fundamentally wrong, you throw it away and write a replacement.
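The forgotten-error-path pattern described above looks roughly like this in user-space terms; the real kernel cases involve kmalloc and spinlocks, so this is only an illustrative sketch with made-up function names.

```c
#include <pthread.h>
#include <stdlib.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Broken: the rarely-exercised failure branch returns without
 * dropping the lock, so anyone who can force the allocation to
 * fail can deadlock every later caller. */
int broken_op(size_t n)
{
    pthread_mutex_lock(&lock);
    void *buf = malloc(n);
    if (!buf)
        return -1;              /* BUG: lock is still held */
    /* ... do work with buf ... */
    free(buf);
    pthread_mutex_unlock(&lock);
    return 0;
}

/* Fixed: every exit path releases the lock. */
int fixed_op(size_t n)
{
    pthread_mutex_lock(&lock);
    void *buf = malloc(n);
    if (!buf) {
        pthread_mutex_unlock(&lock);
        return -1;
    }
    free(buf);
    pthread_mutex_unlock(&lock);
    return 0;
}
```

In normal testing the allocation never fails, so both versions appear to behave identically; the bug only surfaces when someone drives the machine out of memory on purpose.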
That is not something you want to do if you're trying to create a stable kernel. At the same time, you have to fix these things. Do you wait for the future rewrite, or do you do a small fix? What I've tended to do to keep a stable kernel is put small, sometimes really, really horrible fixes into the -ac kernel. And I can do that because I know that by the next release I won't need them. Whereas if you're doing the real development of the kernel, Linus all the time has to say, if I take this horrible fix, I will have to look after it for two or three years, or until someone is brave enough to rewrite it. So he is very keen to have maintainable code, whereas what I need for the stable -ac tree is code that works. You also have to preserve user-space behavior, because you don't want to fix a bug only to find you've broken half a dozen user applications. We had one of those recently: in 2.6.11, a change broke a couple of applications, and that change has now been backed out, because why is a patch in the kernel if it breaks stuff? Occasionally, though, we do have to break applications for security reasons. A lot of people got upset about CD burning in 2.6.8 and 2.6.9, because we changed the way certain things worked, and the CD burning applications had to be fixed to work properly. On the other hand, the bug we fixed was one where anybody who could burn a CD was also allowed to do things like flash the firmware on the drive. And while quite a few people thought that undergraduate students should be allowed to write CDs, very few thought undergraduate students should be allowed to reprogram the CD drive and turn it into junk. The next question you always have to ask is, does the error actually matter? A lot of the error reports, and things that get fixed in the kernel, are standards compliance issues.
So someone will fix a problem where the kernel returns the wrong error code if you are exactly at the end of the file, you write two gigabytes of data to the file, and the disk is full, and they will proceed to quote line and paragraph of the POSIX standard. All of those kinds of things I don't put into the stable kernel, because if it has been like that for ten years and no one has noticed, it is not a problem. Then there are the bugs which are root-only. A lot of bugs are in seldom-used code or in system configuration code, and you can just say: if you're the super-user, you can crash your machine because of this bug. Well, if I'm the super-user, I can type poweroff and the same thing happens. That is another kind of bug you can avoid in a stable tree. On the other side you get the harmless bugs. Some of the 2.6 kernels sometimes reported wrong values for the amount of free space on NFS partitions, and I didn't put that change into the -ac stable kernel, because the change had only been tested together with the other NFS changes, and it didn't really matter that occasionally you got a wrong number from df. It is a bug, but it is not serious. And every little thing you fix, all of those changes, can have surprising effects somewhere else. Certain parts of the kernel are really bad for that. Particularly in 2.4 there was a problem that every time you fixed the virtual memory code to solve one problem, you created another: you fix the virtual memory code so that an Oracle workload runs properly, and then Quake and your whole desktop are incredibly slow. And over time it gets harder. As Linus keeps adding changes, you get to the point where there are some 7 megabytes of difference between the Linus kernel you're applying fixes to and the Linus kernel everybody else is working on. Think about that order of magnitude: the difference between the two kernels is itself the size of a substantial piece of software.
You're now a long way from the development tree, and when you look at a change, whether that change works depends on a vast number of other changes in the kernel; in fact sometimes you even have to rewrite the change. The good thing is that, as time goes on, all of the common bugs have already been hit and fixed. That doesn't work for security, because security bugs just turn up when they feel like it. But you can often avoid applying changes later on, because if very few people hit a bug, it becomes more risky to fix the bug than to leave it as it is: you know what the bug is, and it is not too serious, but you don't know what the fix will do. You know what the fix is supposed to do, and you can test that it does what it is supposed to do, but you have no idea what the side effects may be, and those can be very, very surprising. You could keep trying to backport stuff for a very long time, but more than a release back it gets really, really hard, and you start to need teams of engineers, big test environments, simulation setups and all sorts of fancy stuff. So for the -ac kernel, as soon as Linus produces a new kernel I start working on that one and drop support for the old one. It is just too hard to keep, say, 2.6.9 going for three or four releases. The people who do that are the people doing the vendor kernels, the SuSE and Red Hat kernels, and it takes them teams of engineers to do that work. There is only one of me, and my main test suite is BZFlag, so it is a little less thorough than some people's. Mind you, as a test suite it tells me the graphics are working, the 3D is working, the networking is working, the file system is working and the rest of the program stack is working, so it is a surprisingly complete test suite. The other thing I'm trying to do is remove changes: as soon as a fix goes into Linus's tree, make sure it is tagged as being in Linus's tree.
Then with the next Linus release it is another patch you can get rid of; you don't have to keep forward-porting it any longer than necessary. In the ideal world, which we are slightly far from at the moment, every single -ac series would start from blank when you moved to a new kernel, because everything would have been fixed in the previous kernel. Some things take too long to fix, so there are hack fixes that have carried on for a while, but that is where we should be. There is a mailing list, which only a few people seem to be subscribed to, which carries every single changeset that is applied to the kernel. It is thrilling reading. It is quite possibly the least exciting mailing list I have, but it is very useful, because you can then take each changeset and classify it. So I drop every changeset into Evolution, because it is a mail client with a search function that works, and as I go through I triage and tag. You look for changes which are just cleanups; a lot of kernel patches are just people tidying up code and making it more maintainable, but those don't matter for stability. Then you try to filter out the things that are driver updates: they are not fixes, they are someone saying it is now faster, it now supports more hardware. Those are things which are not bug fixes, so you can drop them, and what remains is the fixes. You have to read carefully, because Linus has this bad habit of fixing security holes quietly, and he is still under the mistaken assumption that this is a good idea. Unfortunately, there are quite a few people who read all the kernel patches, and several of them read all the kernel patches precisely in order to look for security holes. There is an effort going on to have a proper Linux security list, so that when Linus fixes a security hole he can actually tell the rest of us, the people who need to know in the short term. From all this you end up with a collection of patches that look interesting, and you then try to find any excuse to get that number down as low as you possibly can.
The other tool I have is a program to look at the sequence of patches that have touched a piece of code. So I can look at a patch and say, that looks like it is an important bug fix, but also ask the question: what other changesets were applied before this one in Linus's tree? What other changes has it been tested with, or might it rely on? And sometimes you get a long series of changes, and you have to go back and work through them and try to understand, do any of the other changes matter, and how do they connect? Prioritizing problem reports is relatively easy, because the user base of the 2.6 kernels is so large that you can literally just count the number of emails complaining about each bug to determine which are the really important ones. In the day after a release, you just have to look at the subject lines to see where the main problems actually are. The second thing to try to identify for fixing is the problems that only a few people are reporting but whose effects are severe. There are two kinds of problems like that. There is the kind which causes silent failure: it doesn't work, but you don't notice, which is particularly bad, because it looks like it is working when it is not. And of course anything which corrupts data, because you don't want to find that the thesis on your disk now partially consists of random numbers, or that your database has interesting random things in it. That really, really does upset people, so when there is a corruption problem, I try to fix it as fast as possible. Those reports may be rare, they may only turn up with one particular combination of machine and device, but they are really, really important, and they may be affecting a lot of people who have those devices. Then there are the genuinely easy fixes. Sometimes you will see changes where an existing driver has had new PCI identifiers added; in other words, a new piece of hardware has come out which works precisely the same way as the old hardware. Those kinds of fixes do get applied, because one of the great things about a change like that is you know exactly what it can and cannot break.
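The count-the-subject-lines triage mentioned above is simple enough to sketch. This is a toy illustration, not the actual tooling, and the subject strings are invented: tally how many mails share each subject, and the most-reported bug floats to the top.

```c
#include <string.h>

#define MAX_SUBJECTS 64

struct tally {
    const char *subject;
    int count;
};

/* Count how many of 'n' mail subjects match each distinct subject.
 * Returns the number of distinct subjects written into 'out'. */
int tally_subjects(const char **subjects, int n, struct tally *out)
{
    int distinct = 0;
    for (int i = 0; i < n; i++) {
        int j;
        for (j = 0; j < distinct; j++) {
            if (strcmp(out[j].subject, subjects[i]) == 0) {
                out[j].count++;      /* seen this complaint before */
                break;
            }
        }
        if (j == distinct && distinct < MAX_SUBJECTS)
            out[distinct++] = (struct tally){ subjects[i], 1 };
    }
    return distinct;
}
```

Run over the day-after-release mailbox, whichever subject has the biggest count is the bug to fix first; no cleverer analysis is needed when 150 people report the same chipset hang.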
You know it can't break anybody's system unless they have the new piece of hardware. The approach is: start with the most critical bug fixes. So if you look at the first -ac release on a new kernel, the goal is to fix the critical things, plus any security hole that has come up immediately, which is unusual. And to try to avoid anything where you get sets of changes which interact, because you always want a release which is debuggable. So if there are two complex sets of changes needed to the NFS code, you want to do the more critical one in the first release and the second one in the second release. That way, if one of them causes a problem, you will know which one broke it and where the problem is. You also want to spread fixes out, again for debugging. If you have got lots of fixes which aren't urgent, you can slip them in as individual patches over several releases, rather than taking all of the non-urgent ones in one gigantic patch file which you then can't debug. And if there is a security fix, it is a bad idea to mix security fixes with other things that you are less sure about, because you never want users in a situation where they have to choose between a kernel with a security hole and a kernel that doesn't work. You always want there to be a kernel without the security hole that works; it is really, really important that one exists at all times. So when a security hole gets fixed, sometimes I will have another patch half-prepared, and I just throw away all of the changes so far, put in only the security fix, release that, and then go back to what I was working on. As for tools: Evolution, simply because of the search function (I have never done a detailed analysis of tools), plus diff, patch, the usual things. I don't use BitKeeper, for licensing reasons. I don't use tools like Quilt, which I probably should use, because I just don't have time to play with them at the moment. In terms of goals, this is where we are trying to go: fix the bugs that are at the top of users' minds, the immediate, Linus, why did you release this?
The ones that are just out there biting people, the ones where the relevant developer is hiding under the table going, why me. And defer high-risk fixes: when you see, straight away after the release, that there is a minor problem, it is possibly not the minor problem you fix immediately; the minor problem might be one whose fix is complicated. Then, over time, what you tend to find is that there is an initial rush of fixes where people say, oh, it doesn't work on this hardware, you've broken this part of the SCSI layer, my IDE does something horrible, all those things. That quietens down, and you are left with the less serious, less common bugs. You can then start to put in the higher-risk fixes and test them. Sometimes I put out -ac patches and say, this is a test -ac patch; unless you have got the following problem, you probably don't need to try it. And enough people will test it to be reasonably confident, and so on. And never mix security fixes and risky fixes; that really, really upsets people. Try to make sure there is always an answer to the question, what stable kernel should I be running? People need to be able to say, I need the right kernel, which is it, and get a definitive answer. So the job is to track all these fixes, persuade Linus to fix the bugs, and also, once the -ac tree is basically stable, try not to release any more, because the more sets of changes you make over time, the harder it gets to be sure those changes are correct and well tested, and the fewer people benefit. It is sort of tricky, because the majority of the users will say, it is working for me, that's great, but if you are the one person who continually hits the bug, then obviously you are unhappy to be told, well, we don't think it is in everyone else's interest that we solve your problem. The way out is to at least be able to reply and say, I'm not going to put this in the tree, but here is probably the correct patch; apply it to your own kernel, and you take the risk.
We get obscure architecture bugs, and in the interest of sanity I don't try to fix the mainframes, the IA64 platform and the SPARCs in my tree, because I figure that the SPARC users and the IA64 users are probably capable of maintaining their own kernel, and most of the mainframe people you talk to won't run anything IBM hasn't personally signed off in writing. There are actually now a few mainframe hobbyist people, which is interesting; there was somebody trying to build a mainframe Gentoo at one point, which I just can't imagine, because it is not a very fast build machine: building OpenOffice on an S/390 is not nice. But these are such small communities that the fixes for them, except where they overlap the common code, are again not really interesting for the majority of users. We also have one or two areas which come into the fix-one-thing, break-something-else category. The virtual memory system is the most notorious of these: every time you fix a case where the virtual memory system misbehaves for somebody, you will break it for somebody else. The code is getting better, but it is a long and slow process, and stable kernels are not the place to play this game. Essentially the virtual memory system is a chaotic system, which means that any small change you make will have random, bizarre and unpredictable results that you cannot model or analyze. People say, but this fix is obvious; the virtual memory system has been accounting this wrongly for nine months; it is an obvious and clear fix. The problem is that the rest of the virtual memory system has been tuned to the fact that this number is wrong. Somewhere else there is code which seems to work best if you multiply this value by two and a half, and nobody realizes that the reason it works best multiplied by two and a half is that the accounting is wrong in the first place. Understanding all of those dependencies is really, really difficult, so virtual memory is one of the high-risk areas. Another of the high-risk areas is things like the low-level disk device drivers, where I try
very, very hard, when I'm doing patches, not to touch the low-level disk code. If somebody changes the tuning algorithm for an IDE driver, you don't want that in a stable kernel, because it is one of those things that will work perfectly for everybody, and then a week later somebody will say, it doesn't work on my Maxtor drive, and you look at the code, and the code is right, but it breaks anyway. Those kinds of things, where users can lose data, are just too risky in most cases to fix, so we really, really try to avoid them. There are one or two other areas where you fix one bug and you break another. Some of the low-level locking code can be like that, where you have to wait until things settle down in the development tree. When someone fixes the signal handling, it normally means there will be about ten patches over a period of a week before it works again; it is very subtle, very complex code. You learn to say, I don't think that is the whole story; you have to wait for the whole story before you know what to backport. What I'm trying to do is have a stable, conservative tree. The -ac tree is not the right long-term answer to this, because there is only one of me, to start with. There really should be three or four maintainers reviewing the lists and patches, partly because with several people looking at each change there is a much higher chance that one of them will go, hang on, that is wrong, there is a side effect, and partly because people go on holiday or off to conferences. You don't want the stable kernel stopped because someone is at a conference, or to be unable to get your security fixes because the security maintainer is on holiday; the bad guys don't take holidays. We badly need a more formal process for this, and I'm still trying to kick people into creating one. We also really need a better security policy for both the vendors and the rest of us, so that the vendors can find out, so that the people who need to know find out, and so
that the things which can't yet be released to the public are handled in a more organized fashion. Because the truth of the matter is that the bad guys do read every one of Linus's patches. I also read the Linus patches, so I pity them the effort, but they will pick out the security fixes that Linus and other people think are quiet fixes nobody will notice, partly because security fixes have a definite look to them. You also just use common sense. There are certain people who mostly contribute security fixes, so you look through the changesets and one will have some innocuous-looking comment like, correct handling of the sign bit on the block device, and it is from one of those people, and you think: is this somebody who only commits security fixes? Then this is a security fix. You just can't hide this kind of thing, though some people try to. Again, ultimately I would still like to get to the point where we have 2.6.9, 2.6.9.1, 2.6.9.2, 2.6.9.3, a proper process for this. In the meantime I can carry on doing it myself; we need it for Fedora anyway, so for Red Hat it makes sense to do it, and it helps the other vendors as well, because by having a public patch the vendors end up working together: we work with people in the different distributions who are doing the same thing and say, oh, we need to fix this problem, we need to fix this problem. That is really the end of the talk part; I'll leave at least some time for questions. Do we have any questions? [Question from the audience] Sorry, let me repeat that so everyone can hear it. OK: if my vendor is shipping, say, a 2.6.8.7.9, whose problem is it when that kernel breaks? I think essentially what vendors do with the kernel is not my problem, and it should be even less Linus's problem. The thing is, though, it should be somebody's problem, and the question is how you organize that.
There is a lot of review of the changes. [Question about the limits of the code review] There is a lot of review of the patches that go into Linus's tree. There is not enough review of some of the fixes I am doing to try to fix things up. They get some review, because the various distribution people who use the -ac patches go through them, or use them as a checklist to make sure they have also got the fixes in, and I do get comments from them. They also get submitted for use inside Red Hat, and within Red Hat people will review them as well. So I put changes in, and I get a mail from David Miller that starts with various swear words and then explains why things are wrong. So they do get reviewed to an extent, but not enough. In terms of the kernel in general, one thing we have found with things like the code coverage work is that there are paths which never get exercised by humans but get examined very well by verification tools, and that is really important, because you can run a verification tool 24 hours a day; it doesn't go on holiday, and everybody can use it. So the verification tool side is going to become more and more important. I think that is a trend, and not just in open source either; there is a lot more interest now in formal verification of software. [Question about BitKeeper] The big problem with BitKeeper is the license: BitKeeper itself is a piece of proprietary software. Now, in conversation, people will give you good, rational reasons why it is proprietary software, why it doesn't work as an open source model, and you can make a good argument about that, and that is fair enough; I use other pieces of proprietary software. BitKeeper, on the other hand, you are allowed to use free of charge in certain cases, as if it were open source software, but only if you are not doing work on version control systems. Now, I occasionally have to do things like patch CVS, and I'm not going to give up my right to do that in order to use a piece of proprietary software. The other part of the case, in my situation, is that BitKeeper is actually not very good at
doing the kind of backporting work I do, because BitKeeper understands everything in terms of changesets. If you say, I want this changeset, what BitKeeper does is say, that means you also need this and this; it works back through the dependencies. When you are backporting, what you actually want to do is say, I want this changeset and I don't want the other changesets. As a version control system, the software is not capable of deciding whether that is a good idea or not; it can't handle it; it takes a human to say, yes, this is a valid thing to do. So in that respect it doesn't help me. There are other tools, like Quilt, which I don't use and which I should learn how to use; I'm just bad at tools sometimes, obviously. [Question: what's wrong with Quilt?] Quilt doesn't currently do everything I want; there is no search functionality in Quilt at the moment, at least as far as I know. What I need is search functionality and mail functionality in the same place, and Evolution just happens to have both. It is not that Evolution is a particularly good search tool, or even the best mail program; Evolution just happens to be the tool which has all the right bits I need. It is as simple as that. [Question: Linux is essentially very stable, but are there subsections of the code where, when you look at them, or when you see a patch, you go, oh my god, this is actually a crawling pile of horrors? It works, but...] Oh yes, there are a few of those, bits which are not very good, which are fragile like the virtual memory system and which personally I think need rewriting at some stage in the future, but which work well enough at the moment: the TTY subsystem, the old IDE layer. In fact the IDE layer is
essentially so bad in terms of maintainability that one of the current plans is to use the new serial ATA layer to drive all of the old-style IDE hardware, pull the drivers across, and delete the old IDE layer. Those are a problem, because with patches to the IDE layer code it is very, very hard to be sure they're safe, because the locking is so confused and complicated. Same with the TTY layer: the TTY layer needs a volunteer to rewrite it. Ted Ts'o wrote it for the 1.2 kernel, and in the days of the 1.2 kernel it was really good: very, very fast, with very clever optimizations which relied on the fact that you had a single-processor computer. It has not moved on a great deal since then, other than emergency fixes to make it safer on multiprocessor machines, and with some of those emergency fixes we found out a few years later that we should have done them a long time ago. So yes, there are some really, really grungy, horrible bits of kernel.

On stability: the more bizarre and peculiar your system is, the less likely it is to work well, because a lot of bugs in drivers are found by people reviewing code, and they review code they care about, or by people using it. So the PC platform is very, very stable: if you buy an absolutely generic, boring PC, it's probably the most stable Linux configuration. Compare that to Linux on an obscure MIPS platform like a DECstation, where there are very few users: there are lots and lots of latent bugs that no one is hitting. There's a sort of lesson there: the more your hardware is like everybody else's, the more testing and visibility it gets. OK, more questions?
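The single-processor assumption just described, where clever optimizations rely on no other context touching shared state mid-update, is the classic lost-update bug. A minimal sketch, with invented names (this is not the actual TTY code), simulating the interleaving that only becomes possible on SMP or with preemption:

```python
# Sketch of the "lost update" bug behind single-processor-only optimizations:
# two contexts each read a shared counter, then each writes back its own
# stale result. Function and variable names are invented for illustration.

def run_interleaved(start):
    """Simulate two contexts that both read the counter before either
    writes: the second write clobbers the first, losing an increment."""
    counter = start
    read_a = counter      # context A reads the counter
    read_b = counter      # context B reads it too; A hasn't written yet
    counter = read_a + 1  # A writes back its increment
    counter = read_b + 1  # B writes back too, wiping out A's update
    return counter

print(run_interleaved(0))  # -> 1, not 2: one increment was lost
```

On a single, non-preemptible processor the two reads can never interleave like this, which is why 1.2-era code got away with it; on SMP the whole read-modify-write sequence needs a lock.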
If you're backporting from 2.6 to Marcelo's 2.4, how hard is that? It's very hard to do for a lot of the code, because so much has changed in the core code. For some parts of the kernel it's very easy, because the code is still common, but a lot of stuff you really can't backport that far. The difference between the 2.4 and 2.6 kernels is, I think, around 40 megabytes; it's huge. We've rewritten more between 2.4 and 2.6 than, I think, the entire size of the 2.0 kernel, so it's hugely different, and that kind of backporting is very hard. The problem comes when you're trying to track security fixes. One of the reasons we have things like the vendor-sec mailing list is that somebody finds a 2.6 bug, and you look at 2.4, and 2.4 has the same bug, but the code around it is different, so you've got to fix it differently. It may also be the case that 2.4 has the same bug in different places. So there are people who do that; obviously the vendors, because they do seven years of support, have to, and it tends to be done on the vendor-sec mailing list or between the various maintainers.

The development structure that you've been explaining here is much less rigid than comparable development structures in the BSDs, like OpenBSD, and ironically it almost seems to be more anarchic at the top than it is further down, because you've described how the vendors apply discipline, and there's quite an amount of pain gone through there. So what do you see as the clear benefits of this fairly anarchic model? The thing which started the 2.6 model, this idea of continuously rolling updates, is that with 2.4 there were a lot of problems that didn't get fixed promptly, because everybody was so scared about the size of the patches. So I actually ended up doing a 2.4-ac tree which, unlike the 2.6 one, was all the patches that weren't getting merged; it was a really big patch, but it made things more stable. Other people had similar experiences, so people started to wonder
whether in fact we were making the kernel less stable by not applying patches; whether we had the model wrong. Now we're trying to get the model right, and we're starting to understand that perhaps there are other bits you have to fill in as well that you hadn't anticipated, like needing to have these point releases. But it's evolving: we try things and see what happens. Once you get to the enterprise vendor level it gets very rigid internally to the company. You have policies like: for a patch you propose, two people have to accept the patch, acknowledge it, sign off that they've reviewed it; all the things you'd expect when you're doing an enterprise product where you can't have regressions.

Has it changed your perspective? In one or two respects, yes, because I have a much better idea of how other parts of the industry handle quality control. Once you understand, for example, how car manufacturers handle quality control, the software industry looks really embarrassing. There is a lot for the software industry to learn there, I think, partly from other businesses. So I guess in that way, yes.

OK, let's go quickly over to the other side. How much time have we got left, or are we overrunning? What was that? Sorry. OK, so a couple of questions on this side. I can't say anything about the SCO trial, I'm sorry. Last question... can someone help me... are we done?
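The two-reviewer sign-off policy described above can be sketched as a tiny gate on a patch queue. This is an invented illustration, not any vendor's real tooling; the patch IDs and reviewer names are made up:

```python
# Invented sketch of an enterprise "two acknowledgements" rule: a patch
# only qualifies for the tree once two distinct reviewers have acked it.

def may_apply(patch):
    """True once at least two different reviewers have signed off."""
    return len(set(patch.get("acks", []))) >= 2

queue = [
    {"id": "fix-via-chipset-hang", "acks": ["reviewer-a", "reviewer-b"]},
    {"id": "experimental-scheduler", "acks": ["reviewer-a", "reviewer-a"]},
]

# Duplicate acks from the same reviewer don't count, so only the
# properly double-reviewed fix gets through.
approved = [p["id"] for p in queue if may_apply(p)]
print(approved)  # -> ['fix-via-chipset-hang']
```

The point of the de-duplication via `set` is exactly the policy's intent: two sign-offs means two independent pairs of eyes, which is what guards an enterprise product against regressions.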