I don't know, let's just get started, why not? OK, so I'm going to talk about the embedded problem that I've been experiencing at Intel for the past few years working on the Intel phones. Intel has multiple groups working on Linux and multiple groups working on Android. I've been working on Android for the Intel phones that are validated, go through regulatory certification, and have actually shipped, not necessarily in large volumes, but they have gone through the process. So this is an approximate outline of what I'm hoping to talk about: introduce myself, then introduce the topic, go through the history and the environment that we live in with respect to this embedded problem, try to bring up the key questions I should be asking, and then drop into some of the techniques that I've been using to address the problem, with various levels of success. And I want to end with a proposal: keep drivers that are not upstreamed, or aren't even ready for consideration for upstreaming, out of the kernel tree altogether, for various reasons. So I'm recommending out-of-tree modules. A little bit about myself: I've been working on Linux for 10, 12-ish, 13 years, something like that, and on Android since 2008. My title is Android Kernel Architect, and that means I have some flexibility in picking and choosing what problems I feel are important, and a little bit of leverage in getting managers who control headcount to prioritize some of these problems; otherwise I wouldn't get anything done. So the thing I'm interested in working on these days is the embedded problem, which I'll define in the next slide, because I really think I have a chance of addressing some of it.
The other things that I'm working on include the scaling problem, which I talked about on Monday at the Android Builders Summit: how do we support many target devices within one build without collapsing our continuous integration process under the gravity of hard drives. I try to get ahead of transitions that we see coming, like GPUs, or moving to ACPI 5 from SFI, or maybe even UEFI, on the phones. I also do code reviews. Briefly, and I left this out of the slides, the way our organization is set up is we have this notion of feature teams, and feature teams are basically responsible for specific use cases. For example, we have a wireless feature team that's responsible for the regulated spectrum, a Wi-Fi feature team for the unregulated spectrum, and audio, media, graphics, video, power management, and thermal feature teams. Each one of these feature teams is responsible for its use cases from top to bottom of the stack. So these guys come in and have the ability to modify the kernel in the continuous integration process without my blessing, and what I do is try to do code reviews on what all these guys are doing. But it's hard to keep up, so I need to do more code reviews. Another thing that I'm involved with is trying to mentor and teach the organization how to use the tools, grease the skids for the next kernel migration, and try to improve things. And another thing I deal with is unplanned surprises, like the CES demo of Bay Trail running Android, which wasn't a planned deliverable for 2013. All of a sudden, now we need to do Android on Bay Trail for 2013. So it's like, ah. So that's basically me. Excuse me. So what is the embedded problem?
Basically, the embedded problem is this: when it takes significantly longer to harden a kernel within a software stack on a device than it takes for the Linux kernel community to move to new kernels, it becomes virtually impossible to upstream, and there gets to be a large investment in, and inertia associated with, staying on the older kernel. So the embedded problem is: how can we migrate to new kernels? And, missing from this slide, when you stay on a kernel for a long time, it's hard to migrate to a new one. It's a lot of work and a lot of fear. OK, so what makes it take so long in the first place? Well, we're talking about new hardware. I'm dealing with phones and tablets in this context. So you've got new silicon, new boards, new drivers, new use cases. The camera, say: somebody needs a new use case for rapid-fire picture taking or something like that. And that results in significant code flux. And everything builds from the bottom up. You've got the hardware, then the firmware has to be good enough that you can run a kernel. The kernel has to be good enough that it can load the base OS and run it, and the HAL can sort of work. The middleware depends on the HAL and the kernel, and then the OS stack. Finally, you can test your use cases. And then you start testing, and then you find bugs. So bugs are found late, and that's just the way it is. I'm not telling anyone here anything new, but sometimes it needs to be spelled out a little bit. And I want to get this out right away: for us, there is fundamentally only one motivation for moving to a new kernel. We have customers that expect modern kernels. They expect the kernel that they go to manufacturing with to be no more than a year and a half old when they start their high-volume manufacturing. This means that the 3.0 kernel is kind of too old today, so we need to be on at least 3.4.
And in the space I work in, the kernels that matter are really the long-term support kernels, so we're basically tracking those. We're looking forward to getting ready for the 3.9 kernel, which right now is the anticipated next long-term support kernel. Because if it's not 3.9, then it's going to push out to somewhere like late July or August, so we kind of expect it to be 3.9. The other thing is that one customer will push for one product, but because our driver and integration teams and our continuous integration process are kind of cumbersome when it comes to branches and multiple kernel versions, the teams can't handle it. So if we have to move for one product or one target, everyone has to move to the new kernel, unless the device is in maintenance mode, in which case we don't move it. So I'm just going to go through a rough history of what I've been involved with. Back in, I don't know what year it was anymore, back in the 2.6.32 days, we started Android on Moorestown. We basically forked off the Moblin kernel at that point, because they were doing Moblin at the time. We took the Moblin kernel and put the AOSP common Android patches on top of that baseline, and then we started hardening that for Moorestown. We did do one production product with Moorestown, the Cisco Cius. And then Medfield showed up. Medfield is the next-generation chip; it's one of the current ones that are available today. At that time, Moblin became MeeGo, and I did the exact same maneuver: I took the MeeGo kernel and cherry-picked the 2.6.35 Android patches on top of it. We stayed with that for Froyo and Gingerbread. Then when ICS came out, we moved to the 3.0 kernel. And there were about 4,000-some patches in each one of these kernels. Over time, it's just astounding how many changes keep getting put into these kernels on hardened branches.
I mean, you would think they would finish and just stop changing things, but the more they test, the more bugs they find. And the longer you stay on a kernel, the more feature creep will happen to that particular kernel. So there tends to be a lot of thrash. Camera is actually still a pretty active area, where new features and use cases are driving significant code flux for us. So anyway, for the 3.0 kernel, we had Medfield, Clover Trail, and an SoC whose name I'm not sure I can say in public, but everybody knows it. And currently we're migrating to 3.4. About this SoC that everyone knows the name of: we put it into the 3.0 kernel when it was still virtual. At Intel, we have this shift-left thing where we try to bring up kernels and drivers on virtual platforms and emulators and whatnot, and so we have a virtual and a virtual-hybrid target associated with SoC 1 in the 3.0 kernel. And this thing has only recently powered on for real, I think last quarter. So I believe we allowed that code to get into the integration kernel too soon, and that's one of the lessons that I didn't list at the end of the talk, but it's something you should keep in mind: you should have entry criteria for allowing stuff into your continuous integration and your hardening branches, and they should include that the device is testable and real, as opposed to virtual. OK, so now we're on to 3.4. And this is good; this is targeted for Jelly Bean and the K dessert, and possibly the L dessert, depending on when it comes out. But we've dropped Moorestown, as you can see. So it's Merrifield, Clover Trail, the surprise Bay Trail, SoC 1, and probably another couple of SoCs will be hosted in this particular kernel before the end of this year, actually. So that's kind of a history of the kernels. Now I'm going to try to explain a little bit about the environment that we're living in with this embedded problem on the SoCs.
So the way the device hardening happens is we have a single branch, we get 1,000 people beating on it, we get a lot of testing on it, and everything is merged and done in a continuous integration fashion. We use Buildbot for our continuous integration, because the team that we acquired from Freescale in Toulouse is our integration team, and there are some Buildbot maintainers there, so they got the pick. So we're using Buildbot, which isn't all that bad. The way it works is an engineer will do their development on their local workstation and do their local testing, and then they'll upload it to Gerrit using the Android Gerrit thing. Buildbot will detect the upload and do a lint process: it runs checkpatch and looks for other things that are fairly easy to check, and it will even do a compile test and an automated smoke test. This is all before the code review is actually done. After the linting of the change, it goes through a code review process, and it needs to get a +2 from the feature team maintainer, which I think is a conflict of interest that I recommend against allowing, and usually from a domain expert of some sort. It used to be that if you made a change to the kernel, that change could not be merged unless I gave it a +2 on the review. But I became the sole bottleneck for 500 people making kernel changes, and it became impractical, so we added more testing. In practice, we see about 50 to 100 changes per week. It's fairly constant, always 50 to 100, and it goes on for about one to two years. We keep beating on it, and during this time you'll find new features sneaking in, like camera features or WiDi. That was a big party, getting that integrated with Android. And also, we have between about 30 and 80 non-upstream drivers associated with each one of these targets.
About half those drivers are third-party reference drivers that are just kind of hacked into place, made to work. We also make some changes to the core kernel, and to code that is already upstream; we'll tweak some upstream drivers. The management really doesn't care about upstream at all. Actually, there is some bias against it, because we had a hard time with the way MeeGo worked out, and the management and program management kind of point to that a little bit and go, look what upstream-first did for you. So I have to deal with that. There's a little prejudice there, but not a lot. But mostly, projects really aren't planned with the software effort for new kernels in mind. When you start a project, you assume a particular kernel, and when you map out the project planning, they tend to not even account for, oh yeah, we're going to need to do a kernel rev somewhere in the middle of this project. It's not on the Gantt charts, a lot of the time. Actually, most of the time. So moving to a new kernel is only done under duress, and we get that from the customer. So, this is more of the same: the attitude toward new kernels. The managers of these feature teams are measured on fixing bugs and adding features; they don't get paid for upstream goodness. In fact, a few feature teams have told me, look, if we want to spend time on any new kernel stuff, we have to do it on our own nickel, on our own time. Which is really too bad. But to be fair, it's somewhat irresponsible to let these feature teams upstream kernel code themselves. They may have time to do the upstream work today, but next week they will not. They'll be working on the next shiny thing, or the next version of whatever feature they're working on, and they won't be able to effectively address interacting with the community.
So that activity needs to fall into the responsibility of a kernel feature team, and we've created a kernel feature team to deal with this. Let's see, this is more of the same; I don't know if we need to say too much about this. The quality side effects of getting your code reviewed by upstream people are not seen as compelling with respect to the time-to-market pressures. And also, the upstream kernel code isn't really getting tested, whereas the old code is getting tested fairly significantly. The motivation for doing upstream work at all is really to reduce the time to market for the kernel transitions. So it becomes a second-order driver, but it's a significant enough driver to give me headcount to support this activity. So it's not like it's blown off and completely ignored, but it's not the feature teams' responsibility; it's not part of their MBOs, their management by objectives. The other problem is that moving to a new kernel takes longer than it should. We have a moldy old kernel with 4,000-ish patches on it. It would be nice to be able to get a new kernel and have the transition done in such a way that we could have all the feature teams using the new kernel within a couple of weeks. And it doesn't work that way; it takes a couple of months, it seems like. Part of the reason it takes so long is that the kernel feature team doing the migration work really wants to clean up some of the sins of the previous kernels. So we spend extra time cleaning up the early board power-on and the board files and the power management implementations so that they're better than they were before. But because we take time to do that, another 1,200 changes sneak into the old kernel version underneath us while we're working, and it becomes pretty hard to account for the bug fixes.
One of the biggest problems the management has with moving to new kernels is this accounting problem, or the auditing problem, as I call it here. We've invested millions of dollars in testing time, actually tens of millions of dollars in testing resources, to test this old kernel, and then we're moving to a new kernel. Is there any way we can quantify how many bugs that were fixed in the old kernel are still fixed in the new kernel? Are all the features there? There really isn't a good way to do that today, and this is one of the problems I'm trying to solve this year. Another point: 3.4 came out late last May, early June time frame, and we had the 3.4 kernel booting on our stack in August. Actually, in late July. In my mind, I was planning that it was a slam dunk and obvious that we were going to move to the 3.4 kernel in September; we would have it done and hardened, all good, by the end of October or November. But the program managers had other projects in flight, and doing that maneuver at that time would have put their alpha milestones in jeopardy. So everything was pushed out, and that has made the 3.4 activity drag on too long and waste resources, in my opinion. We would have been better off waiting to even start the 3.4 work on the kernel feature team until there was a hard agreement on when the organization would migrate. But I made an assumption, and that's what you get when you make assumptions sometimes. So I'll try not to do that next time. Some of the problems with kernel migration that we've had: we have 1,000 patches on the old baseline, we have backports intertwined with those patches, and it's all single-threaded branch development that's pretty much impossible to rebase. It's taking Git and making it CVS, basically.
And that's what you get with these integration trees and continuous integration; it's just a side effect of using continuous integration methodologies, really. You'll have patches that change both new driver code that was added to the kernel and files that already existed in the upstream kernel, and most of those changes have a high risk of merge conflicts when you get updates. Driver teams don't care; we know that. So, as I already said, it takes two months for the kernel feature team to get a kernel ready for the rest of the organization to start working on. And then when the rest of the organization finally jumps on board, it takes them another one to two months to recover back to an alpha or beta quality that is actually lower than what they had before. So it takes about two months to recover from this stuff, and those two months involve the entire organization at that point. It's not just the kernel feature team doing it anymore; now we're talking real money. I've already said these. Yeah, we have the software stack. Did I talk about test blockers yet? Let's see. No, that's later. Another problem, from a program management point of view. Oh yeah, there are test blockers. Another thing that just drives me nuts: you're moving to a new kernel, and it's important to be able to build your OS stack on top of that new kernel. For the 3.0 migration, it used to be that a whole bunch of user-mode components would not compile if I dropped a new kernel into the tree. So I made a big stink about that, and on the 3.4 kernel that's fairly cleaned up. I didn't have any trouble with it on 3.4.
But from 2.6.35 to 3.0, a lot of core subsystems would not even compile when I dropped the 3.0 kernel into the build, because they were assuming locations for header files and reaching into the kernel tree for include files. It was just so bad, and that got cleaned up. But still today, on the 3.4 kernel, certain drivers, usually graphics, will change a header file, and then it'll either break the build or break the runtime, and so you won't be able to test, and we'll end up with test blockers on the new kernel. That makes it even harder to migrate to a new kernel, so we want to try to avoid the test blockers. Let's see. To put it another way, my work environment is: we've got a single-threaded, single-branched tree, and it's not manageable. Our hardware can't boot well enough on the upstream kernel to make upstream kernel contributions. And moving to a new kernel means potentially retesting thousands of Bugzilla reports. Most of the workforce isn't really concerned about this; they're focused on the current product, not the next kernel. So here we are. We have customers saying you've got to come out with a hardened long-term-support kernel once a year. After this experience with our current 3.4, I think it's going to be a planned rollout in Q4, which I think will be great, just to have it planned and stuck to, as opposed to ad hoc, doing it when you're forced to. We have an organization that has difficulty working multiple kernels concurrently. Upstream-first isn't happening, and our ability to upstream is limited. Jeez, I've got 15 minutes, huh? OK. Everyone knows roughly how Linux is developed. There's decentralized development, with many trees. Everyone has their own tracking branch, and when you're ready, you rebase your work onto the current tip, and then you send a pull request.
Then the subsystem maintainer you sent the pull request to pulls it, does some code review, maybe a quick review, and then he sends a pull request to Linus somewhere, sometime in the future. The cycle time for this happens on about a one-week schedule; new RCs come out about once a week. So it's not really amenable. This sort of process, with a single kernel maintainer, kind of conflicts a little bit with continuous integration, because once a week is too slow. Also, the Linux kernel has volunteer-based testing on the master branch, and limited serious organized testing actually happens; the hardening really gets done on the stable branch. So for me, the key questions going through my mind are: are we just looking at this wrong? How can we use kernel versioning more effectively with respect to new features? For example, rather than backporting a big chunk of Video4Linux or some other kernel feature, maybe we should just wait and intercept that new feature with a new kernel. We should be thinking a little bit more about backports: how should we do the backport, or should we just go to a new kernel? And how can we better integrate multiple kernels into our continuous integration process? That's one of the things I'm working on now, because if I can have the next kernel integrated with the continuous integration process and getting tested as we go, then we can get a new kernel brought up and develop confidence in it months before we do the switchover. The other question is: how can I isolate concurrent kernel development activity from the driver integration teams who can't deal with it? Is there a way I can emulate tracking branches such that the feature teams don't have to deal with them, and only the kernel people who know how to deal with tracking branches do?
How can I isolate the core? Can we isolate changes made to the core kernel from new kernel additions? Can we isolate backports? And can we just reduce the time it takes to move to new kernels? So now I'm going to talk about some of the techniques. Also, you want to prioritize what you want upstream and what upstream work you do. I want to disallow changes where one commit touches both upstream code and non-upstream code, because those changes are hard to deal with with respect to merge conflicts. You want to use merge commits for backports: whenever you're doing a backport, you don't cherry-pick the backport into the main tree. You cherry-pick it into a branch, and then you do a merge commit of the backport from the backport branch. That way, when you move to a new kernel version, you just drop the backport branch, and you don't have to deal with it. You should probably consider allowing only the kernel feature team to touch non-driver code, the code that already exists in the kernel; letting the feature teams and the driver teams and the integration team mess with that code leads to undocumented changes that are hard to migrate, and that are sometimes done in a questionable manner. You want to make changes to any header files painful. From my perspective, when someone changes a header file, they're changing an interface specification, and that is the root cause of most of the test blockers when you're moving to a new kernel. So if you're in the middle of bringing up a new kernel and someone changes a header file on the older kernel for some reason, so that they can add an ioctl or something, that has a high probability of becoming a test blocker for the new kernel. And you don't want test blockers on the new kernel that's under development.
You want to demand that user mode, the whole stack, builds independently of the kernel, so you can drop in a new kernel and not have build-time troubles. As best you can, you want to isolate sources of code flux from the kernel tree. That means moving out drivers that aren't ready for upstream, or non-upstream drivers that the different feature teams are just hacking away at: make them hack away at their driver outside of your kernel tree as much as you can. I know a lot of people don't like external driver modules, because it's more files to deal with, and it's nicer to have a single monolithic kernel with everything built into it. But it's just such a problem having thousands of changes to your ISP driver or something, trying to migrate that to a new kernel and being able to say, yes, I have all the bug fixes and all the features from the old one. It becomes hard if it's all in the same tree; if you put it in a separate tree, you can do it more reasonably. OK, a little more detail on those things. From my perspective, my upstream goals are: I want the upstream kernel to be able to boot my platforms to at least a working RAM disk with a serial console. I want working P-states and C-states, working DVFS and low-power idle states, and working suspend-to-RAM. I also want persistent storage working, so that I can actually put an OS image on the eMMC and do the next step. And then I also want a working USB gadget, because I'm dealing with an Android stack here, and you can't get out of the blocks with the feature teams unless they can use ADB. So that is what I want upstream at this point. I don't care about little 200-line drivers; those don't help me. This would help me. So this is the priority right now for what we want upstreamed, and that's where we're going to put effort.
Like I said earlier, you want to prevent people from touching both upstream and non-upstream code in the same commit, because that results in merge conflicts. I didn't mention it earlier; I'll mention it later in one of the slides, there's kind of a graph. But in our process, we periodically merge in from Linux stable. And if your driver change has to touch an upstream file and a non-upstream file in the same commit, and there's a merge conflict, it makes it hard for me to revert that change so I can get my merge done. What I normally like to do is revert the conflicting change and then fix it up afterward. And touching code that doesn't belong to you is not terribly cool unless you upstream it and you know what you're doing. If you have to touch it, that's another maintenance point that you need to account for, and if you do it in an ad hoc manner, there's a high probability of a surprise when you go to a new kernel. Somebody will say, hey, my ioctl doesn't work, why didn't you port my ioctl over? Well, it's because I didn't know you had it there, because you hid it in this haystack. It turns out it's not as easy as I hoped to prevent this. We made a modification to checkpatch to check whether the patch touched a file that was part of the original baseline that we started with; if it touched one of those files and a file that wasn't in that set, it would throw an error. That caused problems with some energy management drivers that needed to modify header files that were common. So we really need to change the integration process so I can at least have a warning, and have a process where violating this policy can be approved conditionally. But like I said, we're using Buildbot for our merging, and it's actually controlled by Buildbot.
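That checkpatch addition boils down to a simple set check. Here is a rough stand-in for it, a sketch only: the function name, file names, and baseline set are all made up for illustration, not our actual tooling, and a real implementation would parse the patch to get the changed-file list.

```python
# Rough stand-in for the checkpatch addition described above: flag a
# patch that touches both files from the original upstream baseline and
# files added since the fork. All names here are illustrative.
def mixed_baseline_patch(changed_files, baseline_files):
    """True if a patch touches baseline and non-baseline files at once."""
    touches_baseline = any(f in baseline_files for f in changed_files)
    touches_new = any(f not in baseline_files for f in changed_files)
    return touches_baseline and touches_new

# Hypothetical snapshot of the file list taken when the tree was forked.
baseline = {
    "drivers/mmc/host/sdhci.c",
    "include/linux/mmc/host.h",
}

# A driver-only patch passes; one that also edits an upstream header
# trips the check (the common-header case that caused trouble for us).
ok = mixed_baseline_patch(["drivers/misc/vendor_isp.c"], baseline)
bad = mixed_baseline_patch(
    ["drivers/misc/vendor_isp.c", "include/linux/mmc/host.h"], baseline)
print(ok, bad)  # False True
```

The missing piece, as described above, is the conditional-approval path: this rule wants to be a warning that a human can acknowledge, not a hard error.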
So I need to work with the Buildbot guys to add the flexibility to treat checkpatch warnings in a manner that allows us, on a case-by-case basis, to acknowledge the warning and ignore it. Backports: this is something that worked well for me this year. In the 3.0 kernel and the 2.6.35 kernel, we had so many terrible backports, it was bad. So for the 3.4 kernel, the way we're doing it now is: if you have to do a backport, you ask the kernel feature team to merge that backport into a special backport branch that has a common baseline with the main branch. From there, outside of the Buildbot continuous integration process, it gets merged into the mainline tree. This way, all the backports come in on merge commits, as opposed to being inlined with the rest of the stuff, and it really cleans up the history a significant amount. If you've got a lot of backports, I encourage this methodology. Here's the branch diagram. This is what we tried to implement for 3.0, and everything here is actually correct, except for this upstream branch, the second one from the top. I'll go through these real quick. The main branch at the top is the main integration branch; that's the one that gets all the testing attention and all the developer attention. The next one was supposed to be for changes to upstream files, segregated from the rest of the main branch, so that any changes on this origin-3.0-upstream branch were candidates for submission upstream to the Linux kernel. And then, of course, the backports; I just told you about the backports. But in practice, this upstream one didn't work as well as I hoped, and I'm going to come back this year and try it again. But the backport branch worked great. I recommend that.
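The backport-branch mechanics just described can be sketched with a throwaway repository. This is only an illustration, not our integration scripts: branch names, file names, and commit messages are all made up, and the backport here is a plain commit standing in for a cherry-pick.

```python
# Sketch of the merge-commit backport flow: backports live on their own
# branch, which shares a baseline with main, and reach main only through
# an explicit merge commit, so a future kernel migration can drop the
# whole branch instead of untangling inlined cherry-picks.
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command and return its stdout."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", cwd=repo)
git("symbolic-ref", "HEAD", "refs/heads/main", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)
git("config", "user.name", "Dev", cwd=repo)

# Common baseline shared by main and the backports branch.
with open(os.path.join(repo, "Makefile"), "w") as f:
    f.write("VERSION = 3.4\n")
git("add", "Makefile", cwd=repo)
git("commit", "-m", "baseline: v3.4", cwd=repo)

# A backport is cherry-picked (here, just committed) onto its own branch.
git("checkout", "-b", "backports", cwd=repo)
with open(os.path.join(repo, "v4l-backport.c"), "w") as f:
    f.write("/* illustrative backported code */\n")
git("add", "v4l-backport.c", cwd=repo)
git("commit", "-m", "backport: V4L bits from a newer kernel", cwd=repo)

# Main only ever sees the backport through a merge commit.
git("checkout", "main", cwd=repo)
git("merge", "--no-ff", "-m", "merge backports branch", "backports",
    cwd=repo)

# A merge commit has two parents; the backport history stays separable.
parents = git("log", "-1", "--format=%P", cwd=repo).split()
print(len(parents))
```

The `--no-ff` is the important part: it forces a real merge commit even when a fast-forward would be possible, so the backports remain identifiable as a group in the history.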
So the way it works is: when we started the 3.0 tree, we started with the Linux stable tree and the Android 3.0 branch, merged those together, and that created our baseline. So everything has a common ancestor in the baseline. Then we created our backports branch, and that's where we do our backports. For example, I backported Dave Howells' signed driver modules work last month, and I did it in the backports branch. We have some Video4Linux stuff backported in there. And recently, I did the ACPI backport for the 3.4 backport branch, not 3.0; I didn't go that far. So you do your changes: if you're changing your own driver files, do it in the main branch, but if you're doing a backport, you have to do it specially and go through a different process. And this year I really want to do the same thing and enforce it for changes to the upstream branch. Hopefully it'll do better. Yeah, so you want to make it hard for feature teams to change upstream code. Everything's a nail to these guys: if they're kernel people doing the work, they want to change the kernel; if they're user-mode people, they want to change user mode. They'll make changes where they're more comfortable, rather than knowing a more reasonable, more correct way of doing the change. There's no one right way to do it, so I don't want to say the right way, but there are better ways, and people are biased by their skills and their aptitudes and their experience. So if a feature team needs to add something, sometimes they'll try to hack something crazy into the kernel because they don't want to have to do it in user mode, or vice versa. I've seen terrible hacks get pushed into the kernel that should have been done in user mode, and I've seen the exact same thing done in user mode where I said, jeez, why didn't you do that in the kernel? It could have been done differently.
And so anyway, to try to address half of that problem, you want to make it harder for feature teams to touch upstream code. Let them go nuts on their own stuff, but don't let them screw around with upstream code unless it can't be avoided. You want to make header changes expensive, because a change to a header is a change to an interface specification, and every time I have a test-blocking issue moving to a new kernel, it's because some joker changed a header file. So you want to make header files cost more. You also want to make sure that your user-mode build doesn't give you any test blockers; make sure your user mode can build against whatever kernel you feel like. Basically, it's a pain. Let's see. I'm running out of time, so I'm sort of rushing, but I really wanted to make a pitch for this out-of-tree driver thing. The auditing problem, accounting for all the bugzillas that were fixed or addressed in the old drivers and making sure those are still addressed in the new driver, is partially addressed if we can have the drivers developed in their own out-of-kernel Git tree. Inside that Git tree, the kernel feature team can maintain tracking branches of whatever fix-ups need to happen so the driver works on the new kernel. With that sort of model, at any given time, the driver for the new kernel has all the changes associated with the old kernel in there by definition, and all you have to do is put the fix-up code on top of it to make it work with the new kernel. I really think that has a chance of addressing a significant part of the auditing problem. It also gets a lot of the code flux out of my kernel Git project, which is a twofer, in my opinion. And it also spares driver teams that really can't deal with multiple branches and Git rebases from actually having to know about them and deal with them. So I don't have to ask them to port their driver.
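The tracking-branch model for an out-of-tree driver might look like the following toy sketch (driver and branch names are invented): the driver team only ever commits to main in the driver's own repo, while the kernel feature team keeps a per-kernel branch of glue fix-ups and rebases just that glue whenever the driver moves.

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main .
git config user.email demo@example.com && git config user.name demo

# the driver team's code lives on main in the driver's own repo
echo 'driver v1' > foo_drv.c && git add foo_drv.c && git commit -qm 'driver: v1'

# the kernel feature team keeps a tracking branch of glue per kernel
git checkout -qb for-3.4
echo 'shim for 3.4 APIs' > glue-3.4.c
git add glue-3.4.c && git commit -qm 'glue: 3.4 fix-ups'

# the driver team keeps working on main, never touching the glue...
git checkout -q main
echo 'driver v2' >> foo_drv.c && git commit -qam 'driver: v2'

# ...and the kernel team rebases only the glue onto the updated driver,
# so for-3.4 always carries all of the driver's history by construction
git checkout -q for-3.4
git rebase -q main
```

The point of the sketch is the direction of the rebase: the glue moves to follow the driver, so the driver team never needs to know the tracking branches exist.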
I just have to fix my tracking branch for each one and go, and they don't have to know about it, which I think would be good. So in practice, having separate Git projects is functionally equivalent to having tracking branches and doing the rebases; I'm just turning it around. Rather than rebasing the drivers on top of the new kernel, I'm rebasing the new-kernel glue logic to make the old driver work on the new kernel. So I'm trying to turn the problem around a little bit. And by doing that, I believe I can avoid requiring the feature teams to be competent enough to deal with Git rebases and test multiple kernels; we can isolate that work to just the kernel feature team. And I mentioned the auditing problem. So in theory, if all these things work, I can reduce my time to market. So in summary, since we're pretty close to out of time: upstream-first doesn't really work for hardened devices. You need to prioritize your upstreaming efforts based on time to market and reducing the cost of kernel migration. You need to avoid changes that touch both upstream and non-upstream code concurrently, at least in the same patch. Try to use merge commits on back ports, and try to use merge commits for changes to core kernel code or upstream code, so that you can have more natural auditing of what needs to be pulled forward. Make sure that user mode always builds independently of the kernel version, because you don't want test blockers based on compiling, and you don't want test blockers based on header files either; so make header file changes painful or expensive. And try to isolate non-upstream drivers from the main kernel tree into separate Git projects. This is actually kind of an experiment; I really think it's going to work, but we'll see next year how it shapes up. OK, sorry, I was talking fast; there was a lot I wanted to say. What's that? "I like it when you talk." Oh, do you? Yeah. "That's why you're here." Oh, yeah. So we got how much?
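The auditing payoff mentioned in the summary can be made concrete in a toy repo (branch, tag, and file names invented for illustration): if back ports and upstream-file changes only ever land as merge commits, then a first-parent log between the old baseline and main enumerates exactly the batches that need to be pulled forward at migration time.

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main .
git config user.email demo@example.com && git config user.name demo

echo 'base' > base.c && git add base.c && git commit -qm 'old baseline'
git tag old-baseline
git branch backports && git branch upstream-changes

# a batch of back ports, landed as one merge commit
git checkout -q backports
echo 'bp' > bp.c && git add bp.c && git commit -qm 'camera back ports'
git checkout -q main
git merge -q --no-ff -m 'merge: camera back ports' backports

# a batch of upstream-file changes, also landed as one merge commit
git checkout -q upstream-changes
echo 'core tweak' >> base.c && git commit -qam 'core change (upstream candidate)'
git checkout -q main
git merge -q --no-ff -m 'merge: upstream-file changes' upstream-changes

# at migration time, the pull-forward audit list is just the merges:
git log --first-parent --merges --oneline old-baseline..main
```

The `--first-parent` restriction keeps the log on main's own line of development, so each merge shows up once as a batch, rather than as the individual commits inside it.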
We've got probably maybe five minutes for questions; time for maybe one or two, and then we've got to let the next guy in. Well, then it moves in. Actually, it'll move in as a back port. Let's say a camera driver: they upstream their camera driver to 2.6.38 or something. Then what I'll do is back port that driver from 2.6.38 into a back-port branch in my 3.0, I mean 3.4. And then you add features, and then it gets... I know, yeah. I got the stop sign. You're right. Don't mind me, just trying to beat the rush. Yeah, yeah. OK. Sorry. That is a problem, but fortunately or unfortunately, feature teams don't get paid for upstreaming, so I'm not sure that problem's going to become terribly important. Yeah? The feature teams are... yeah, each feature team has responsibility for doing their own. I don't actually get to levy requirements on the feature teams; the feature teams have a requirement to do their own test automation and test cases. No, I really can't run their tests. I can run the integration tests, the automated tests set up by the main integration team, but the individual feature teams have their own test suites and their own test harnesses and environments. Some of them, a lot of them, require specialized equipment to run their tests anyway. I can take any additional questions out in the hall. OK, OK. Got it.