Hi, I'm Andrew. This is Dylan. We're both software engineers at Google, and we work on the production kernel team. I've been at Google for about a year. And Dylan, do you want to say anything about yourself? I've been at Google for about two years now, just barely.

Yeah, so we're going to talk about a new model for maintaining kernels at Google that we hope will eventually be realized as people at Google contributing more to upstream instead of just focusing on internal efforts. Briefly, what we're going to do is talk about prodkernel, which is Google's production kernel, give a bit of an overview of it, and talk about how it currently detracts from our ability to participate upstream. After that, Dylan will introduce Icebreaker, a new model for maintaining kernels at Google. We'll talk about some of the risks, and then we'll talk about its current status and how we hope it'll turn out in the future.

Prodkernel is our name for Google's production kernel. It runs on most of the machines that we run our workloads on in our data centers, and it's a fork of the Linux kernel that Google deploys on its production machines. These are machines that serve Google's search requests and other jobs, both externally facing and internally facing. It consists of about 9,000 patches on top of upstream Linux. And this isn't upstream in the sense of, say, 5.14 today; it's actually a little bit older than that. So it's a fork that isn't continually kept up to date with upstream. It's comprised of some internal APIs (we have a multi-threading library called Google Fibers, and it requires some kernel-side modifications to implement it), hardware support for various things, performance optimizations, and then other sorts of tweaks that are needed to run binaries that we use at Google. Every two years, we rebase all these patches over a two-year codebase delta. As you can imagine, that presents some challenges, one of them being that a lot of stuff can change over two years. And then even if you do think you successfully rebased your patch, if you later find a bug in it, that's a very large search space to narrow down to figure out what's going on and how you can fix it.

So you might think: why do we have prodkernel? Why don't we just use the upstream kernel? It's supposed to be applicable to many different use cases, and if we have a new use case, we should just be able to upstream all of it and go from there. But we had some internal needs and timelines that necessitated having our own fork of Linux. This list isn't really meant to be exhaustive; it's just illustrating some examples. Internally, there are different classes of network traffic within our data centers, and there are some specific things we need to do to set quality of service for different packets from user space. We have specific rules for doing OOM kills for jobs running in our data centers. We added a new API to enable cooperative scheduling in user space, which I don't think is something that exists in upstream Linux today. And there are also security concerns that we have, since we handle user data. One such example is that we disable sampling of the user stack in the perf tool, because if somehow we were able to capture a little bit of user data, that wouldn't be the greatest thing in the world, so that resolves a privacy concern.
So for prodkernel, the general idea we want to talk about is how it detracts from upstream participation, and then how we want to fix these things so it's easier for our engineers to participate in upstream. Google-made features are developed and tested in prodkernel, which at its max can be roughly two years behind upstream. This produces two major hurdles. If you're thinking about the workflow of an engineer, they get a request to add a feature to prodkernel. They can do many things, but generally what'll happen is they'll develop their feature on prodkernel, which is two years or so behind upstream. Then it's a choice, right? You can decide to put in the effort to rebase it forward across many years and send it to upstream. But this presents an interesting challenge: even if it was tested on prodkernel against lots of Google traffic and you're confident that it works in prodkernel, once you rebase it, it doesn't have all that testing and qualification backing it up, because it's now combined with a totally new set of source code. So while the feature might have been validated against Google production workloads on the old upstream base, that raises the question: how do we replicate that on a new upstream base without the rest of prodkernel? You might say, oh, just throw that kernel in production with that one specific feature, but it's missing all the other specific features we need to run our workloads.

This can be generalized to bug discovery. We operate complex workloads at a very large scale, which sets us up to discover lots of system bottlenecks, bugs, and deadlocks. The nature of our rebase means there's a really large delay in discovery. So we're a few years behind, we discover the bug, and you might say, oh, we can report it upstream, and that does work, I guess: if we're on an LTS kernel, we can report the bug and maybe there's a fix someone contributed that got backported. But in general, we're probably on our own if we find this bug. And then it's not really useful even if we find a way to fix the bug, right? We fix the bug on a years-old kernel, and it's debatable whether anyone actually cares about that fix anymore. The ideal situation would have been that we found the bug on an upstream kernel and were able to contribute a fix for it. But it doesn't work out that way by nature of how we actually do development currently. We're n majors behind upstream, which means that the bug we find is also x years old, and we're x years late in presenting the issue to upstream. And by staying on a certain major for a long period of time, a bug could be fixed upstream, but we won't benefit until we rebase again. I can't think of any examples off the top of my head, but a bug could have been fixed in the most recent release, and it could be very applicable to us, but we won't get that fix for free until we either rebase again or backport it, which is basically the same issue as rebasing our features across the same sort of delta to upstream them.

Another issue we see is platform support and backporting. As upstream adds new support for platforms, that's obviously something we might be interested in, right? So if there's new platform support in, say, 5.15, we need to backport all the patches to our prodkernel in order to actually get that platform support.
And it runs into the same issue, except in reverse, right? We know these patches are tested on, say, 5.14, and validated by the upstream community. We backport them, and who's to say the patches still work? They probably apply cleanly, and we can probably do some tests, but it's not really the same guarantee, since you've moved the base out from under them. Yeah, and then we need to backport over a large delta, and even then it doesn't end up tested against the same version of the kernel. So the bugs we encounter might not even be applicable to upstream.

This brings us to the prodkernel rebase. Rebasing is extremely costly. It detracts from work engineers could be doing otherwise, because they have to spend a pretty big amount of their time rebasing over such a large delta. Individual patches have to have their conflicts resolved against the new upstream base. So if a new API is introduced, or a function signature changes, the engineer has to go hunt that down and figure out how to integrate it into their existing change. And this is magnified because you might be assigned to rebase a feature that you wrote a year ago, and it's been working fine for a year, and all of a sudden you're thrown back into this code base that you probably only have a vague memory of, and suddenly you need to resolve conflicts and bugs against it. And this isn't a one-time issue, right? It recurs over and over again.

The entire kernel must be requalified against Google's workloads on the new base, and you can imagine that any bug you do find is probably relatively unexpected, right? We all think we write perfect code the first time around. So when you do find a bug, you wonder: okay, where does it come from? This leads into two avenues: it can either be in the rebase itself, or it could be some potentially bad interaction with the new base. Either way, the search space is very, very large unless you have a very good reproducer. It's also a bit hard to bisect. You can imagine that if you had a patch series that conflicted on top of the new base, for every single bisection step you'd still need to resolve those same conflicts; maybe with some clever scripting you could get around that (a rough sketch of the idea follows below), but it's still a difficult workflow.

Dependencies between patches are inconsistently documented. It may very well be the case that we have many efforts at Google that depend on each other. We could have written a library that all these other patch series depend on, but it's not clearly documented what depends on what. Usually the only way someone finds a dependency is that they try to rebase a patch series and notice, oh, I'm missing this certain function, or it doesn't compile, and then they have to hunt down what other patches their patch series depends on. And in addition to the hunting, the lack of documentation makes it hard to parallelize the effort. Even if you do find all your dependencies, it's unclear what those patches in turn depend on, and it becomes a difficult task to streamline. And then the delay associated with rebases worsens all these issues, right? Every single time, it piles on more and more. It's the idea of technical debt: if you don't take care of it, it just keeps growing.
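For what that clever scripting might look like, here's a minimal sketch of a `git bisect run` helper that reapplies a hypothetical out-of-tree patch series at each bisection step, skipping commits where the series no longer applies. The series range and the reproducer script are assumptions for illustration, not tooling we actually have:

```sh
#!/bin/sh
# bisect-step.sh: run by `git bisect run ./bisect-step.sh`.
# SERIES_BASE..SERIES_TIP is a hypothetical out-of-tree patch series
# that has to be applied before the bug can be reproduced.
git cherry-pick --no-commit "$SERIES_BASE..$SERIES_TIP" || {
    git cherry-pick --abort || git reset --hard   # series conflicts here:
    exit 125                                      # skip, don't mark bad
}
make -j"$(nproc)" || { git reset --hard; exit 125; }  # unbuildable: skip
./run_repro.sh; status=$?      # hypothetical reproducer: 0 = good
git reset --hard               # drop the temporary series for the next step
exit "$status"
```

You'd drive it with something like `git bisect start <bad> <good>` followed by `git bisect run ./bisect-step.sh`; exit code 125 is git's convention for "skip this commit".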
So all these things magnify and increase the cost of the next rebase. We've seen that each subsequent prodkernel rebase increases in both the number of patches we rebase and the time to complete the rebase. What this means is that every time we rebase prodkernel, we have more patches to rebase, and naturally it takes more time. If we extrapolate into the future, we don't see a clear trend line or bound on how long this process will take. Rather than wait to find out when this trend line becomes unsustainable, we want to proactively reduce our technical debt. So we need a new model of developing and maintaining our fork of Linux internally that reduces that debt. And effectively, our hunch is that by reducing our technical debt, we'll free up more time for engineers to spend participating upstream. So not only is the upward trend in patches needing to be rebased a problem in itself, it also has a hidden cost: every hour an engineer spends rebasing a patch is an hour they didn't spend doing something upstream.

So it presents a few hurdles for engineers working upstream. Structurally, they're working on a years-old kernel, and like we talked about before, this presents the hurdle of testing on a new upstream base. You have your feature that's tested and qualified against Google workloads on a different kernel base; you rebase the feature and send it upstream, and it would be a lie to say, yes, it's been tested against Google production traffic, because it really hasn't: you rebased the patch series and it's on a new base. And then, more practically, time is taken from an engineer's finite resources to participate in rebasing instead of upstreaming. We need the rebase step for prodkernel, and that effort could very well have been spent rebasing your patch series to send upstream. And I'll turn it over to Dylan to talk about our new model for maintaining a more upstream-friendly kernel.

Thank you. So what is our plan for paying down this technical debt? Icebreaker is a new kernel project at Google, and it has two main goals. The first is that we want to stay close to upstream: we want to release an Icebreaker kernel on every upstream major release of Linux, and we want to do that on time, right? We want to stay caught up with upstream Linux. And by doing that, we want to help Googlers participate in upstream much more easily; they now have a platform for upstream-first development. But in addition to that first requirement, we want to be able to run arbitrary Google binaries in production. We want this kernel to be a real production kernel. And the idea is that we can take the qualification of upstream patches out of the critical path of a prodkernel rebase. Before, we had to do a prodkernel rebase to even start testing those patches in production, so we were piling everything into the tail end of this two-year period. Whereas with Icebreaker, we could be doing it much sooner, almost as soon as the kernel gets released. These goals are important because it really boils down to this: we need better participation upstream. As an engineer working on a fork that's super far from upstream, it's really difficult to see the benefit of sending things upstream, right?
Your things go into production on a really old prodkernel, and there's a whole extra set of things you need to do to get them to go upstream. You need to first rebase your feature and untangle it from prodkernel. You need to test it and make sure it works upstream. And then you need to propose it upstream, and you're not even sure if it's going to get accepted, right? Versus, you could just wait for the rebase to happen. Then you have time budgeted for you by your team to get the rebase done, and the actual upstreaming of the patches never happens, because by the time that rebased prodkernel is qualified, it's already too late and already too old. The other side of this is that we want the opportunity to qualify upstream patches in production. Because loading hundreds and hundreds of upstream patches into prodkernel all at once is really what we're doing today: we take our patches and put them somewhere else, and we don't really know what's happening. There's no opportunity for us to amortize that load of new patches coming in from upstream.

So what does Icebreaker actually look like? There are two sides to this. There's the development side: how do I get my patches into Icebreaker and deployed in production on an Icebreaker kernel? And there's the upgrade side: how do I get my patches onto the new upstream Linux base? It makes sense to start by saying, okay, we're going to fork off our upstream base. Let's say the latest is 5.10, and from that we fork off feature branches. These feature branches are really just a collection of feature patches that make sense in the context of each other and implement one piece of functionality in the kernel. So you really just have upstream Linux, the feature patches, and nothing else. And this is useful because, right there, that's a patch series you could propose upstream. You're starting with something that's upstreamable, rather than going the other way around. And then you can develop, you can add bug fixes, and eventually things get merged into subsystem-specific staging branches and get tested. We finally merge into a next branch, where things get released. The next branch is really the Icebreaker kernel that's fully composed and ready to go, but it still has its roots in those feature branches. And so the final step after we release is a fan-out merge into the staging branches, to reset their heads back to what the released Icebreaker was. But we don't fan out to the feature branches. The feature branches stay pure, so to speak; they just have the feature patches, because that's what's upstreamable.

And if we look at one feature branch, we can see how the upgrade happens. Normally a prodkernel rebase would be: take all the patches and reapply them on the new base. We could do that for individual feature branches, but it's a lot more sustainable to say, okay, we can just do a merge: we create our new 5.11-based feature branch and merge the 5.10-based branch into it to bring in the changes from the old base.
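Pulling the branch mechanics together, here's a rough git-level sketch of the model. The branch names (feature/foo, staging/sched) are hypothetical; this illustrates the structure described in the talk, not Google's actual naming or tooling:

```sh
# Feature branches fork straight off an upstream tag and carry only the
# feature's patches; staging and next are composition branches.
git checkout -b feature/foo-5.10 v5.10                     # develop here
git checkout staging/sched && git merge feature/foo-5.10   # stage and test
git checkout next && git merge staging/sched               # compose the release

# After a release: fan out from next back to the staging branches, but
# never to the feature branches, which stay pure.
git checkout staging/sched && git merge next

# Upgrading to a new base: rather than rebasing every patch, fork a new
# feature branch off the new tag and merge the old one into it.
git checkout -b feature/foo-5.11 v5.11
git merge feature/foo-5.10   # conflicts get resolved once, in this merge commit
```

Only that merge commit is new; everything already validated on the 5.10 branch keeps its SHA-1.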
And so we say, okay, that means we get to keep our known-good SHA-1s from the previous version of the kernel, and we just have to validate this one new merge; any conflicts are resolved in that merge commit. And if you think about it, you wouldn't really be able to do this for prodkernel at all, right? Because one massive merge commit is going to have every conflict. But you can do it for each individual feature, because then it's something that can be done in bits and pieces.

And bug fixes work really well in this kind of workflow, because if there's a bug in my feature, I can go to the oldest supported version of that feature and fix the bug there. So I have a buggy SHA-1 introducing the bug and a fix SHA-1, and those two SHA-1s aren't going to change; I can merge the fix forward because the histories are continuous. And in really any branch in Icebreaker, those SHA-1s are going to stay the same, because the history is stable. So it's really powerful to say "this commit fixes this SHA-1", because that SHA-1 is always going to be the commit that introduced the bug.

So how does this affect upstream? Well, if you remember prodkernel, we had this model where we were committing our feature commits into basically the staging branches, just doing fan-in, fan-out. So the thing we were developing on top of was all of prodkernel, and that prodkernel was on top of a really old upstream base. It was really difficult to untangle all of that into an isolated feature branch, which is what we need in order to propose it upstream, and to get that onto the most recent upstream release. And even if you do manage to pull the patches apart, you have to revalidate everything. But with an Icebreaker feature branch, you started out with a feature branch that you validated and then merged into Icebreaker, and you can merge the other way too. You can say: okay, I have this feature branch that's been validated; I can propose it upstream. And actually proposing a feature branch upstream should be as simple as just a git rebase. If you have git rerere enabled, it really is straightforward: you run the git rebase. And if you don't have it enabled, it should still be: okay, I know I resolved these merge conflicts already, they're in these merge commits; I can just look up the merge commits and get that information. There should really not be a blocker to getting things upstream.
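A hedged sketch of those two flows, the fix-forward and the rerere-assisted upstream rebase, again with hypothetical branch names and a placeholder SHA-1:

```sh
# Fix a bug at the oldest supported version of the feature; the buggy
# commit's SHA-1 is stable across branches, so a Fixes: tag stays meaningful.
git checkout feature/foo-5.10
git commit -as -m "foo: fix widget accounting" \
              -m "Fixes: <sha1 of the buggy commit>"
git checkout feature/foo-5.11
git merge feature/foo-5.10       # histories are continuous, so this is cheap

# Proposing the feature upstream: flatten the branch back into a plain
# patch series on the current upstream tag.
git config rerere.enabled true   # reuse previously recorded conflict resolutions
git checkout feature/foo-5.11
git rebase v5.11                 # drops the merges, replays only feature patches
git format-patch v5.11 -o foo-series/   # the series to send upstream
```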
So there are some risks with Icebreaker, unfortunately. A big risk is that we need strong feature branch testing. There's this risk that feature branches turn into filing cabinets: okay, I have these patches that are all kind of related to each other, so I'm going to put them together in a branch, forget about them, and merge them where I want them to be. But that isn't very useful, because we don't know if the branch builds or boots or passes any tests, because we haven't tested it. If we can take a feature branch and, with just the feature branch, build it, boot it, and validate it, then when we merge it somewhere else we can say: okay, we started from a known-good feature, we're going to merge it now and make sure it works again. Versus if we merge it without knowing whether it was good to start with, there's not much utility in that, and we're just complicating the process of getting these patches into a kernel. And for upgrading, the same point applies.

And then dependencies between features can cause some problems. The model of Icebreaker is based on this idea that Linux is a big ocean of patches and our contributions are drops in that ocean, so feature branches are going to be far apart from each other in what they affect and won't depend on each other that much. And that does hold true for the most part, except we do have some dependencies between features. Those can take the form of a non-trivial merge conflict; they can take the form of one feature using an API provided by another feature; they can even take the form of some difficult-to-understand performance optimization that makes the other feature work only in that situation, something like that. To resolve that, we could do it on the staging branch, right? We merge one into the staging branch, and then the second one, the one that needs the dependency, can resolve everything there; we figure out the conflicts right on the staging branch. But the problem is that staging branches aren't carried forward when we upgrade to the next upstream base, so we'd be losing that information, because only the feature branches are carried forward. So what we need to do is do a merge on the feature branches to resolve the dependency, as sketched below. That does work, but it introduces this little problem: what's the rule for what can merge into what? How crazy can we let these merges become? What are the rules for saying these two things should just be on their own shared feature branch versus these two things should merge, and there's utility in that? It's really something we have to solve on a case-by-case basis.
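A minimal sketch of resolving a cross-feature dependency on the feature branches themselves rather than in staging (branch names hypothetical):

```sh
# feature/bar uses an API introduced by feature/foo: record the dependency
# on the feature branches themselves, where it survives the next upgrade
# (staging branches are thrown away each cycle).
git checkout feature/bar-5.11
git merge feature/foo-5.11   # resolve the cross-feature conflict here
```

The open question from the talk remains: deciding which features should merge like this, versus simply living together on one feature branch, is handled case by case.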
Icebreaker is a lot more decentralized than prodkernel. We have to put our trust in feature maintainers to actually create their feature branches correctly, write good tests, and all that. We have to put our trust in staging branch maintainers, who have to look at these merges as they come into Icebreaker and ask: are these merge conflicts non-trivial? Are these dependencies correctly resolved on the feature branches? And finally, we need buy-in from other teams. They need to trust that this new, confusing workflow is actually going to pay off, and that we're actually going to reduce our technical debt and improve things.

And the final risk is that we need to get things upstream in Icebreaker. If we don't send things upstream, then we just keep piling commits into Icebreaker, and it grows and grows and grows, and it kind of just turns into prodkernel again, right? Eventually we're not going to be able to keep up with upstream, and it's going to fail; it's going to turn back into the normal prodkernel routine. So we really need to leverage the fact that we're really close to upstream, and get our commits upstream, so that we can stay close to upstream. And if that doesn't happen, we're just going to fall behind into the same old habits. I'll turn it back over to Andrew.

So to take it away from just theory and things that would be cool to do, we'll talk a bit about how Icebreaker is actually going in reality, and some cool things we've found along the way while implementing and running this project. Currently we're on Icebreaker 5.13. If you think about it, we were on some many-years-old kernel, and now we're, I don't know, one or two stable releases behind upstream. That's pretty good; it's a lot closer than we've ever been. One cool thing we noticed, if you run the numbers, is that our tree has actually dropped thirty-something patches between when we started and where we are now. And you might think, oh Andrew, that's 30 divided by 9,000, not that big a number, but it's a start, and it's certainly not something that was going to happen without a project like this. So we hope to keep seeing this trend. We're actually working on getting to 5.14 now, and we've dropped whole feature branches in that process, because they were backports from upstream into our original kernel. And that's technical debt, right? You backport something, it's on a different kernel base, and it's not qualified with the upstream kernel. So in that sense we're actually reducing our technical debt, and hopefully it'll get easier and easier as we keep going and lessen the number of features in our Icebreaker tree.

Some other cool things: by virtue of being close to upstream, we've found some issues and sent fixes upstream, or cc'd them to the stable tree to be backported to 5.13. And that's something that wasn't really possible before with prodkernel; being further back in time, we never felt the same issues upstream was seeing. Now that we're closer to upstream, we do. Most of these were just build fixes, but someone else cared about them besides us, otherwise they wouldn't have been taken, so that's a plus.

Looking forward, we plan to catch upstream around 5.15 or 5.16. At that point, kind of a whole new world opens up, right? We'll be, in a sense, riding the wave of what upstream actually is, and we think it'll get a lot easier: we can relax the cadence at which we need to upgrade our tree. I think the number one issue with this project so far is that, like Dylan says, we have to place a lot of trust and emphasis on the maintainers of feature branches. And the number one thing you get pushback on is this: you take people from a model where they only have to rebase every two or so years, and all of a sudden you're coming back to them every three to four weeks saying, okay, can you help me fix this conflict with upstream? I feel like if anyone asks you to do something every single month when you're used to doing it every few years, you might not be too happy. But at least now, since we're getting closer to upstream, we can relax that cadence and conform to the actual upstream release cycle, so maybe we do it every six to eight weeks instead of playing catch-up every four weeks.

And then another interesting idea, which maybe we'll try out in the future, is testing our feature branches on top of release candidates. This way we can actually participate in the testing of Linus's release candidates. It's sort of a cool idea, right? The purpose of having these RCs is so people can send in fixes, and wouldn't it be great if we could use the engineers we have to test their features on top of these release candidates, as a mechanism to find bugs, send fixes upstream, and participate more in upstream.
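What that RC testing might look like, as a sketch; the rc tag and branch names here are placeholders:

```sh
# Merge a feature branch onto Linus's latest release candidate and retest.
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git \
    tag v5.16-rc1
git checkout -b feature/foo-5.16-rc1 v5.16-rc1
git merge feature/foo-5.15   # then build, boot, and run the feature's tests
```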
And then after that, we'll try to open up the scope of Icebreaker to allow for more upstream development. Right now, because we're still trying to catch up to upstream, it's not really the best vehicle for developing a new feature you plan to test on upstream. Ideally, if we were on the exact same stable release as upstream, it'd be a lot easier, because you could say, okay, I'll send you a patch series on top of 5.15. But if Icebreaker isn't at 5.15, it's still the same problem, right? You have to rebase one more time; it's a smaller delta, though, so hopefully that's easier. And I think once we get there, once we catch up to what upstream is currently on, it'll just be a good ride. We'll be able to have features sent upstream, and it'll be easier for us to apply ourselves, find bugs, and send fixes upstream. Yeah, we don't have an end slide, but that's it. If anyone has any questions, we'll be happy to take them and answer them as best we can.

I don't know a good way to convey who I picked; in the black shirt? So we're starting by doing both Icebreaker and prodkernel, and we're hoping to see if we can move away from that rebase. One thing is, while it might seem like double work, it does give the Icebreaker project time to figure everything out, right? It's new, it doesn't have all its processes figured out, so by still doing the rebase, it's not the end of the world if we run into a question that takes a while to figure out; it gives some breathing room.

So it's kind of nuanced what parity means, right? The patches we have in prodkernel can be broken down into things you need to run binaries that are compiled at Google with our special libraries, versus things that are, I guess, performance related. Parity in this sense is functional: it'll execute correctly, not that it'll have everything in prodkernel. Otherwise it's probably a bit of an intractable task, right? You're really deep in a hole; you have to figure out a way to stop digging it deeper.

Yeah, I think we do need to have a lot of platform support, but I think the benefit of the patches supporting Google hardware is that they don't conflict with upstream as much as changes to the core kernel do, and the core kernel changes are a lot more applicable to upstream. So you're right that there are a lot of patches that are going to end up sitting in Icebreaker, but our hope is that those will be pretty easy to upgrade each cycle. And that's a good question; I don't think I have a hard number for you. But I can also answer another point you raised, that things were invented at Google before there was a stable upstream API. I hope this doesn't come off as kicking the can down the road, but I think there's been an emphasis on, when we come across these things that were invented at Google and now there's some almost-equivalent API upstream,
we push folks in user space to use the upstream API instead, so we can deprecate the thing that was invented at Google or is Google-specific, as long as there's an equivalent upstream. Usually, if it's a general use case, there's hopefully something there, or we can propose something that's general enough for everyone but fulfills the needs at Google.

Can you define "cycle" more? We have an Icebreaker LTS, which stays at the most recent upstream LTS, and then we have an Icebreaker upstream, which is kind of a confusing-sounding name. We started at 5.10, that's our first LTS, and we're working on 5.14 now. So that's three versions? Three? Yeah, three. Math is hard.

I think every time we do it, it gets quicker and easier, right? All the people involved know what's coming, that they'll have to resolve conflicts and tests, and on the automation side there's more and more, so it's converging on something close to autopilot, except for resolving conflicts and debugging test failures. I wouldn't go that far; I think it's definitely a goal, right? But we're trying to nail down the process and make sure all our tests pass as a stepping stone. The LTS one is getting tested with, I guess, tests that would qualify us to go to production.

In the blue shirt? I think it says to repeat the question, so he asked: is prodkernel used everywhere, or is it something specific to Cloud? So it's used in the majority of cases, right? But you can think of exceptions: I'm using a Mac here, and our Linux workstations don't run prodkernel. But anything in production, basically. There's a little bit of deviation between Cloud and general production, but I think they're all versions of prodkernel. Oh. Oh, no. I think that's a completely different team. I don't think so. I think that's a completely different team. Yeah.

Right, yeah, we're trying to get to, I guess, a two-kernel world, right? Where one is for customers that need stability, and the other is so we can qualify upstream changes against our specific Google features. Cool. I think, oh, you first, then the hoodie.

All right, so the question was how we handle merge conflicts with upstream in the subsystems. At Google we have, I guess, a team for every subsystem, right? And this boils down to the person that initially brought the feature into Icebreaker; they kind of signed up for dealing with its conflicts with upstream. And usually, if someone wrote the feature, they have a pretty good idea of how it should mesh with upstream. So far we haven't run into any situation where a change can't be meshed together at all. I think the hardest case is when we have an upstream API and an internal API that do roughly the same thing, and you get to the point where they're almost at feature parity, but not completely. That's probably the only edge case I can think of where it's difficult to manage, and even then I've only seen it once or twice. Yeah, and I think the biggest impact on reducing the merge conflict resolution workload is good testing. Because even if you don't know that there was some weird stuff happening in the merge, maybe it appeared to merge cleanly but didn't.
There's a big peace of mind you get from having an automated test run on the whole feature that you can iterate on quickly, to see if things actually work the way you think they do. And the conflict thing isn't entirely bad, right? It acts as a pretty good forcing function. If you get annoyed at having to keep resolving these conflicts, it's kind of a hint: hey, you should make an effort to either upstream this or deprecate the feature in favor of something that is upstream. I think you had the next question.

There have been some discussions of it, but nothing concrete. It's something we eventually hope to have, right? I think there are varying opinions on how useful it actually would be, but it'd be a cool thing to try to figure out. It'd be useful if I could send my conflict resolution to Dylan for the next time he runs into it. Hopefully we can figure out a way that you never have to resolve the same conflict twice, but in the case that you do, that'd be something useful to share.

So we have a lot of tests in our prodkernel repository that currently are not upstreamable. And it is a big piece of technical debt: how do we deal with this, and how do we make it more upstreamable? I think the path that seems most attractive to us is to take the things we have inside Google, the shared libraries and the weird test harnesses that aren't upstreamable, and turn them into something more upstream-friendly by putting them into selftests, really contributing to selftests and making them better.

In the red hoodie? I think using Icebreaker to get internal patches upstream is one of the main goals; I don't think it's the other way around, bringing upstream closer to prodkernel. Yeah, really, we're trying to drag ourselves closer. We're trying to align ourselves more with the upstream community. I think it's the more pragmatic thing to do, right? Instead of being forked off from many... yeah, yeah, it's a numbers game, right? Any other questions? Cool, thanks for coming to our talk. I hope you learned something and enjoyed it.