A bit of history as to where the project started, to give you a little perspective on where we started and where we are today in the grand scheme of things. This project started on November 11, 2007. The main motivation behind the project was to help us backport mac80211 and ath5k specifically, and then we started adding more drivers. We also saw vendors starting to embrace different strategies for backports in the 802.11 ecosystem. For a period of time that actually led to some forks of the mac80211 module, and we in the community felt that was a fairly big issue for us, because we didn't get patches back. So we decided to start adding more drivers as a compromise, and see if we could persuade vendors to actually use backports as part of their distribution mechanism or productization. So we started growing with more drivers: we actually started with ath5k, some Intel drivers, USB drivers, and mac80211 drivers as well. Hauke started participating in the project really early on as well. I had tried killing the project around September 2008, given that there was a large infrastructure change in networking, specifically the multiqueue stuff. I pretty much gave up completely on it, and I think Johannes had also given up on backporting that stuff. But because it was at that point still pretty heavily used by a lot of folks in wireless, I was asked quite repeatedly by quite a few people to bring the project back up again. Based on those petitions I brought it back to life, with the compromise of forking it off and not providing backports beyond 2.6.27. It just turns out that I was able to backport that stuff eventually, but it was really difficult — quite a pain, and the code was pretty convoluted. So it was renamed; well, actually, at that point we started backporting, I think, most of our new 802.11 drivers.
We had a large bump in contributions to the 802.11 subsystem, and vendors started participating, to the point that now I think every vendor is contributing to the 802.11 subsystem. That's just the way things are now. So I'm going to recap the general strategy and architecture of how we do backports. You may already be familiar with this, but I'll recap it anyway. This is what backporting typically looks like when you're at a company doing general backport development on device drivers or even kernel components: you #ifdef a lot of stuff and you have code for specific kernel revisions. This is obviously complex to debug, very complex to read, and a nightmare to maintain as well. So the strategy we devised was essentially to take the code that you would be #ifdef'ing, stuff it into a module or a header file, add a wrapper, and thereby reduce the number of lines of code that you're actually changing. So the previous change for one driver is reduced to a one-line change for two drivers in this example, and you obviously provide a nice clear commit log message. Now, the future of backports looks like this, and I will say that this is actually just an example of one type of collateral evolution. I'm curious here, and I want to keep this more interactive; I want feedback from you. I'm hoping most of you are developers, and if not I'd like to get a sense. First of all, how many of you are not developers? Alright, cool. How about: how many of you have not used SmPL? Okay. So SmPL, fortunately, is pretty broadly used at this point. Julia had started the project as a research effort quite a while ago, as a way to help keep up with the rapid evolution of the Linux kernel.
She essentially worked on Coccinelle, which is an engine that allows you to express, in a grammar, how you want a change to be made in the Linux kernel. The grammar is essentially the Semantic Patch Language, SmPL, and this is what it looks like. You basically condense two changes down into this sort of language, and you can then apply that. Now, this is not yet integrated, and I've been meaning to integrate it for a long time, but it's just taken quite a bit of work and I've been pretty busy. I will say, though, that I do seriously plan on working on this eventually, and this is the way I envision backports unfolding; but it's just one of the types of collateral evolutions that we have to address, and collateral evolutions are just one of the types of things that you need to backport. Once you get into the specifics of the architecture, of how you actually need to backport, you get a better idea of what constraints you have. So how fast is it growing now? This is a picture since we started the fork of compat-wireless to generalize the backports for different drivers. As you can see there's been quite a significant bump recently; it's pretty dramatic. I actually skipped quite a few kernels because I haven't had time to update the stats for them, but it's quite significant. The way to read this graph: the dark gray area at the bottom is the lines of code that we're pulling from the Linux kernel directly — code we take from the kernel as-is. The red stuff, which you can barely see, consists of the changes — the minus and plus lines. The green stuff is the code within the compat module, or stuff that we have in our own header files, for example.
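For readers who haven't seen SmPL, a classic illustrative semantic patch (a standard example from the Coccinelle documentation, not necessarily the one shown on the slide) rewrites every direct private-data access in network drivers to use the `netdev_priv()` helper:

```
@@
struct net_device *dev;
@@
- dev->priv
+ netdev_priv(dev)
```

The metavariable declaration between the `@@` markers says the rule applies to any `struct net_device` pointer, and the `-`/`+` lines describe the transformation abstractly, so one rule updates every driver at once.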
So as you can see the strategy is very different, and I argued a long time ago — around this time frame right here — that I believed the green stuff, the compat module, could actually be kept small, and that by implementing backport solutions there you can backport more subsystems without increasing it significantly. So as a proof of concept — well, it's not only a proof of concept at this point, it's a decent way to do backports. This is further proof that by generalizing the backports into a module like compat, and putting stuff in header files, you really don't have a large overhead. Now, there are some problems with this in scaling, and I'm going to get into that in a bit. So how and why is it growing? Well, part of it really is these conferences. This is why we're here: we come and talk about the stuff we're working on and the problems we're facing, and we try to look for solutions. I started socializing it first at Linux Plumbers in 2011 in Santa Rosa. That's where I met Julia, and I had my own concerns over SmPL a long time ago. Julia at that point explained to me that developers don't really need to know SmPL to write SmPL, given that she had researchers working on inferring SmPL: provided you give it a set of patches, it can extract the SmPL from them. I actually tested that and it seemed to work. It's still rough around the edges, but it's pretty good stuff. Anyway, that gave me hope that there are larger architectural considerations we can embrace in backports that allow us to grow even faster, and it's why I kept working on this project — although I never really wanted to, in a way. But now I personally see a lot of benefits in it for evolving the Linux kernel.
Now, I also mentored a student under the Google Summer of Code program, under the Linux Foundation. He was from Turkey and turned out to be a great student. He ended up backporting the DRM/GPU drivers, which a while ago I had considered almost impossible. In fact I think Alan Cox believed this was likely not going to be possible, but fortunately it was. I will say this, though: the first backport of the GPU drivers actually ended up working fine, but there have been some huge changes in the kernel on the GPU side, and right now it's not really working; I'll get into why in a bit. Then I ended up socializing the project further at the Linux Collaboration Summit in 2012. There I met quite a few folks who were already doing quite a bit of backports, and I learned that the Linux Foundation had a Linux backports working group. They had a series of efforts, and we decided to combine efforts and work together. That gave the project quite a bit of a push. The Linux Foundation provided a build box. That's part of the issue we had: as you start backporting, you need to build across a series of kernels, and it takes quite a bit of work to do all that. So we ended up writing ckmake, which is a cross-kernel compilation utility. Back in the day we basically just did the backports but never did much testing, other than assuming it works, or doing a runtime test right before release, at least against my target kernel. ckmake ensures that prior to giving a tarball out to the community, we're actually running a compilation across all the series of kernels that we support. I'll get into that in a bit too. The other change that was made was the move of the project to kernel.org, so you can actually just go to kernel.org and download it.
Specifically, just go to backports.wiki.kernel.org and you'll see the page, and you'll see the release page for it as well. Backporting what you need through the respective upstream header file allows you to take the code exactly as it is upstream and not make modifications, because you're just using the same includes — you're just providing them for older kernels. So again, the strategy here is to not modify the upstream drivers as much as possible, keep them as pristine as possible, and obviously get the developers to work upstream. Other recent changes: apart from GPU, we got more drivers contributed by Intel, and we have the low-rate wireless personal area network (802.15.4) drivers contributed. Then, as a proof of concept, I decided to go ahead and backport the media and regulator drivers. I think that will be the last time I do a proof of concept like that, because it was quite a bit of work and quite a bunch of drivers. The proof of concept was pretty worthwhile, though, given that the regulator component is an internal feature that you can't really backport — you can't backport it because it's not modular; the kernel doesn't make it modular. So that got me looking into whether you can backport something that's internal, either as a module or maybe even targeting internal integration. I'll get into that in a bit. So what do things look like? This is the array of subsystems that we have backported at this point. We have about three engaged core developers: Johannes, myself and Hauke. I think we can rely on one another for general ideas and architectural considerations, or even if one of us goes on vacation. It has happened that Hauke has made releases too. We're all busy, though, and this is kind of like work that we do on the side — we think of it that way.
We get great random contributions — I'm quite impressed by the number of random contributions that we get, and the number of contributions that are quite significant. It is used broadly, and at this point quite a lot of Linux distributions use it. That's part of the reason why, at the Linux Foundation Collaboration Summit, I was asked to generalize the project and streamline the effort a bit. Again, there was a large amount of code churn in the latest change: it was renamed again, so it was compat-wireless, then compat-drivers, and now backports. That's it — hopefully there will be no further renames required. We have over 830 drivers backported. Again, if you have any questions just go ahead and ask. I'd like this to be interactive, and I want to get more feedback; likely I'll get that feedback in the next few sections. This is what running ckmake looks like on the different array of boxes that I have had. This is what it looked like when I was running it on my laptop back in the day. This is what it looked like when I got fed up and ended up buying and building my own box: it got compilation down to 33 minutes. After HP's, SUSE's and the Linux Foundation's collaboration to get us a build box, this is what it looks like now. We have a huge build box with tons of memory — enough that we're running the entire build test completely in memory. So we have the entire series of maybe 30 kernels or so all in RAM, and apart from that, all the source code that you actually need. I think ccache is even running in memory, at least for me, but at this point I think that's kind of redundant and I've been thinking about that. So at this point everything's running completely in memory. It takes about 23 minutes now, but I'm going to get into a bit of the details here.
This is quite cumbersome, though, and although we can scale — we can throw more machines at this to reduce the amount of time it takes to backport — we have to start thinking about how and why we want to scale. So again, this is a series of tests; this is what the ckmake output looked like. This is actually htop output — it's kind of like top. So this is ckmake running. It basically just uses ncurses, it's multi-threaded, and it tries to optimize the usage of your memory and CPU to do cross-kernel compilation. What it basically does is start a thread and go compile a kernel with make -j set to whatever we have right now. I forget if we ever optimized that; I think we ended up changing it to some number last time, I don't know. So basically it's a thread per kernel, and even per kernel we're throwing out threads, all running in parallel. And again, look at the memory usage here: it's using the server pretty significantly, and since there are about three of us core developers working on this, we have enough memory for each developer's entire work tree to run in memory independently — we don't share the memory. So I run these tests, and I ask the other developers to run these tests as well prior to submitting patches. If someone sends me a patch, I'll eventually have to run these tests too, prior to even making a tarball release to the community. But now let's get into a bit of the technical stuff: how do we scale, how do we make this more efficient? That's a lot of kernels — about 31 right now. Right now we're up to 3.11, right? But if you go to kernel.org, this is the list of kernels that you get, and I'm not sure if you know this, but we obviously don't support every single kernel from 2.6.32 up to 3.11. There are gaps in that, right?
So let me make that clear: there are gaps in the list of kernels that we have, and we don't support those anymore; there's no one maintaining them. What's that? 2.6.24? Oh yeah, correct. So that's actually the point. The thing is that we're testing, as an example here — because there are more now — 31 kernels. Kernel.org actually only lists a few kernels. If we reduce this array of kernels, we simplify and optimize the time we spend on backporting. Why the hell are we doing this? Well, it's really the embedded market. The embedded market has a slew of OEMs, vendors, and random distributors shipping anything on random kernels. They don't really have a good strategy to upgrade; sometimes they can't upgrade because they don't have enough resources. Random reasoning. So the question I'd like to ask you is: should we use backports as a carrot to persuade vendors and random folks out there in the industry to work on the same kernels listed on kernel.org, and adopt a policy of only working on and supporting the kernels listed on kernel.org? Should that be a policy that we embrace? The issue, of course, as I see it, is that if we do that we lose the little carrot that we throw out there. Let me get into a bit of stuff, though. Before we get into whether or not we should reduce the array of kernels that we support, let me go over the project goals, just so we're clear. The project intends to be up to date daily with linux-next. There's a series of issues that I'm going to get into that have prevented me from actually doing this on a regular basis. We also want to help avoid excuses for not working upstream in the kernel. That's pretty much why the project got started back in the day, during the ath5k days, when we started getting that merged into the kernel and started working on improving the 802.11 subsystem. We want folks to work upstream. We don't want people to branch off and do random stuff.
By doing the backports — and doing them automatically for folks — we can tell everyone, companies included, that they can just go and hack upstream. Their engineers should not be working off in some random branch; they should just be working upstream, and the deliverables are provided through backports, as an example, or you just go and download the latest kernel. We also want to backport to random kernel releases. Obviously there's a series of kernels that are supported by random vendors in the market; that was originally one of the project objectives. Of course, we can renegotiate that now, in hindsight of how the project is growing and the architectural considerations that we have now. Again, the carrot here is that we'll do the backport for you. At this point the infrastructure is decent enough that backporting a network driver — even a random non-Ethernet driver — consists of a one-line change. A one-line change to backport a network driver to a series of random kernels. I think that's pretty great. That's the carrot. We want to get people to work upstream. Contrary to the Linux kernel — or at least to some kernel developers — we're making the intent clear that we do not want proprietary drivers to make use of this infrastructure. We do this through technical means. We make it clear, first of all, on the wiki. Another way is ensuring that we use EXPORT_SYMBOL_GPL when we take code from the Linux kernel and backport it for older kernels. Just to be clear: we don't want proprietary drivers. If you're a fan of not promoting proprietary drivers, use the infrastructure and get your company, or whoever, to use it. Of course: world domination. How do we grow? Obviously, we can socialize the project more — this is why I'm here. But should we grow? The reason I ask is that it's getting really, really big. It's complex. There are technologies like SmPL that I believe will allow us to grow further.
It's no different from the Linux kernel and why SmPL was born: how do we grow the Linux kernel faster, how do we get developers to contribute to the kernel faster, and how do we work more intelligently? I think it's reasonable to take a position and say that we should only support the kernels listed on kernel.org. Maybe the benefit of saying that we support every kernel can at this point be thrown out the window, in favor of pursuing a light, tight architecture where we know the kernels we target are actually maintained. Question for you: what do you think? Any thoughts on this? Anyone? LTSI is supported? We'll gladly support it. Right. Do you think it's possible for this strategy to be socialized, maybe at the Embedded Linux Conference — say, hey, this is where we're going, does anyone have concerns — and at least wait until then, or should we just go ahead and do it? It sounds like LTSI is helping the embedded market, then. I see. Do you think there's enough education already in the embedded market to accept that as a strategy — only supporting a stable kernel? I'm with you then. I'm for it at this point, just because I don't have enough time, and I don't think other folks have enough time either, and I think we need to start making some compromises, while at the same time keeping the project goals in mind and seeing if we need to restructure them. It used to be that we obviously needed to support all these random kernels as a large carrot — yeah, we'll support anything, whatever — but it was a lot easier back in the day when we only had a few drivers. Now we have a huge ecosystem. It sounds like there's positive feedback toward that general direction, if we do take it. True. Again, I've had enough of proof of concepts on the project, at least. I don't think I'll be adding new stuff unless I know it's for a specific reason. I will mention in a bit why I actually worked on the media and regulator stuff, though.
The GPU drivers are an example of an issue — that was the Google Summer of Code project. It was great, it worked; it was really awesome to see that actually go into backports and at least get working on my system. But obviously now it doesn't work, and part of the issue is that it was a student who worked on this. He did a great job, but he's obviously busy with other stuff, finishing his schoolwork, and I can't take on the onus of going ahead and fixing all of the issues there. I mean, I've actually done a bit of the work now to backport some of the GPU requirements even further, but it is quite a bit of work. So perhaps one of the things we should do is not add new subsystems unless we have someone who's really interested, or a vendor who is interested and is going to back it up; I think that's probably reasonable. The only thing that hurts, at least for me, is to see GPU support removed — quite a bit of work went into that. It's not trivial, so it'd be great if anyone out there, one of you, might be interested in taking over the GPU drivers. Yes? Yes, absolutely. No, no, there's nothing automatic. I mean, for the wireless subsystem for example, we just expect that the user would know, and they would go and upgrade anything if they want new features, but we're not supposed to break backwards compatibility, and I don't think the X drivers should either, technically. That would be kind of weird; they shouldn't. We should be able to run newer-kernel-version GPU drivers with older user space. Yeah, no, I agree. I agree. So this is just an example of something that we did: we accepted it into the project, we took it as our own and said great, but obviously we don't have a maintainer for it. So, lesson learned: should we not accept stuff into backports unless it's actually properly maintained? Probably.
If we want to scale, then I think that's reasonable. So even if it hurts to lose all the GPU stuff, maybe we just have to kick it out. So again, if there are no takers, it'll be removed. So, the future for backports. One of the biggest issues is that we have module signing being integrated by Linux distributions. I know Fedora was one of those eager-beaver distributions that really wanted to embrace module signing. This essentially means that backports cannot be used on Fedora unless you actually integrate into the kernel somehow. Module namespaces: part of the issue with backporting is that you end up assuming that a Linux distribution won't try to backport something on their own as well, and that can conflict with some of the names that you come up with. So what we ended up doing is prefixing most, if not all, of our exported symbols with a backport prefix — LINUX_BACKPORT or backport_something, I think; I forget exactly what we have. It's not hard, but it's a pain: you just need to do a #define of the symbol right after you put it into the header file. Andi Kleen had worked on introducing module namespaces in 2007, work that extended modules to have a namespace definition so that you can use symbols within a specific scope, and Rusty actually rejected it. That would have worked well for backports, and I think it's something we should consider if we want to avoid the whole prefixing of our symbols in backports. I still haven't looked at addressing this; it would be quite a bit of work, and again, it's not that easy — careful architectural considerations need to be worked out before that gets introduced.
The other thing is, obviously, that we could just take that work and resubmit it upstream, with the intent of stating that it is specifically for backports, and perhaps also addressing the module signing problem. We do have make menuconfig, but we don't have make localmodconfig. If you haven't used that, it's a neat way to build a new kernel: you basically get kconfig to look through the modules that are loaded, then look at your kconfig and build a kernel configuration that matches what your kernel should look like — enabling everything you have on your kernel. It's a pretty neat feature; I highly recommend you use it. But we don't have that for backports. It's a bit more complex, given the array of spaces you have to look into, and figuring out what is important and what's not has to be done for every kernel that we support. I think it's probably a bit more reasonable if we only support the kernels listed on kernel.org, but that's another thing we need to work on. Non-modular kernel support: that's actually why I ended up working on the media and regulator support. By this I mean that the idea would be that you provide the backports infrastructure, you give it a kernel from the future and a kernel from the past, and you basically take stuff and throw it into that kernel, so that you can build it in-kernel. The way I think this can be accomplished is by modifying the init routine of the kernel, providing our own backport init, and stuffing into that backport init — in the compat module — whatever it is that we need to backport. For example, a full replacement of the regulators. But obviously that means you need namespaces, because you want your newly backported driver stacks to use the regulator code within your namespace.
The other thing that you would have to consider as well: you'd want to disable the internal component then. You probably should. And how do we do that? Are we going to do that upstream in the kernel? Or what do we do? And obviously, SmPL integration. Now, I do plan on working on SmPL integration, and on trying to get internal support tested. That's something that I do commit to doing, and it will take quite a bit. I hope to start that in October; we'll see how that goes. Any questions? I'm sorry? Is the firmware backported with the driver? So, the firmware will have to come from the latest: you basically have to use the latest firmware to run it on an older system. So what you should do, if you're using a backported driver, is use the latest firmware as you would for the latest kernel. But there's no requirement — is there really a requirement for backporting firmware? Yeah, that's what I'd recommend, unless there are some issues with that; I can't envision something right now. The latest firmware? Well, I know Canonical does actually embrace backports, and they do provide it for 802.11 and Ethernet drivers. Anyone from Canonical here? Okay. Are you from Canonical? So I know Canonical does embrace backports; I'm just not sure how it's used. I see. Okay, okay. I see. Yeah, I think — well, from a distribution perspective, I could envision perhaps making at least a package dependency upon a newer version of it, but the firmware? Okay, I see. Yeah, I think that might work, but again, it would just be more work on the firmware side. Ultimately, the way I envision this is for a user to say: my system doesn't really work well, I want to try to integrate something that's backported, and I basically get this nice UI that says, go ahead and take the latest and greatest of this, this and that, take it in and see if it works. If not, then revert.
But of course, this whole thing would need to be managed really well, and I think we need to reduce the array of kernels and a lot of other stuff — it requires a bit more investment, is what I'm saying. Any other questions, folks? Any interest? Any wish-list items? Any complaints? Anything you want to talk about regarding backports? I'm sorry, can you say that again? Alright, well, thank you.