Okay, welcome everyone to my Status of Embedded Linux talk. This is a talk that I have given on occasion. I apologize in advance, I have a lot of material to cover, so it's gonna be a little bit like this: I have a little under 70 slides and about 35 minutes to do it in, so you can calculate in your head how fast I'm gonna need to go. These are the major areas I'm gonna go over. The basic purpose of this talk is not to give you in-depth information about anything — maybe not even information about anything — but basically what I want to do is review the last year's worth of things that have been going on, mainly in the kernel community but also throughout the embedded industry. There are a bunch of links in the presentation, which will be uploaded to the Linux Foundation site. There's a version one up there, but it's different from this one, because that was a week ago and stuff keeps happening. Basically the idea is, if you see something interesting that you want to follow up on, hopefully you'll have some pointers, and at least knowledge of it. I've been working at embedded companies for a long time, and, sad to say, in the CE space we're always a couple of revs of the kernel behind, and sometimes we're not aware of some of the interesting stuff that's going on, so hopefully this will help you out.

So let's start with kernel versions. The pace of versions is consistent and very good; the kernel processes are working really well. I'm sure they'll talk about it more — they're having a kernel summit right now as we speak — and they keep improving the process. If you look at the history of the kernel releases, for a while there last year we were averaging about 70 days; that has actually shortened down to about 63 to 65 days. I missed Jonathan Corbet's talk earlier in the week, but he made a prediction — I think he said Halloween. I'm cynical; I'm gonna predict that it's gonna be on November 8th. I don't know — I should revise these slides when I get new information, but I'm kind of lazy that way. So I'll say this next one will be 68 days, closer to the 70-day cadence we've been having. All the stuff that's in 3.12 is already in there. One of the things I've done this year with this talk is I've switched it around: there's a whole bunch of information that I gathered up per kernel version, for each of these, 3.6 through 3.12, and I'm actually not even gonna cover it, which is a big departure. I've left the reference material at the end of the presentation, so when the slides are online you can go look at specific features in embedded that have happened.

I see people are still coming in, so why don't we do one iteration of this. There's not a whole lot of empty seats, but if everybody could slide — if you've got an empty seat between you and the middle aisle, if you could slide towards the middle aisle, or slide away from the edges, whatever an edge constitutes for you — so people can sit down as they come in. What? No, you're dead center, so you're fine. Oh, yeah — slide away from Jake Edge.

So, boot-up time. What's the status of boot-up time? Well, pretty much, if you put some elbow grease into it you can get the kernel booting in about one second, but it does still take a lot of work per product; there's a lot of tweaking you have to do. There are lots of resources available — a lot of presentations, a lot of material on this — so the information is out there, but you kind of do have to work on it yourself to get this done. A good recent presentation I found was this one that I have a link to on SlideShare. So there are lots of techniques, and unfortunately, due to the nature of embedded, a lot of times you have to go in and just manually apply these techniques yourself to your project. So the kernel's in pretty good shape, but user space, that's
a whole other story. Okay, so there's been a lot more focus on user space recently in terms of boot-up time. Ångström is using systemd — is Koen in here? Can I smack him? So, systemd, yuck. Okay, I know how to do rc scripts but I don't know how to do systemd. Anyway, that's the type of thing we're gonna see — these accelerated boot-up-time systems coming up in user space. Android is really bad, but there are people working on it, and there are commercially available systems for doing Android booting a little bit faster; most of them involve things like snapshot booting, taking a snapshot of the system. Jim Huang of 0xlab — I'm not sure if that's how you pronounce it — did some really interesting work and showed how to get an Android system booting in about 15 seconds, and he was doing some really cutting-edge stuff involving checkpoint and restart at the process level. So if you're interested in speeding up Android booting, I definitely recommend that you go look at his slides; he's got some great material there.

In terms of graphics: embedded covers a wide range of territory, and so we're seeing in the industry, you know, there are still lots of devices that have little LCD displays, but at the top end we're seeing much higher resolutions, and so a lot of interesting work around the GPUs, the buffer management involved with those, and movement away from the frame buffer. You've got this crazy stuff coming out of Google — RenderScript — which basically compiles, on the device, some of the GPU-specific material, to optimize for the specific device at runtime. But one of the big issues, because we're moving so much data around, is buffer management: we have these big buffers, and so you really have to be paying attention to how many copies of the data you're doing. Also, an area of graphics that is always a sore point for Linux is embedded GPUs. I don't know of anyone that's actually shipping commercial production — well, maybe you guys do — shipping with open source drivers for their GPU. There are, however — and this is something that's changed pretty recently, and it's pretty neat — lots of SoC GPU open source projects going on, and they seem to really like these anagrams: the Lima project is for the Mali GPU; Etnaviv is actually Vivante spelled backwards; Grate is for Tegra; Freedreno — that's not an acronym. So there's a lot of GPU work going on in open source, and that's actually a really good thing. You know, I thought maybe pigs were flying or something: NVIDIA is even helping out with the Nouveau driver, which is pretty cool. So there is a shake-up going on. It's surprising that even this late — or I don't know, I kind of think of it as late — in the evolution of embedded, we're seeing different chipsets, different GPUs jockeying for position. You see that Mali and Vivante just in the last year have really started to come on strong, and some big shifts in the GPU space. I really should have attributed this — I got it out of some article online — so, yeah, it's confidential, I can't reveal where I got that from. Actually I just made it up.

Right, so, in terms of file systems: there's still interesting work going on with file systems. The biggest news, I think, in the last couple years has been the movement away from raw NAND flash over to eMMC, so you get this block layer between you and the flash, and that has some interesting effects. It's well understood in the file system space that Linux file systems were not designed for block devices with essentially zero read latencies — there's no seek-head latency, there's no rotational latency — and so flash is capable of a lot more I/O operations per second, and that affects the design of file systems. So we're still in a process of tuning the existing Linux file systems for flash devices, and so one of
the things that we've done in the CE workgroup is we hired a company to go out and look at some of those tuning options — existing configuration options for the Linux kernel and for existing flash file systems — to help developers choose which file system might be most appropriate for them. The three that we looked at in our study were ext4, Btrfs, and F2FS, and then we measured the effects of different tuning options for those. There's a results document on the elinux wiki; if you're interested in the performance of your file system on eMMC, I highly recommend you go look at it. The executive summary of the entire document is: well, the tuning options depend on your workload. What a surprise. But you can look — they have all these different workloads they've run, and so on. I think the big news in the file system space is F2FS, which is a flash-friendly file system developed by Samsung. This was mainlined in Linux version 3.8, so a lot of us who are running on older kernels have not seen it yet or had access to it. They just added support for security attributes in the latest version of the kernel. It's log-structured — it's way too complicated to go into all the details here. There's a really excellent talk by the Samsung developers on this, both at ELCE last year and at ELC in the US.
And I heard that the Moto X is using this — I heard this from another Sony Mobile developer — and that it's getting really good results. I used to have a bullet point that said "I don't know how good it is, but I've heard that it's pretty good", but I don't have numbers. Anyway, that's something to look at: if you're doing mobile devices, you really should be looking at F2FS and seeing if it works for your workload.

And then I would be remiss if I did not mention the exFAT incident. I don't know if you're familiar with this, but it's kind of a cloak-and-dagger type of story. The exFAT file system is the file system that was standardized for SD cards, and so it's almost a requirement to support it if you're in the tablet or mobile phone space, or cameras, or any of those devices that take plug-in media. So there was some code that was released by a Russian developer, and it was liberated from Samsung — we don't know exactly how this came about, and we're not sure about the license. I think it had been shipped as a binary module by Samsung. It looks like, if you look at the source code, maybe some of the code was derived from the kernel, and so it should have been GPL. Samsung released the code as full GPL a couple of weeks later. If I were you, I'd be really careful about using that code, and I'll just leave it at that. There are ways to interface with the exFAT file system — I'll just tell you Sony's solution; I'm not recommending this to anyone, but our solution is to access it from user space, and we're doing it with non-GPL code, because you have to pay royalties if you work with this file system, and royalties and patents and GPL don't go together. So, enough said — actually, more than too much said.

So, memory management. Yeah, it's all confidential. So, the ION memory allocator is the new Android memory allocator to allow sharing buffers — well, I've got a slide on it. It allows sharing memory buffers between different subsystems in the kernel, and this is, again, to reduce copies. Different devices have different memory constraints — whether the memory can be cached, whether it needs to be contiguous, whether it needs to be DMA-able — so ION can select among different memories to fulfill these constraints, so that different subsystems can communicate and have the appropriate memory for them. So this is a pretty neat system. There are other things, like dma-buf or CMA, already in the kernel; ION has some neat unique features, but it also is Android-specific, it uses ARM-specific page accessors, and it'll probably have a difficult time getting mainlined. But I know there are people already looking at it and seeing if the ideas can come into those other subsystems, or finding ways to make it more generic, more suitable for mainline.

So, power management — and I'm already behind, okay. If you look at the evolution of power management, we started with some easy stuff: suspend/resume, stuff that kind of comes over from desktop and laptop. Then we started working on voltage and frequency scaling, longer sleeps with tick reduction, runtime device power management so we can flip on and off individual IP blocks from the kernel, and a big thing, race to sleep — try to get to sleep as fast as possible, which is kind of the usage model for mobile devices. Most people, except for teenagers, are not on their mobile devices actively 90% of the time, so most of the time our phones are asleep. So we've already done a lot of stuff with power management, and the new stuff tends to look kind of crazy. I'm gonna talk about these real quick: there's autosleep, power-aware scheduling including big.LITTLE scheduling, memory power management, and full tickless — and I apologize for going so fast, but I'm gonna go through these really quickly. Autosleep is basically wakelocks under a different name, and this tests Rafael's theory that you just need to rename something to get it into the kernel — and the answer is yes, that's what you need to do to get something into the kernel.

Power-aware scheduling: there's a lot of scheduler work, and some of this stuff is not mainline yet — the autosleep stuff that I just mentioned is, but for power-aware scheduling there were meetings on Tuesday about this, and a lot of it is still out of the kernel. The basic idea is to select processes and try to migrate them off of CPUs, so that you can put more CPUs in an idle state and conserve power. And that's kind of the whole thing about big.LITTLE. In big.LITTLE, the idea is you have some very high-powered processors that can do work fast, you have some low-powered processors that can do work very power-efficiently, and somehow you balance the workloads between those. This is kind of what I think big.LITTLE is like: it's a crazy scheme, and it's going to be hard to get it to work right, but if you can do it, it'll be kind of neat. Fast, efficient — I don't know, one of those two. So there's a whole bunch of stuff about big.LITTLE scheduling and how to achieve it; there's stuff with multi-cluster power scheduling. The actual big.LITTLE scheduling work is often called the in-kernel switcher, because it's switching the workload between the more capable processors and the lighter, lower-end processors. There was some really interesting work reported at LinuxCon Japan by some guys at Renesas. We're still waiting — someone raise their hand if they know whether big.LITTLE has actually shipped in anything and we have real-world results. I don't know of any — okay, so I'll take that as a... What? It has shipped? What kind of product was it in? Oh, I'm sorry, they were laughing next door — what was it in, sorry? Oh, the S4? Really? Okay — well, smokin', Samsung. Oh, it's not — okay, so it's in. So a lot of this stuff is kind of dribbling in, so it's pretty
interesting. The other thing in power that I'll get to really quickly is device PM — I mean, memory power management. This is kind of the same thing as with the processors, except instead of moving work around between processors, you're moving data around between memory regions, so that you can actually shut memory banks off. And I don't have time to talk more about that. Full tickless: this is another thing. In order to isolate CPUs, we have the NO_HZ option in the kernel already, but that's for when you're idle; this allows you to shut the tick off on some CPUs completely, even when you're active, by migrating processes off of them or by keeping them isolated from the rest of the system. And there's information on that.

Okay, so, system size. Some of the things that have been going on: volatile ranges — I don't know if you're aware what that is, but that's the ability for processes to hand memory back to the kernel and say, "hang on to this; if you need it for someone else you can destroy it, you can take it away from me, but if you don't need it for someone else, keep it around, and if I ask for it later, give it back to me." This is really good for stuff like browser caches, where you can have the performance associated with that caching, but if you get low on memory, it's a voluntary relinquishment of the memory to the kernel. So it's really good. There's also some similar work that Lexmark did, having a broker manage voluntary memory regions. A lot of people have been working on different systems for reducing some of the major components like the libc — some work on Bionic libc and eglibc, particularly in the area of configurability. I did this big project — this was at my former company, which was Sony; I'm now working for Sony; it's a long story, and if you've ever worked for a megacorporation you know how that works — but anyway, I did a bunch of research and looked at some existing research. The thing I'm most excited about in the near term is this set of patches called the link-time optimization patches. I was able to apply them on ARM and I got an immediate, completely free 380K reduction in kernel size, and that was without even putting much work into it. I did some other things like system call elimination and kernel command line argument elimination. The academic research on the kernel indicates that about 50% of every kernel is completely unexecuted code, and so there are these systems for trying to reclaim that — either to compress it, or to eliminate it through some other kind of really difficult heuristic mechanisms at the linker level. I've got a presentation on that if you want to look at it.

In terms of security, I'm going to just blow past this really quickly. The interesting thing in security, I think, is that we now see mainline Linux security systems being used in embedded products — that was not true just a couple of years ago. Smack has been adopted by Tizen; it's got a simplified rule set, and in the security space, "simplified" means 40,000 rules. But also, Android has adopted SELinux, and I always thought SELinux was going to be way too big to be adopted in the embedded space, but the NSA — bless their hearts, besides spying on people — rolled up their sleeves and reduced the rule set for SEAndroid and got it down to a 71K policy size. That is, like, amazing. So we can now actually use some of these what I would consider high-end security features — mandatory access control, that type of thing — in embedded, and that's pretty nice. There's also a really excellent talk by a guy named David Safford about how to secure low-end devices without adding additional cost: things like detecting firmware modification, preventing modification, doing signed updates, and all that stuff, and I'll let you look at the slides for that. And then one thing I saw at LinuxCon Japan that I'm really excited about is a thing called ktap. For years we were hoping that, in terms of tracing, SystemTap
would be available, but they just never seemed to get the cross-compilation stuff right. This ktap actually has an interpreter in the kernel, and so I think it's a pretty exciting new project to look at for future tracing stuff.

And what presentation would be complete without some kind of discussion about device tree? So, yeah, I don't know — a tree with yarn on it, I have no idea how that relates. But let me cut right to the chase, and I apologize to the device tree guys, I know there are some in the audience: I don't like device tree. Okay, so it supports a single zImage, which is really important, and it does some really good things. It separates the hardware — not "hardware configuration", I've learned that that's the wrong terminology — the hardware definition from the code, and so it pushes the code away from platform data structures. But it offends my embedded sensibilities, and I have to explain what I mean by that. I come from the old school where, man, you compiled the kernel for that hardware, dang it, and it was optimized for that. I think in the process of moving to device tree we're losing the ability to statically configure and highly optimize. Since I spent the last year or so working on how to statically optimize the kernel, with the stuff in device tree you lose all of that — at least that I've found — and so I'm actually trying to think of some ways I can reclaim that. It's also kind of a royal pain. It's a new requirement for implementing ARM board ports and drivers, and I found it a little bit complicated to use. There are numerous device tree presentations at this conference, and it is what it is — we're gonna have to work with it. And all of these complaints that I have listed on this slide are actually being actively worked on, so this is a bit of an unfair slide.

Let's see — so, thinking in terms of things to watch looking forward: we have those Android features, the volatile ranges and the ION memory allocator, that have to do with memory management. We'll see a lot more device tree churn; we're probably gonna see a schema analyzer, or a schema checker — "validator", I guess, is the right word — for device tree. We're gonna see some maturation in a lot of the documentation, and probably some more changes to some of the infrastructure for that. And power-aware scheduling. The other big thing looming on the horizon — it's been looming for years and years — is non-volatile main memory, and what do I mean by that? I mean persistent RAM, and it comes in various forms: there's phase-change RAM, there's MRAM. This stuff has been lurking around the edges for years and years. I actually saw a demo five years ago of a phone that had MRAM in it, so when you turned the phone off, the RAM held its contents. If this stuff ever makes it to a price point where it gets into embedded devices, it's gonna be really, really interesting. Linus had some interesting remarks about it last year at LinuxCon. He said it's not gonna change key kernel algorithms — those take too long to change. Probably what will happen, if this stuff becomes popular, is that the first place it'll show up is in the file systems. So that's just something to be watching, and it will be really interesting in terms of power management, what happens with this persistent RAM if it becomes adopted.

Okay, so with that, let's talk about the CE workgroup projects. This last year we had one that I really want to highlight, and that's the eMMC tuning guide — I already talked about that a little bit, and I already had a slide on this; I'm not sure why it's in there twice. We just had our open project proposals: we had about 18 proposals, we discussed them this week, and we selected eight projects to fund. I can't really announce them yet because we haven't gone through our final approval, but hopefully we'll select some projects this week to
finalize and fund. One of the projects that I think it's safe to say we approved was actually some funding for device tree documentation, so we're trying to roll up our sleeves and help with some of that stuff. Another major project that we're working on is called the Long Term Support Initiative (LTSI). It's a kernel that's based on the LTS — the community long-term stable kernel — but it's got a couple of extra things that the industry thinks would be good to integrate into the tree. There are some pretty rigid rules about what you can put in a long-term stable kernel from the community standpoint — pretty much only bug fixes — and in the LTSI tree there's a little bit more flexibility. It's really intended to give the embedded industry as a whole a common kernel to work on, one that's held stable for a little bit longer. The news here is that 3.4 has been available for a while; we held some workshops in Japan and talked about a testing mission — there was actually a presentation this morning on that — and a white paper was released recently talking about the value of it. LTSI is always based on the current LTS, and 3.10 was just announced by Greg Kroah-Hartman as the next long-term stable community release, so we'll be basing the next version of LTSI on that.

So, some other stuff — I have more slides, I'm going fast. Just in terms of tools, a couple of things. There's a core dumper that I saw at ELC that I thought was pretty interesting. I've done this type of thing in the past — doing crash dumps on embedded platforms is a bit of an art — and there was this new tool at ELC that talked about how to do a sparse core dump: you can't really do a full core dump, you don't have the memory for it. So if you're looking at crash dumps and the issues around that, this is a good presentation. Also a good presentation on debugging techniques by Kevin Dankwardt. In terms of testing frameworks, we have a lot of testing frameworks; the CE workgroup is looking at funding some more work around the LTSI kernel for testing. There was a good presentation — actually a birds-of-a-feather session — by Matt Porter, and we're always looking for input. If you want to provide input to us, get on the LTSI-dev mailing list or the celinux-dev mailing list; that's where we hold our conversations about these things.

Build systems: we have an embarrassment of riches. I've been working with embedded Linux for 20 years, okay — and I know some of you are probably shocked, because I look like I'm only 30, right? No? No. In the good old days — well, the bad old days, I guess — you used to have to compile GCC by hand on the command line, and it was a royal pain. Now we have all these tools available; it's like the golden age of embedded Linux, it really is. We've got the Yocto Project, which is this very big, high-end system with lots of documentation and lots of support; we've got Buildroot; we've got Android, and there's a lot people can be doing with Android even if they're not using the upper parts of the Android stack. So we've got an embarrassment of riches for build systems. In terms of distributions, we've got Tizen out there, and, well, I'm not sure where it's being positioned — it looks like it's maybe heading towards automotive, but I know Samsung is still interested in using it on smartphones as kind of a hedge against their Android bet. You can use Android in non-CE spaces, and Karim often talks about that. And Yocto — if I were to position some of these: for the Yocto Project, I don't know what they call the distro that's inside it, whether Poky is still kind of a build system — Poky, sorry, yeah — so Poky is kind of the new in-house distro that you — I was going to say mangle, that's
not the word — that you manipulate yourself, where you change stuff. And Ångström is actually a really nice packaged distro: there's a package feed, you can get new binaries for it. It's kind of like the embedded desktop OS — it's really easy, you can get new binary packages — and it's very common on development boards; the BeagleBone comes with Ångström, a lot of boards do. So we've got a variety of distributions.

In terms of resources, we have the elinux wiki. A lot of the information on there is stale — that's kind of the nature of wikis, well, of this wiki — but there's lots of information out there, and you often see references in other people's presentations to specific pages there, either about power management, or especially boot-up time, or things like that. There's a project, if you want to get involved: the video transcription project. We try to have references to the last eight years' worth of ELC talks — we've got as many presentations as we could gather up there, including links to a lot of the videos that have been done by Free Electrons and other folks — and we'd really like to try and transcribe some of those to make that material very easily available.

Miscellaneous: I want to talk about kernel community civility — that was kind of an issue that came up — embedded contribution status, and some hardware. So there were some complaints over the last couple of months about whether or not the kernel developer community was civil enough. There was a lot of discussion; in the end, I think everybody agrees it's a good idea to be as civil as you can. Some people are saying, well, you kind of need to be harsh: you don't want people to get the wrong idea, you don't want to be vague about your rejection, and sometimes rejection is needed to help people do the right thing. But it's being discussed at the kernel summit. Overall, I think the trend we're seeing is that the kernel mailing list is a lot more civil than it used to be — trust
contributing your very first patch to linux highly recommend that i think it would still be good for us to continue to publish best practices for companies um and there is still this problem with what i call version gap where you know there's a whole lot of companies uh my company included we're shipping 3.4 on most of our cell phones and uh it'd be a lot nicer if we could be shipping 312 well we won't you know we will we'll never be right at the top of tree when we were releasing products because there's a qa cycle and all that but it should be nice to be closer to the top of tree um and so that's always something uh so maybe device tree will give us the stable api we've always wanted ha ha um okay and then this this is what i wanted to kind of leave time for so i have this new thing and this is where i i'm going to turn it into a little bit of a birds of a feather session which is the best of so what i want to do is i want to hear from you uh at least in these two categories that i've chosen what do you think is the smallest linux system that's an actual shipping product so i found one and then also the fastest booting so i found a product and i didn't even try very hard so this is me being lazy the tp link mr 3020 is a wi-fi hotspot hot spot it ships with a four meg flash ship it's got 128k u-boot one megabyte partition for the kernel and a 2.8 meg root file system it ships in 32 meg of dram and i know the 32 meg that's like oh come on tim you didn't even try hard there's got to be a system out there that's running in eight meg of ram or four meg of ram so does anybody know of anything that's actually smaller than this okay what do you got tell me about it oh it's the smaller version of that what what's see i i didn't even do research well okay what do you got back there okay memota how do you spell it okay okay what was it again efm 32 if just give me whatever is googleable okay so i'm gonna look those up and i want to continue to kind of improve this and uh 
I want to kind of keep a little, not a contest per se, but just let people know: what's the smallest? How are we doing on the smallest?

Okay, so the fastest boot. These are not shipping products, and I think these may have been kind of the same effort: you can boot a BeagleBoard in 630 milliseconds. That's pretty impressive. When I wrote this I didn't know whether that was going all the way; I know it's going all the way to user space, but I don't know if it's getting an actual video application up and running. MontaVista, a couple of years ago, touted a dashboard boot in less than a second. Does anyone know if that made it into a product, if cars have this sub-one-second boot? Okay, I need to keep looking at that. Does anybody know something really fast? You believe it was in the Chevy Volt? Is the Chevy Volt running MontaVista? Oh, that's pretty cool; I'll check that out. A Volvo, with an old map? How come it's all these cars? I guess the cars are okay. Okay, I'll look at that; that's cool. So we have systems out there booting in under a second; I don't know what Android's problem is.

Okay, so resources. This is where I get my material from, and with LWN.net I am just totally ripping them off, so if you are not a subscriber to LWN.net, please subscribe. I plug them every time; it's a great resource, and even if you don't need the information, you should throw a couple of bucks their way. In terms of the kernel releases, KernelNewbies always does a really good page in addition to the LWN.net pages. And then the slides from this and all previous ELCs and ELC Europes are available on the eLinux wiki, and there's a ton of information there. If you find yourself wanting to find out about some topic, you really should go back to those slides. I don't know how googleable it is; I don't know if Google picks it up a lot of the time in searches, but it's worth scanning through and
finding the slides. The CE Workgroup: a lot of the discussions we have, like about the projects that were proposed, happen on the celinux-dev mailing list. And the LinuxCon Japan slides from just a couple of months ago are located there as well.

Overall, the status of the industry is very healthy. A very conservative estimate is that over 1.5 billion devices have shipped with embedded Linux, and that is absolutely a conservative estimate. If you look just at the number of Android phones, it's above 900 million, and if you add all the TVs, all the digital cameras, all the routers, we're well over that; it's just really hard to get actual numbers on those individual categories. So we're still going strong. We used to joke about world domination; that used to be the joke. We don't joke anymore, because it's kind of rude once you've done it.

So that's all I've got. Thanks for listening; we've got time for a couple of questions. Okay, are there any questions? I left time for questions, and I'll feel really bad if nobody asks anything, because I went so fast. Oh, there is one up here; we could pass it back.

"So, you said the processes of the Linux kernel are good and healthy, and I agree. Yet, to recap my talk earlier, I think there is something to be aware of. It's totally great that a lot of companies are now contributing to the kernel, but the rate at which we get new code and patches does not scale with the amount of review and maintenance we have. Hearing that the industry is healthy is nice to hear, and I think the next step would be to assure the quality of the Linux kernel through maintenance, so that it's no longer something a developer does as a side project next to his work, but is recognized as an independent job, with some money put into it. That would be really good for keeping the great quality of Linux we want."

Yeah, no, I totally agree
with that, and that's actually not a new problem. It has always been hard to get the amount of review you'd like on things, and in my opinion it's especially difficult right now because a lot of the SoC vendors are relatively new at contributing. My experience at Sony is that the product engineers are on a treadmill; they're on a deadline, and they're usually not the ones the company is going to have work on open source. There was a really great talk by Andrew Morton a couple of years ago about how to structure your kernel development team so that you've got some spare resources off to the side, people who are not on the treadmill, who can contribute to the mainline effort. I agree with you; I think it would be really good for companies to do more of that, to make sure they have dedicated people helping out on projects and on review and that type of thing. Right now, especially with all these contributions we're seeing from SoC vendors, it's a real big crunch, and it's a significant problem. So I don't have an answer, but, you know, I talked with you. There you go.

"Yeah, that study you did on the flash file systems is interesting." Yeah? Sure, okay. "So, the study you did on the flash file systems is interesting, and I was wondering if you ever plan on doing one which would actually measure the wear leveling of those eMMCs." I'm trying to remember. There was a robustness component, because I remember reading it, and they said, "Well, we're not going to test this; it's too expensive," or something like that. Yeah, with the wear leveling, you'd have to run them for a long time to see, because they're total black boxes. Absolutely. You have to run them until they fail, and that's the whole point. So we don't currently have plans to do a follow-up on that study. All right. Okay, so I'll move for
two. Okay, any other questions? Okay, there's one in the back.

"With regards to test systems and frameworks, et cetera, I understand there's a new kernel test framework that Intel has invested in, that runs tests on each kernel commit: it builds it, runs it as it builds, and provides detailed output on each commit. Is that an option for discussion, to find out optimizations that could be done within the kernel?" Yes, there is something, and it was discussed at the last kernel summit. Does anybody here remember? I don't remember that much about it. There is something that's running, I think, on every kernel commit, or at least on the stable ones, that can build systems. "It's called zero-day." Okay. Do you know if it is doing ARM in addition to Intel hardware? "No, it's not." Okay. Oh, maybe we should get this mic over; I'll repeat. So there's something called zero-day, and it does do builds on multiple trees, multiple configurations, I'm assuming; I think that's what I heard at the last summit. "Yeah, it does multiple different defconfigs for multiple different architectures, but as far as I know it's only booting on Intel hardware. It's extremely fast, though. Sometimes, if you're a maintainer, you get a report that you broke something before you even get your pull request out, which is really nice. That's run by folks at Intel." So what about Linaro? I know Linaro has a test effort; does anyone qualify to talk about the status of that? "Being that I work for Linaro, maybe I shouldn't talk about it. Kevin's done some work, and it's working really well, and he's going to do some more for ARM. Linaro has a whole test framework that currently has been focused mainly on the Linaro releases, which hasn't been very useful, actually not useful at all, for upstream maintainers and so on. The goal is that this LAVA thing you mentioned is going to get to a point more similar to what
zero-day does, with automatic bisecting and all these types of things. Because one of the problems with zero-day right now is that Intel has basically said it's going to remain closed; you can get output from it, but you can't actually use it. So anyway, Linaro is doing something along similar lines. Separate from Linaro, Olof and myself, as the arm-soc maintainers, have been doing basically automatic build and boot testing for a pile of different ARM platforms. We're doing that for mainline, for -next, for arm-soc, and a couple of other trees right now. So between the two of us we have, I don't know, 20 or 30 different ARM platforms that are getting built and booted for all those trees whenever there's a new commit, or whenever a new branch comes out for them."

Okay. So, from the CE Workgroup's perspective, we're just in the very beginning phases of looking at doing some automated testing, continuous-integration testing, for the LTSI kernel. I don't actually know yet what types of tests we're going to be doing, in terms of whether they're performance, regression, or functionality tests, but there is a high level of interest within the work group, among the member companies, to do some stuff there, including making the testing infrastructure we develop available. And I don't think we've decided what we're building it on yet; it may be based on LAVA. "The goal for Linaro is that LAVA is completely open, and the goal for LAVA is to be that framework. So the stuff we're doing as arm-soc maintainers is very much short-term, while LAVA gets into gear and gets to a scalability point where it can do all these things in a much more open way."

Okay. Are there any other questions? Okay. What? Okay, I'll put my big.LITTLE, that's supposed to be duct tape, by the way, on there. Let me see, where is it? Somewhere... did I miss it?
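The per-branch build-and-boot matrix described in that exchange can be sketched roughly as below. To be clear, everything here is a placeholder, the tree names, the defconfigs, and the stubbed build/boot commands, and this is not the actual zero-day or arm-soc tooling, just the shape of the loop:

```shell
#!/bin/sh
# Rough sketch of an automatic build/boot test matrix in the spirit of
# what the arm-soc maintainers described. A real setup would invoke
# `make` and boot actual boards (or QEMU) where the stubs are.
run_matrix() {
    build_cmd=$1    # command run as: $build_cmd <tree> <config>
    boot_cmd=$2     # command run as: $boot_cmd <tree> <config>
    for tree in mainline next arm-soc; do              # hypothetical trees
        for config in multi_v7_defconfig omap2plus_defconfig; do
            if $build_cmd "$tree" "$config" && $boot_cmd "$tree" "$config"; then
                echo "PASS $tree $config"
            else
                echo "FAIL $tree $config"
            fi
        done
    done
}

# Example with stubbed build/boot steps that always succeed:
run_matrix true true   # prints PASS for all six tree/config pairs
```

Hooking the same loop to a git post-receive trigger or a cron job gives the "test every new commit or branch" behavior mentioned above; automatic bisecting would then wrap `git bisect run` around the failing tree/config pair.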
Did I go past it? Tracing... oh wait, let me see. Is it before or after? Okay, it should be under PM. Oh, okay, memory. There it is. So, yeah, and I'm not casting aspersions on any brand of cars or anything like that. Yeah, that would be the in-kernel switcher metaphor? Well, it would... yeah, I'll have to think about that. Okay, thank you very much.