On your phone: turn off the sounds on your phone. You probably brought like 14 other things attached to the Wi-Fi, which is why I'm on 3G right now. Thanks so much. You should probably turn those on silent as well. This will be video recorded; I can't say videotaped anymore, even though that's how I learned how to do things. This beard is gray. We only have this one working microphone, because it's an all-volunteer event; please donate. So during the Q&A session you'll have to raise your hand and speak clearly without the assistance of modern technology. The speaker will repeat the question for the recording, for all the streaming people who just got lost and decided to stop at a Starbucks and connect to the Wi-Fi. So: questions get repeated, then answered. Is everybody ready? Wow, you're not even awake yet, that coffee hasn't kicked in. Is everybody ready? Thank you. Yeah, you don't have to be excited about me; you should be excited about our first speaker, whose name got removed because somebody wants me to take an update while I am traveling. That's rude.

Thorsten Leemhuis has been following Linux kernel development since the start of the century, it turns out. He's known for his Kernel Log column, which over the years has covered the features of more than 70 new Linux releases in detail, an elaborate and respected article series published by his employer, the German c't magazine, Europe's biggest computer and tech magazine, both online and in print. If you don't know what print is, ask somebody who's got a beard like mine. He used to contribute to the Linux kernel, but has put that aside just so he can cover it and come to conferences like this and talk about open source. Please give a round of applause for Thorsten Leemhuis.

Yeah, welcome everybody. I haven't used such a mic for quite a while, so if I don't hold it right, just let me know. Yeah, but let's go here.
It's the twentieth FOSDEM already, and I think when giving a keynote, that's the biggest chance to really give thanks to the organizers and all the volunteers that make these great conferences happen. So give a round of applause, please. Yeah, thanks for all this work.

Warning: this talk is kind of a history talk, but don't worry, it won't be a history class. I promise, I'll make sure of that, and everything I mention is relevant for today. And there will be a moral of the story at the end. So let's get started and set the stage.

The first FOSDEM actually happened in 2001. Shortly before that, Linux 2.4 had just been released. It basically had all the important features needed back then, kind of everything needed to conquer the world and bring us where we are today: POSIX support, a graphical interface, it was portable to various architectures, and the performance was good. Since then a ton of improvements have been merged into the Linux kernel, and I can only mention quite a few of them, because otherwise this talk would take all day, or even two days maybe.

One of the things that happened: Linux was growing up. 2.4, for example, would likely not run very well on today's computers. Obviously, lots of drivers would be missing, but one thing that would be problematic is the number of CPU cores in modern systems, because back then uniprocessors were quite the norm; if you had an SMP system with two processors, that was unusual. But today we have CPUs with 12 or 16 cores, and even smartphones have four cores, and that might be way too much for the Linux of those days. Linux was actually SMP-capable since June 1996 already, but back then it was kind of brute force how that was made possible.
There was a big kernel lock. That basically meant only one CPU core was allowed to execute kernel code at a time. So if you had two programs running in parallel and both wanted to call into the kernel to make it do something, one had to wait for the other to finish. That was solved with the finer-grained locking that followed in 2.2 and 2.4, over the next two or three years, and that made Linux scale a lot better already. So in the 2.4 days it wasn't that bad anymore, but other Unixes were known to scale a lot better. The Linux developers continued to work on that, and by 2.6 Linux had got thousands of finer-grained locks. But the big kernel lock was still around, even if some people had assumed it would be gone by 2.6 already. I looked it up: 2.6.6 still had about 500 big kernel lock calls back then, and as it turned out, many, many more steps had to be taken to get rid of the big kernel lock.

The network is acting up here. If you're interested in all the details of what had to be done, you can look some of them up on LWN.net, which is a great site that reports about the happenings in the Linux kernel all the time. As you can see, between 2004 and 2011, when the big kernel lock was removed, there were lots of articles that explained where all the problems were. As I mentioned, Linux got rid of this big kernel lock, which was a problem for scalability, in 2011, so basically after about eleven years. That was possible thanks to heroic efforts by various developers; especially we have to thank Arnd Bergmann, who took care of the last few steps, which were really a lot of work without much gain, maybe. It was really hard doing these last things, because they were in old, odd drivers and things like that.

Scalability since then remains something being worked on. If you look at LWN.net again, there are many more articles over the past few years that were about scalability, and many small improvements happened over time. It's kind of a never-ending story: recently there were some memory management optimizations, there was a new scheduler load-balancing core that got merged into the Linux 5.5 kernel, which is the latest right now, and many other things. Most of you that are not kernel developers don't notice any of this, because all those changes happen basically under the radar. But thanks to this, Linux is and stays one of the best-scaling OS kernels these days. Maybe it's even the best; I was a bit careful in writing that, so I went for "one of the best". Maybe I was overly careful there.

That's one of the things that happened in various small steps during these past two decades, and getting rid of the big kernel lock was really one of the first big achievements made with such small steps. That was possible because it was something everybody worked together on; it was kind of a common goal. But more often there is some competition between the kernel developers. One of the things the Linux kernel was still missing in the early FOSDEM days was built-in virtualization capabilities. That's not unusual, because virtualization on x86 Linux was not very popular back then; it only took off during the mid-2000s, with Xen, as some of you might remember, getting popular in 2005, and x86 processors starting to get virtualization capabilities around then. Xen back then looked like the obvious and fitting solution for the Linux world. There was no competition there; everybody assumed that was what everybody would be using. There was just one problem: the support for running Linux as a Xen host or guest was out of tree. A lot of people didn't care that much, and the other problem was that Xen was actually kind of a kernel underneath the Linux kernel.

Then suddenly, out of the blue, in October 2006, KVM showed up, and it made it into the Linux kernel just a few months later, because it was quite small. Compared with Xen it didn't offer that good a feature set: the performance was worse, it had fewer features, and it required a CPU with virtualization capabilities. Sometimes it looked a bit like a toy. But in the end it turned out it was quickly improved, because various people and companies found interest in it and helped, in little steps, making KVM better. And as some of you might know, in the end KVM turned out to be a game changer. Today it's basically used everywhere, and it's one of the cornerstones of what made Linux rule the cloud. Xen is still around; the support for running as host and guest with Linux only arrived about five years later, in the Linux 3.0 days. And these days Xen is quite small when compared to KVM.

So why did KVM succeed? One of the reasons might be that XenSource didn't get the code upstream fast enough. But I think the real reason is that KVM had a better, more flexible, and more future-proof design, and one that allowed Linux to stay in control, because it was built into Linux, not underneath it. That obviously suited Linux and its developers more, so they became interested and helped making it better. That's how it ended, and that's why KVM rules the world these days.

That was the most history-lesson-like part of this talk, but on the other hand, things like that still happen every now and then these days. For example, recently there was DPDK.
That's a technique that makes network packets go straight into userland applications, bypassing the Linux kernel. That obviously didn't suit the Linux kernel developers much, so they started to fight back and worked on a solution: the eXpress Data Path, XDP, and something built upon that called AF_XDP sockets, where userland programs can immediately get those packets, so the kernel stays in control. It seems those two solutions can keep up with DPDK these days. So if I would bet, I'd bet on these two solutions, and I think they will gain territory back from DPDK, because they are better integrated into Linux.

Another example is asynchronous I/O, where a program can do something else while it makes the kernel read or write data from some storage device. That's normal in the Windows world, but unusual in the Linux world, except maybe for big databases. But these days a solution finally showed up: it's called io_uring. It was merged into the kernel a bit more than one year ago, and it's an answer to a similar bypass technique called SPDK. It will actually help to get the performance out of these new SSDs, because they are really quite fast, and to get the performance out of them, such an asynchronous I/O solution is needed, because the syscall overhead is way too much otherwise. The thing is, just as with KVM, both these solutions got imported into the Linux kernel, started quite small, and then got improved in small steps by various people that found them interesting. It shows that things like that happen all the time.

But virtualization was not the only thing, not the only way to make Linux a host for something. There was something else that Linux missed during the early FOSDEM days: support for containers. That's actually quite interesting, because other Unixes back then supported them already; FreeBSD for example did, and Solaris learned it a little bit later. For Linux it only became famous in 2014, and a lot of people sometimes wonder why it took so long. It took so long basically because the kernel lacked the features to build something like that with Linux. Those features had to be built, but that happened one step at a time, and it took years. Some of the features we use today for containers were actually built for containers, like the various namespaces. Some of those features were only partly built with containers in mind, like the cgroups stuff, the control group stuff that makes sure one process can't take all the CPU or memory and other resources; in the beginning that was actually used a lot with virtualization, with KVM. And some of the features we use to build containers these days were built for totally different use cases, like capabilities and seccomp and a few other things. The thing that made Docker popular in the end was that it could combine these and a few other features in a new and more attractive way. That made Linux containers suddenly quite popular and in the end kind of changed the world.

Funny detail there: LXC was designed to become the preferred solution; that one is a bit older. And there were actually two other Linux container solutions: Virtuozzo/OpenVZ, which became quite small thanks to all the things Docker brought, and Linux-VServer, the other solution, which is basically forgotten these days. They came early and used out-of-tree patches, but thanks to the kernel's small steps and these features you can combine in interesting ways, Docker basically overran them and is now the leading thing. LXC is actually still around, but definitely not as big as Docker.

And the funny detail is: just imagine it had been one company that had invested all the money in making the kernel capable of running containers and in building LXC as the userland process on top of it. That might have been a pretty bad return on investment for them, because other companies can use those features you brought as well. That really makes it a little bit risky for companies to invest that much money. But on the other hand, Linux the operating system got a way better and more flexible solution thanks to this, and thanks to all the various small steps that were taken; they are the reason why we have so many small features that Docker could combine in a new and more attractive way.

Docker already was a quite unexpected but welcome surprise, and that's not the only time Linux kernel features showed up that nobody really aimed for. One of those things is actually changing the kernel these days, and Linux basically is on a trip into the unknown right now. What I'm talking about is an improved Berkeley Packet Filter, BPF for short; the old one is these days called cBPF, classic BPF. The improvements began in 2014 and 2015. It's an in-kernel mini-VM, a mini virtual machine, but not like a virtual machine that emulates a different computer, more like a Java VM that you can upload programs to, which the kernel then executes. That's something tcpdump, for example, relied on 20 years ago already, to get only those packets from the kernel into userland, into tcpdump, that the user was actually interested in. That's needed for performance reasons, because copying everything over is simply so much work that it would slow everything way too much down.

This improved cBPF got called eBPF; you might have heard of this thing. Some people even call it just BPF for short these days. I'm kind of angry with the developers there, because now we have an old BPF that's called BPF and a new BPF, and nobody knows which one somebody's talking about. That's really sometimes annoying, but most of the time, if you read BPF these days, it's the enhanced one that's meant. It became a really fast and much more powerful VM to run programs in kernel mode. If you had suggested something like that to Linus Torvalds 20 years ago, I guess the idea would immediately have been shut down, because it's kind of crazy. But these days it worked, and one of the reasons why it worked is that it got merged as a small thing and then improved and improved again; that way everybody could be sure it's not dangerous or something. The network developers built this eBPF to scratch some itches they had, and improved it and improved it again.
This XDP stuff I mentioned earlier is actually built upon this and really relies on it. But these days other kernel subsystems have started to use it as well, or will soon, and more and more seem to be interested. On LWN.net it has become kind of a running gag: in the articles there's often the case where some solution is sought for a complicated problem, and then BPF is often suggested. It won't be used every time, but it seems it's getting into the kernel in various areas. And eBPF is still not done. I mean, it got developed over the past few years, but it's improved more and more and starts to change the kernel fundamentally, and makes Linux gain more aspects of a microkernel. Some of you might remember this big debate about whether microkernels are a better design than the model Linux uses, which is more like a monolithic kernel. But as I said, Linux gains more aspects of a microkernel. That's actually what Europe's biggest computer magazine wrote, the German c't magazine; it looks like that. And if some of you now see people giggle next to you, that's because, yeah, I wrote that. Maybe I should try that on Wikipedia: write something somewhere, then change it and give myself as a source.

But I'm not only popular for the kernel reporting, which I've been doing for 15 years now; I also did a few things in my spare time for kernel regression tracking, and for that I was actually invited to the Kernel Summit, so I know what I'm talking about. And I'm not the only one that wrote this microkernel comparison; LWN.net actually mentioned it as well. And there's another running joke. Yeah, you laugh, but this is from one of the core developers that made a lot of the things happen that we all use every day. We laugh these days, but maybe it will happen; maybe we'll stand here in ten years and say: here, that's how it all began. There's also another improvement coming where the microkernel aspect was mentioned. It's really fascinating to watch, and it remains to be seen what comes out of it. Maybe we're at the beginning, or already in the middle, of a small revolution that makes Linux more error-resistant, more flexible, and more powerful; it remains to be seen. The thing is, most people don't notice, because it's happening in a lot of small steps and without disrupting old features. So you don't have to care if you're not interested in that.

Long-standing wishes, that's a different topic. Another area where Linux was behind in the early FOSDEM days was a proper tracing solution, to look into the system or a specific program, to see why the system is slow or why the program is slow. The famous solution in this area is DTrace, published in 2005 and built into Solaris. People for years wanted something similar for Linux, and they got something recently: it's called the BPF Compiler Collection, BCC for short, and bpftrace. Those two are actually called DTrace 2.0 these days by Brendan Gregg, who is one of the leading experts on DTrace, at least according to Wikipedia. I know, I didn't write that there.

And the 2.0 is appropriate, because bpftrace and BCC can actually do more than the old DTrace. It's pretty cool. If you want to know more about this, look at Brendan's website; I mention it here, and he also published a book recently. It's quite cool and gives a lot of details on what you can do with modern kernels and tracing now. It took basically 10 or 15 years to get everything into the kernel, and the funny thing is, that happened without a design that had something like BCC or bpftrace in mind. It's much thanks to evolution, because the kernel developers built various building blocks over the past 10 or 15 years, sometimes with smaller or different goals: tools like ftrace, tracepoints, perf, kprobes, and all those things. Those were one part of the solution; the other part is eBPF, there it is again. Then somebody combined those tools and made new things possible, then BCC and bpftrace came out of it, and ta-da: that's how we have DTrace 2.0 these days. Really great how things work out, without the up-front design you would normally have when you're building an operating system.

There's also something long thought impossible that Linux will soon offer, a really great and important feature, one almost nobody would have expected during the early FOSDEM days: it will be real-time capable. So you will be able to use Linux for your laser cutter, or for your robot, or your industry line that manufactures cars or whatever, because Linux can make sure that the program controlling your laser cutter is always called in time to react to certain events, and that's really important for this use case.

Back during the early FOSDEM days that was nothing people talked about yet, but it was an idea in some people's minds already, especially in the mind of Thomas Gleixner, who's one of the leading, or the leading, developer of the real-time kernel patches. He recently, last fall at the Linux Plumbers Conference, gave a great talk about this; the URL is here. You don't have to write this down; I will upload the slides to the website. I tried to do that right before the talk, but the network was overloaded, so you'll have to wait for that, sorry. But it's a great talk where he looks back and mentions a few nightmares all this RT development gave him. In one part of the talk he mentioned a few quotes, like "real-time people are crazy" and "this is never going to get merged into the Linux kernel" and things like that. I know, I spoiled the talk a little bit. These quotes are actually all from Linus Torvalds himself; most of them are from 2004 or 2005, from a great debate about whether making Linux real-time capable actually is a good idea.

But the developers didn't give up. They started working on it, keeping this external patch set, and got small patches, step by step, into the mainline kernel. That actually made Linux better for all of us, even those who don't use real-time systems or don't need them: the real-time patches hit quite a lot of problems and scalability issues first, and the developers fixed them, and that in the end made Linux better for all of us. The thing is, the RT developers took a lot of body blows. One of the worst was, basically five or six years ago, they had about 90 percent of the work done that was needed to make Linux real-time capable, but they basically needed money for the last mile, or the last five years, as it turned out. That was needed because lots of companies that used the RT patches back then didn't help much with development. But luckily, the RT people were successful.
They went to the Linux Foundation and talked to them, and they actually founded a project with a few companies in 2015. Thanks to this project, the main work will soon be finished, because the most important patch for the real-time capabilities is in the Linux kernel already: the config option to enable the real-time support. It's not exposed yet, because a few things are still missing; the biggest one is a rework of the printk stuff, which is what does your logging, which you can see with dmesg. But that's in the works. There were some disagreements, but they were settled quite recently, and the new patch to make everything happen was sent to the Linux kernel mailing list for review recently. So it's likely this will get into the kernel this year and then make Linux real-time capable.

Describing all the steps the real-time developers had to take would also take a day or two, maybe, because if you go to LWN.net and look at what problems were discussed over those years, you have many articles you can read. But it shows: even crazy goals that look unreachable can be achieved in small steps. And the thing is, that's how most Linux kernel features evolve. They are often not designed by some company or in some meeting; often it's simply individuals that want to realize an idea or a dream and make Linux do something. Sometimes they even, in a way, use companies to realize their ideas; that happens. They simply look for companies that might be willing to pay them, and some of the developers sometimes have to find other places to find money. But it shows: with a good idea and commitment, even big and crazy dreams can be realized. The real-time patches are a really good example of that.

I mentioned a few times that the Linux kernel world works a little bit differently. Nevertheless, it learned a lot of things since the early FOSDEM days, even if it took quite long to get those features realized. But that's just how the Linux world is, because you simply can't hire 50 developers and make them work for two or three years to build one specific feature, like Sun for example could do with Zones, DTrace, or ZFS. Because if you do that, there's a real risk that after two or three years you go to the kernel developers and they say: no, we don't want that, that's way too big, and you're doing this and this all wrong. They want to see small, incremental steps, because that works quite well for the Linux kernel. That actually means more work for companies that want to realize something. The kernel developers' way really often leads to the best solution on the market, but it has disadvantages too.

So now I'm going to check who's awake. Hands up, and leave them up if you're awake. Yeah. So leave them up if you agree with the statement standing here: is ZFS actually the most sophisticated file system in the Unix world?
Yeah, quite a few hands went down. Not really 50 percent, but close. So some people think it is. And ZFS, or file systems in general, is actually one of those areas where a lot of people say Linux is still not the best kernel, because ZFS is better. The funny thing is, work on a ZFS for Linux was actually started in 2008; it's called Btrfs, and I guess most of you have heard about it. But as most of you will likely know, it hasn't reached that goal yet, and it doesn't look like it will any time soon. If you go to Wikipedia, you actually see a few features that are marked unstable, and a few features that were announced, or that ZFS already had, that are not even implemented yet.

So the big question is: what took them so long? One thing for sure: it was overhyped. Just like all the other features, Btrfs was merged into the kernel when it was still quite small, and then improved in various little steps. That takes a lot of time, as I showed with the other examples I gave earlier. It also shows that how quickly things improve mainly depends on how complex the problem is you're trying to solve, and how many individuals or companies help with development. It turned out the problem scope here is really, really complex, and a lot of companies didn't care too much. Some companies actually helped, like Oracle, SUSE, Facebook, and a few others, but some don't care much and didn't help. No complaint; that's how things sometimes are in the Linux world.

So the big question is: will Linux get something to compete with ZFS? I'm pretty sure sooner or later it will; the examples I gave really showed that. It might just take ten more years, maybe just five, but maybe fifteen, who knows; we will see. Just to mention a few recent events: there is this file system bcachefs that a lot of people have high expectations for. I'd say wait and see, and keep your expectations under control, to not create another hype, because the history of Btrfs shows it's a hard problem that takes a lot of effort. bcachefs right now is basically a one-man show and not even submitted for mainline inclusion yet, so it's unlikely to fly soon. Even if it got merged, companies would need to do a lot of testing, and testing in the field, before it really becomes stable. So that will take a while, if things really develop in this direction. Or maybe in the end it turns out that Btrfs gets improved and becomes the ZFS for Linux, just as planned; nobody knows.

I talked a lot about features already, but let's switch gears a little bit and talk about Linux kernel development, how the kernel itself is developed. During the early FOSDEM days, Linux kernel development looked really odd to outsiders. There was no central development forge like SourceForge, GitLab, or GitHub. Development was totally driven by email; there were dozens of mailing lists, no tracker for patch submissions, and no central issue tracker, neither for developers nor for users. There were long, unstable development phases: new features got built and integrated into an unstable series, sat there long, and reached users only after two or three years, and that made a lot of people unhappy. There was no predictable release cadence, no driver database where you could look up if your hardware was supported and how well, and we had an overworked lead developer. One of the reasons why he was kind of overworked: we didn't even have a version control system back then. For the younger ones in this room: yes, 20 years ago we had version control systems already; most projects actually used them. For the older ones: yes, those were CVS and SVN, and maybe those were the middle ages, I don't know. But git really made things a lot better.

But back to kernel development. There were way more odd aspects of kernel development, and the approach actually improved quite somewhat since then. We have git since 2005, and that really changed the world for the better. Thanks, Linus; that's a second project that changed the world. And now I have some strange pop-up here blocking my slide. We also got a predictable release cycle: since 2005 we basically get new releases of the Linux kernel every nine or ten weeks, so it doesn't take that long to get a feature out to the users. This approach, where every new version brings new features, was actually called crazy by a lot of people when the Linux kernel switched to it, but it turned out very well, and browsers later picked it up: with Firefox and Chrome we are all used to this model that the Linux kernel basically built the path to. We also got stable and long-term kernels that are supported for longer; a lot of those long-term kernels are supported for six years these days.

But to be honest, many of the odd things I mentioned a few minutes ago are still around, and some even got worse. These days we don't have dozens of mailing lists, we have hundreds of mailing lists, and development is actually still driven by email.
There is a Bugzilla where you can report issues, but the thing is, lots of developers don't look there, because in most cases it's not the official way to report bugs. Just a hint for those that want to report bugs to the kernel developers: the proper place, most of the time, is a mailing list. Security also became much more important since then, but still we have no automated code checking in a central place; most of the subsystem maintainers use something, and some of the developers do as well, but nothing central. A lot of room for improvement here, and on some of those things someone is working already. There's always the idea: why not switch to a central forge like GitHub or GitLab for development, because you would get a lot of things for free then. But no, that won't happen any time soon, because just like with features, the developers demand that things are improved in small steps here too, because that's what works well for them. But that really needs someone who's motivated enough to do it without an immediate return on investment, and that makes it sometimes a little bit hard. That's why some of those things are still kind of archaic in Linux kernel development, and it becomes more and more of a problem. There was also recently, at the Plumbers Conference in fall, a talk about the problems a lot of developers have to deal with these days. Thanks to that, a workgroup was started and has already got to work, so a few improvements are coming. There's now a Gerrit instance which developers can use to submit patches to the Linux kernel. It remains to be seen how many of the kernel developers will start to use this, and if they like it. But as you can see, development itself is improved in small steps, just like the features, and that will take some time. Sometimes people ask: why can't the Linux Foundation help more?
Maybe it should help a little bit more, but I'm not sure how much, because the Linux development model really works well, and I don't think it would be a good idea to basically organize it the way OpenStack or Kubernetes are developed, with lots of committees and hierarchy and things like that. That might not be the best thing for the Linux kernel.

Nevertheless, Linux kernel development meanwhile runs at the usual pace: we have been getting new kernel versions every nine or ten weeks for many, many years now. If you're interested in a few numbers: each of those versions brings about 13,500 commits, sometimes a thousand or two thousand more or less, and all of those bring a few hundred thousand new lines every version, so the kernel grows by about 1.5 million lines per year.

And that actually happens about 15 years after Andrew Morton, who was back then number two in the hierarchy, wrote that "the actual patch volume has dropped off". Famous last words. As I said, that was 15 years ago, and it didn't drop off: back then we had something like six or eight thousand commits every kernel release, and these days we often have 13,500. The latter number is actually stable these days, so the patch volume is quite constant now.

But that was not the only thing Andrew Morton wrote back then. He also said we have to finish this thing one day. Yeah, I don't think that will ever happen.
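[Editor's note: as a rough sanity check on the figures quoted above, this back-of-the-envelope sketch turns the per-release numbers into yearly rates. The inputs are the approximate numbers from the talk, not exact kernel statistics.]

```python
# Back-of-the-envelope arithmetic for the kernel development figures
# quoted in the talk. Inputs are rough numbers, not exact statistics.

WEEKS_PER_CYCLE = 9.5          # a new kernel roughly every nine or ten weeks
COMMITS_PER_RELEASE = 13_500   # typical commits merged per release

releases_per_year = 52 / WEEKS_PER_CYCLE
commits_per_year = releases_per_year * COMMITS_PER_RELEASE

print(f"releases per year: {releases_per_year:.1f}")
print(f"commits per year:  {commits_per_year:,.0f}")
```

So the cadence works out to roughly five to six releases and somewhere above 70,000 commits per year, which is consistent with the "patch volume is quite constant" observation.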
I guess Linux might be forgotten, maybe, in 100 years, because all developers might work on something else. Or we might still use it in 100 years. Who knows, remains to be seen.

So now I'm slowly coming to the end, summing things up. The Linux developers really solve big problems in small steps. The Big Kernel Lock showed that small steps really lead to a better and more flexible solution, like KVM did. Sometimes these small steps actually make ground-breaking new technologies possible, like Docker. The small building blocks that are built in those small steps can sometimes even help fulfil old wishes, like they did for the wish for a tracing solution like DTrace. And this process can actually lead to quite unexpected, disruptive results, like this eBPF thing I mentioned. Really, keep an eye on it; it's going to be fun what's coming out of that.

The thing is, that's what made and makes Linux so great. Those were just the big examples from the past two decades, the big features that were developed like that; basically all the other stuff was quite similar. But it shows what makes Linux so great, and it shows that you can reach big goals with small steps, even if that takes time.
Yeah, and if something takes time, it obviously also needs money, because we all have to eat at the end of the day. So to realize these things often needs someone that's really committed, ideally an individual that wants to realize a dream, that has a dream about a new feature, and whose drive makes everything happen to realize it in the kernel. Because that's how all those features I mentioned got developed. And the real-time stuff I mentioned really was a big and crazy dream, but even that got realized this way.

Nevertheless, in some areas we are still not there yet. To improve things, you basically have to become an individual that is committed, and find money, to get this dream realized. And then maybe Linux will get a file system that's as good as or even better than ZFS, and we might get developer tools and schemes that are even better than what we have these days, or other things that will have a positive impact on the world, or the world of free and open source software. Because Linux itself, like just about everything else I mentioned, was once just a dream in somebody's head that somebody realized; in this case Linus.

That's it. And if you're wondering: slide 234. I have so many things to say, and so many things in my head; that's why I'm using the slides to keep on track. So, are there any questions?

We now have to switch mics. All right. Okay, if you're planning to leave, you know this makes questions a little bit difficult. We have about five minutes left for questions, if someone has a question. If folks leaving the room could leave the room as quietly as possible, so we can hear the intelligent discourse from the questions in the audience.

Thank you for a little talk. Is anyone improving the kernel to make use of quantum computers and the quantum chips that are coming up?

Can somebody repeat that? On quantum computers? Yes.
Thank you. I have no idea. I don't think anybody is working on making Linux run on quantum computers. Maybe we need new operating systems for them. Remains to be seen; maybe somewhere in some lab somebody ported Linux there, but I have no idea.

By the way, if you're exiting the room, could you please do it from the middle or the back? We're still trying to actually answer questions at the front of the room at the moment.

Thank you for the talk. I wanted to know: do you think Linux has gotten faster or less fast during those 20 years?

I'm sorry, I don't understand you; it's too loud. Could you maybe come here? Oh yeah, thanks, that's better now.

Do you think that the Linux kernel has gotten faster during the years? Because, you know, as software grows, it sometimes gets slower. There were many performance announcements. So do you think it has gotten slower or faster on the same hardware?

You mean whether it could be faster, or what?

Like, during all the time that passed, everything you talked about: do you think that the kernel has gotten faster?

Sorry, I still can't understand it.

Okay, we're gonna try this again. Does anybody have a question that hopefully isn't... all right. Well, did you ask if the Linux kernel was growing fat, or what? The question is whether Linux gets faster with these updates, with ever faster release updates.

Whether Linux gets faster with all the improvements? Yeah, Linux basically gets faster with every new version, always in little steps. But is it too slow in some areas, or where do you think it's too slow?

I think the point was, with the increased code base it might get slower over time.

Can you repeat that?

With the increased code base, over time it might get slower, because you have too many lines.

You mean it's getting slower because it's so big? Not necessarily right now, but it could be in the future?
Yeah, the Linux kernel is quite modular, so you can build into it whatever you want. But sure, some of those features have some overhead that might make Linux slower, and that overhead would likely become a problem if you used a modern Linux on a quite old system. On the other hand, you can configure Linux quite modularly; I mean, it's used on a lot of embedded systems, so it shows it can be made quite small and at the same time also work on quite big servers with hundreds of CPUs.

This is our last question. Last question, okay.

My name is Tolek; sorry, I speak only a little bit of English. I would like to ask you: if I understand you right, you said that each year we get 1.5 million lines more. Just your opinion: what happens in ten years? And who tested it all?

I can't hear it; the speakers are too far away from here.

Could you explain to me how it's possible to test it at all? I think that Intel doesn't care about a commit from IBM or Facebook or another company. Is it tested at all?

No, I guess nobody tests it all, because you can build it in lots of different ways. That's why we have a lot of issues. A lot of people try testing the Linux kernel quite thoroughly, but you can't test it all, and that's also something that really needs to get better, which the workflow group, I think, also has on its roadmap: to make sure that kernel testing gets better.

Yeah, okay, I think we are ending this. If you want to give any feedback, just talk to me, even if you didn't like the talk. And if you want to follow me, you can follow me here if you want. Thanks again, and have a great FOSDEM.