First, a quick catch-up on 4.5's development: the main development phase, the merge window, ended 11 days ago, so it's already possible to see which features the next kernel version brings, and you can predict quite well when it will come out. Linux 4.5 will likely be released on March 14th.

So what's new in these kernels? Linux 4.4 brought enhancements for a driver called virtio-gpu, and those enhancements make something called Virgil 3D possible. Basically, you can use GPU acceleration, so 3D acceleration, in a VM. But before you get your hopes up: this doesn't mean you can use Windows with 3D in your VM now, it's just for Linux guests. So if you're running GNOME 3 or another modern compositing desktop in your VMs, you will in the future be able to get GPU acceleration in them, and the graphical interface might be a bit quicker thanks to that. Actually, the kernel is not all that's needed for this: you also need a new QEMU version and the latest Mesa, and there's still a lot of fine-tuning needed. I guess you might be able to use the feature in the next Fedora release, but a few other things are still coming to make it easier to use, so it might take a little longer until it's simply usable without fiddling with your system.

What else? 4.4 brought a new graphics driver called VC4, which is a graphics driver for the various Raspberry Pis. You might ask: why do I need a new graphics driver for that, there are already drivers? The thing with this driver is that it does most things on its own. The drivers used on the Raspberry Pi in the past, or currently, basically hand a lot of work over to a driver in the firmware, and that makes many things harder and sometimes inflexible; the new driver is like a real Linux driver. It doesn't support 3D acceleration yet, but we'll get back to that later.
For the sysadmins among you: Linux 4.4 brought journal (log) support for MD RAID 4, 5 and 6. That's something you can use to prevent RAID corruption when the system crashes while writing a stripe. It's a bit similar to what file systems do when they write a journal: you first write out what you're planning to write, then you write it for real, and if the system crashes in between, you can redo the transaction from the log or fall back to the old state. You need a separate drive for that, so even more drives for your big RAID setup, but if you care about data integrity it's important, because it closes the write hole, where the RAID can get corrupted if you don't do this.

What else? There were some improvements to the TCP stack and its locking. One of the developers wrote that two years ago it was possible to handle about 20,000 SYNs per second, so the part of the handshake you do when two systems connect. Thanks to the improvements of the past few years, including what went into Linux 4.4, and with the right software and every possible optimization, it's now possible to handle 6 million SYNs per second. Quite an achievement over the past few years, and it doesn't only make things faster: it also makes distributed denial-of-service attacks harder, because you need far more systems to bring a given server down.

Those were some of the most important features of Linux 4.4. If you want more details, you'll find them on the net: there's a good overview at LWN, and the German coverage on heise.de for those who speak German.

So what's coming in 4.5? The Raspberry Pi driver shows up again: it's getting 3D support in 4.5, so everything needed for 3D on the kernel side. The actual 3D driver is not part of the kernel, that's always part of Mesa, and that driver is already in the latest Mesa release.
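Before moving on, the journaling idea behind the new MD RAID log can be sketched in a few lines. This is a toy model of write-ahead logging in general, not the actual MD on-disk format: record the intent on the (separate) journal device first, do the real write second, and replay the journal after a crash.

```python
class JournaledStore:
    """Toy write-ahead journal: the intent is logged before the real
    write, so a crash in between can be replayed on restart.
    (Illustrates the idea behind the MD RAID write journal, not its
    actual format.)"""

    def __init__(self):
        self.journal = []   # stands in for the dedicated journal device
        self.data = {}      # stands in for the RAID stripes

    def write(self, key, value, crash_after_journal=False):
        self.journal.append((key, value))   # 1. log the intent
        if crash_after_journal:
            return                          # simulate a crash mid-write
        self.data[key] = value              # 2. do the real write
        self.journal.pop()                  # 3. retire the journal entry

    def recover(self):
        # Replay anything still sitting in the journal after a crash.
        while self.journal:
            key, value = self.journal.pop(0)
            self.data[key] = value

store = JournaledStore()
store.write("stripe-1", "AAAA")
store.write("stripe-2", "BBBB", crash_after_journal=True)  # "crash" here
store.recover()
print(store.data)  # → {'stripe-1': 'AAAA', 'stripe-2': 'BBBB'}
```

Without the recovery step, the second write would simply be lost, which is the write-hole situation the MD journal is there to prevent.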
So with the VC4 driver and Mesa you basically now have a real open source graphics driver stack for the Raspberry Pi. It still needs some fine-tuning, so if you just want things to work, you might be better off simply using the driver stack most Raspberry Pis use these days. But in the future, this driver is likely to make a few things better and more flexible.

For the gamers out here, PowerPlay support is coming for the driver called amdgpu, which supports a lot of recent and current AMD Radeon cards. This PowerPlay support makes re-clocking and power management possible on those cards. Here are a few Radeon cards that benefit from it: in the past they simply ran at the standard base frequency and couldn't switch to the fastest or most power-efficient settings, because there was no re-clocking. Thanks to PowerPlay support, the card can now switch to the fastest mode for more 3D performance, or to lower modes so the power consumption isn't as bad anymore. So: improved graphics performance, but it's disabled for now. If you want to use it, you have to enable it in your kernel config; as far as I know it's enabled in the Fedora/Red Hat config, and in my own kernels too. But that's not enough: you also need to boot the kernel with a special parameter, the one mentioned here. That will change over time; it will get enabled automatically so you don't have to care anymore, it will simply work. That's the plan, anyway; maybe that even happens for 4.5, that remains to be seen.

Another new feature in 4.5 is something coming from Android: a socket destroy function that makes it possible for the admin, or network-managing software like NetworkManager, to terminate open TCP connections.
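While we're on TCP connection teardown: the new destroy operation is driven from outside the application (over netlink, later exposed by tools such as `ss -K`), but an application can already shorten how long a dead connection lingers by itself, via the `TCP_USER_TIMEOUT` socket option. A hedged sketch, Linux-only (the constant's raw value 18 is used as a fallback for older Python versions):

```python
import socket

# TCP_USER_TIMEOUT (milliseconds) bounds how long the kernel keeps
# retransmitting on an unresponsive connection before reporting an
# error to the application -- 10 s here instead of the usual minutes,
# so e.g. a streaming app notices a vanished network much sooner.
# The constant is socket.TCP_USER_TIMEOUT on Python 3.6+; its value
# in linux/tcp.h is 18.
TCP_USER_TIMEOUT = getattr(socket, "TCP_USER_TIMEOUT", 18)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, TCP_USER_TIMEOUT, 10000)
timeout = s.getsockopt(socket.IPPROTO_TCP, TCP_USER_TIMEOUT)
print(timeout)  # → 10000 on Linux
s.close()
```

This only helps the application that sets it, of course; the point of the new socket destroy feature is that a network manager can tear connections down for every application at once.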
The idea behind this socket destroy feature: if you leave your house with your notebook or smartphone, get out of reach of your wireless LAN and switch to the mobile data network, streaming normally stops working, because the TCP timeout is typically one or two minutes. By the time the application displaying your stream notices that the TCP connection it was using until you left the house isn't available anymore, the buffer is already empty and you get a black screen. Now the network manager can tell all applications: hey, this connection is closed. They can re-establish a new connection, and together with a big enough buffer there's no interruption in your stream.

A bit low-level, but maybe some of you are interested: there's a new cgroup interface called cgroup version 2, which is basically a revamped control groups feature. The old cgroups feature, which is used to limit how much CPU time, network bandwidth or storage I/O something may use — resource control — had a lot of problems. You don't notice them if you're just using it via KVM or systemd, but if you're programming against it there are a lot of places where things are not that good, and that's fixed by the cgroup v2 interface. systemd will support it, so most of you will likely not notice anything; things will simply happen in the background, but a lot of things should get better at that level.

For the embedded developers around here, but maybe interesting for others too: the ARM multi-platform support gets mostly finished with 4.5. That's a feature to allow what's normal in the x86 world: you compile one kernel image and boot it on a lot of different systems. That didn't use to be possible in the ARM world, and thanks to multi-platform support it's possible these days, so you can compile your kernel for a Raspberry Pi or a Banana Pi and a few other platforms and really use that kernel everywhere.
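Going back to the cgroup v2 interface for a moment: a consumer like systemd basically just creates directories and writes to control files under a single unified hierarchy. This is a dry-run sketch of those steps (actually applying them needs root and a mounted v2 hierarchy; the file names `memory.max` and `cgroup.procs` are from the cgroup v2 interface):

```python
import os

CGROOT = "/sys/fs/cgroup"   # the unified (v2) hierarchy mount point

def cgroup_v2_writes(name, mem_max_bytes, pid):
    """Return the filesystem operations that create a v2 cgroup with a
    memory limit and move a process into it. In v2 there is one single
    hierarchy and controllers are per-file knobs (memory.max etc.),
    instead of the separate per-controller hierarchies of v1."""
    cg = os.path.join(CGROOT, name)
    return [
        ("mkdir", cg),                                            # create the group
        ("write", os.path.join(cg, "memory.max"), str(mem_max_bytes)),
        ("write", os.path.join(cg, "cgroup.procs"), str(pid)),    # move the task in
    ]

for step in cgroup_v2_writes("demo", 512 * 1024 * 1024, 1234):
    print(step)
```

The single hierarchy is the key difference: in v1 a process could sit in conflicting positions in several controller hierarchies at once, which is one of the programming-side problems v2 removes.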
The multi-platform support is not perfect yet — there are still platforms where it doesn't work — but this effort, which took about five years, is now mostly finished. All the important platforms were converted some time ago already, and now with 4.5 even the minor and older platforms get multi-platform support enabled.

That was part one, and now I need to get some oil for my voice. I have three parts here, so maybe one question between each part.

[Question from the audience: does the new driver improve performance?] Actually, the driver is so new that I guess for now it doesn't improve performance, but in the end, that's the plan. Remember, when we get to the end of part two there will be another chance for a question.

So, next up, part two: the important changes of the past 12 to 18 months. One of them is eBPF — a four-letter acronym instead of a three-letter one this time: the extended Berkeley Packet Filter. You might wonder what that is. What you might know is the Berkeley Packet Filter: that's what tcpdump uses when you capture frames from your network. It creates a small program that tells the kernel, hey, I'm only interested in this type of network communication, and the kernel then only hands those packets back to tcpdump. That avoids quite a lot of overhead and makes some kinds of analysis possible at all, because otherwise there would be so much data it would slow everything down. Over the past two or three years, BPF was extended into a simple, flexible VM. VM in this case means an abstract computing machine, like the Java VM, not a virtual machine like the ones KVM, Xen or VMware set up. You can use this abstract machine to make some things more flexible and improve their performance, and it allows a few new things: network traffic control with tc, for example, improved thanks to it, and performance monitoring already uses it for some filtering, because during performance monitoring there are so many events happening on the CPU that it's important to let the kernel filter what you're interested in and only hand that back to user space; otherwise there would be so much data it would really slow everything down. In the end, that might finally make some DTrace-like features available on Linux, like easy performance tracing and monitoring for your apps.

So what else changed? There's the UEFI ESRT feature: your distribution can put updated firmware somewhere, save it in the firmware itself, and during the next boot it's automatically applied. That makes firmware updates as simple as updating any other software. It's already supported in Fedora 23, but the feature also needs to be implemented in the hardware, and the first systems that support it are only getting out now. It remains to be seen whether this is what all of us will be using in two or three years to update the firmware in our notebooks, servers or wherever.

Another feature that emerged over the past few months is userfaultfd. It's used by QEMU to live-migrate a machine to a different host without copying all the memory over immediately: once the machine has been transferred to the new host, it may need some memory that hasn't been transferred yet, and it can simply ask the old host, hey, I need this memory now, can you give it to me? That sounds a bit strange, but it makes live migration possible in situations where the memory changes a lot. Previously you couldn't do live migration there without a long interruption, because the memory on the old machine changed so quickly that you couldn't transfer it all to the new host. There was actually a talk about that at FOSDEM last Sunday; I guess the video recording is online on YouTube by now, but I haven't checked.

What else changed over the past few years? The radeon driver improved quite a lot. Like the amdgpu driver I mentioned earlier, it's the driver for lots of AMD Radeon cards, but for a few older generations: not only the latest generations but also those that were sold, say, two years ago — and some up-to-date, current models are supported by this driver too. The 3D performance of this open source driver is still not up to par with the proprietary AMD driver, but it's getting closer and closer, and feature-wise it really is catching up: video acceleration for decoding and encoding works, power management got a lot better, and audio over HDMI or DisplayPort improved. It's not perfect, not all of that works flawlessly, but the open source driver for Radeon cards is really getting to the point where it's the driver you want, because the proprietary driver isn't better, and in some respects it's even worse already. And that's not only thanks to the kernel; it's also thanks to the user-land drivers in Mesa and LLVM.

Those were features that happened over the past few months. Now we're zooming out a bit, getting to a meta level. New kernel versions these days really come every nine weeks; you can nearly bet on that. Sometimes it's one week less, sometimes one or two weeks more, but thanks to this development pace you can quickly see when the next kernel comes and know when features will arrive once they're merged.

Another meta thing: a new longterm kernel will now be chosen every January. As I said earlier, the 4.4 kernel released this January is a longterm kernel that will be supported for two years, and the kernel that's current next January will then be chosen as a longterm kernel. That makes it easier for those of you who run your own kernels to prepare, because you know the kernel released in January will be one that gets two years of support.

Another thing: it seems like one or two security problems show up in the kernel every week these days, because a lot of people are looking for security problems there. Among them, a few times per year, there are really important ones where you should update your kernel within a few days, or better as quickly as possible, if you really want to be secure. So better be prepared: security bugs in the kernel will happen, you'll have to act now and then, and you should keep an eye on it.

What also changed: more and more tools are being used to find bugs, or code where bugs could happen. For example, fuzzing tools have been in use for three or four years now to find code where security or other problems might show up, and these days there's a new tool in development, syzkaller, that also does fuzzing. It's still being developed, but it already finds quite a few bugs; I think there were about 30 changes in one of the latest stable kernels that fixed bugs syzkaller found. There are also continuous-integration tools running now. There's the kbuild test robot: if you send a patch to certain mailing lists, the robot grabs the patch and compiles it, and if there are problems it sends a mail to the list: hey, your patch doesn't compile. So you get blamed for it in the open — better prepare your patches and check that everything works. And the ARM people have set up kernelci.org, where kernels are constantly built, booted and tested in QEMU and on hardware to see if everything works.

In the news there were a few voices saying there are development problems: aging developers, lack of contributors, review stalls, and especially the tone of discussion between kernel developers. Some of you might have heard of Sarah Sharp or Matthew Garrett, who complained about how Linus and some of the others sometimes talk to each other and said that has to improve to get new kernel developers in.

That's actually a quite complex topic, which is why I don't want to go into it any closer here. The short version: a lot of things are way better than five or ten years ago. I noticed while writing this that I've now been watching kernel development for ten years, and ten years ago it was really much worse, so a lot of the bad parts have already improved. Sure, some things could be better, but that's always the case, and maybe it's good enough already — it depends. I for one think nothing too far out of the ordinary happened in the past year. Linus once said something really awful, but he didn't say it to a developer who was new on the list; he said it to a developer he'd known for years. So it's a bit like among friends: the tone on the kernel lists is sometimes a bit rough.

That was part two; I need to oil my voice again.

[Question from the audience, about the kernel supporting eBPF features that the compiler side did not generate code for yet — so it was kind of easy to get fixes into the kernel and complicated to get fixes into the other parts.] I've heard about that, but I actually can't answer it for sure. I think they want to make sure it doesn't diverge too much, but it's also not quite a stable ABI; sometimes you need to update both sides to one version. There's code to compile eBPF programs in LLVM these days, and if you're using it with perf or something it's quite easy to compile — that's what I know. If you want to know more, there's Daniel Borkmann, one of the main developers behind it. He gave a presentation last week at FOSDEM about eBPF and its features, mostly from the networking side; that might have some answers, and you can email him.

So, next up: things in the works — what's coming in the next few months. kdbus isn't coming; I guess most of you have heard that it was abandoned. [Question: what was kdbus? I haven't heard about it.] It was a kernel-side replacement for the D-Bus daemon, the thing that passes the D-Bus messages around on your system.
The developers said they don't want to go in that direction anymore and are mostly starting from scratch: they're now working on a message bus that is more universal, so it can be used for D-Bus transfers but also for other things. It's still under development; it remains to be seen when we'll see it for the first time.

Btrfs is also something I guess most of you have heard about. It's a file system that was considered, back in 2008 or 2009, the next-generation Linux default file system. It was said back then that it would need some time to develop, but I guess nobody expected that in 2016 it still wouldn't really be finished. These days you don't hear so much about it, and I think that's a good thing, because the developers are stabilizing Btrfs. It's actually used at Facebook already, because the main Btrfs developer works for Facebook these days; they found a few bugs there and fixed them, and also worked on features that weren't completely finished yet. So stabilizing is what's being done now.

A question that always comes up: is it ready yet? I think that's basically like asking: is the water safe to go in? If you think about it, the answer always depends on your abilities and the local conditions: if you go into the sea and there are rough winds and tides, you might get dragged under even if you're a really good swimmer; on the other hand, if you're a really bad swimmer, you can drown even in your local pond. As I said, Facebook uses it; SUSE uses it by default on SUSE Linux Enterprise Server, and openSUSE uses it too. I use it myself — but right before traveling here I saw that Btrfs had filled my disk, and I needed to run a kind of defragmentation to get free space again and make Btrfs perform better. That's something you need to learn, so Btrfs is not so ready that you can simply use it without thought. I'd say: if you want the features Btrfs offers, then learning it and dealing with the problems that are still there might be worth it, but that's something you have to decide for yourself.

Kernel live patching is something that was announced about two years ago at this conference from the Red Hat side, with its own patches back then; SUSE also had something of its own to do live patches — so, fixing security bugs in your running Linux kernel without rebooting. Red Hat and SUSE merged their approaches, and that's now in the kernel; it's called, as I said, kernel live patching, KLP. The basic functionality was merged about a year ago, I think, and it allows fixing about 90% of the bugs that show up in the kernel; the Red Hat and SUSE solutions were able to fix about 95%. That's something the KLP developers are working on, but there are some roadblocks that need to be solved first. If you're interested in the details, google for compile-time stack validation; that's something the developers need in order to see where the currently running code is in use and how to fix all the different places where it's running.

Also in the works: even more improvements to network performance; there are so many things coming there. If you're interested, there's a talk about that on Sunday at 11:30, on kernel network stack changes at 100 gigabit speeds, because at those speeds interrupts and memory allocations create so much overhead that the kernel needs to change to make that possible.

Richacls is an improved mechanism for access control lists; it makes it possible to set up more consistent and more flexible permissions, especially for NFS servers, and especially if you're running NFSv3 and NFSv4 at the same time.

I'm running out of time — that's because I'm getting quicker and quicker — so HSA for AMD I'm skipping, and the cluster support too.

Maybe a few embedded developers are interested in this: the kernel isn't ready for the year 2038. That's similar to the problems we had when we switched to the year 2000 a few years ago. It's only relevant for 32-bit applications; the kernel and file systems need to be changed to store timestamps beyond the year 2038. That's in the works, and LWN recently wrote it might get finished this year. I guess that's possible; maybe it takes a bit longer, and then it takes even longer until userland is also migrated to handle everything, but we're getting there. Some of the BSDs actually fixed this two years ago already, but it was easier for them for various reasons.

Tinification is also something a few kernel developers are looking at now: making the kernel a bit smaller and reducing the overhead to make Linux suitable for Internet of Things devices. There were some roadblocks — people wanted to reduce the network stack to really basic functionality, and the network subsystem maintainers said no — but that was years ago already, and over time people have arranged things so it's possible to make the kernel smaller and more interesting for Internet of Things devices. Those devices are also becoming more powerful thanks to enhancements in processors, so it might not be that much of a problem, and Linux might be used there quite a lot.

Kernel hardening is also something a few people are investing more time in these days. They look, for example, at the grsecurity patch set, which has a few security enhancements for the Linux kernel, and check whether they can get those features, or features like them, into the kernel to improve the security of the standard mainline kernel.

Containers are an area where not that much is happening in the kernel itself, but there are a few small changes to improve security and enable new features; cgroups, for example, is one of the areas where a few enhancements are needed, so a little bit is happening there.

Android mainlining is also something that has been going on for many years now: people try to get improvements that are part of the Android kernel into the mainline kernel. The network socket destroy function I mentioned earlier is one of those features, but there are still a lot of things in the Android kernel that are not yet part of the mainline kernel. Slowly, things are getting better, so maybe in a year or two it will really be possible to run an Android device with a mainline kernel without any Android-specific patches.

Real time — for those of you who were here for the earlier talk — is also something a lot of people are interested in, to get deterministic behavior. The situation one year ago looked a bit sad, because the developers who mainly drive that work, getting the real-time enhancements into the kernel, needed funding. That situation is now mostly solved: the Linux Foundation started a collaborative project last fall to get that running, and these days there are at least five developers being paid to get the real-time stuff into the kernel — and, thanks to that, into KVM as well, so you can do real time in VMs. That was actually covered in the talk in this room right before this one; you can watch it on YouTube later if you're interested.

That was part three; we're getting closer to the end. [Question.] That question will actually be answered in a few minutes — I have a few more slides. Is there another question? Was it too quick? [Comment: I like your format.] Okay. [Question about live patching.] Actually, you need to be a kernel developer and look at what changes: if a data structure changes, you can't do live patching, for example. Red Hat and SUSE offer it as a service, because if you want to do it yourself, you have to be really skilled.

So I guess I have a minute, maybe two, for the last slides. If you want to know more details, just google for them; there are more details on the web about everything I talked about, and if you don't find anything, just ask me. One thing I normally do in these talks is tell people to help test the kernel. I'm skipping that because I'm doing a lightning talk today at six or something — I don't know which room, just look it up in the schedule — and I'll go into that a bit there. The other thing I wanted to ask you: give me feedback. I might do this again, but only if you tell me how bad I was, or whether my English was that bad. Tell the organizers too — there's a feedback survey on the web page — so that they might invite me again. And that's actually slide one hundred eighty-five; that's it. If there are other questions, simply grab me in the hallways or something. Enjoy the conference. [Can I grab you now?] Actually — was I going to talk about video? Sometimes I'm not sure, because it's so much. I guess maybe not. Okay, thanks.