Hello, good afternoon, good evening, good morning, wherever you are.

That's a good question, Will. I don't know, it's just being me. I guess I always tried to do what I thought was the right way to do it, and it allowed me to build up a small company and make my living. But avoiding the tech giants, yes, that was always my plan. In the early days, during university, I had an internship at a large tech company, and it was such a bad experience that I decided never to go there.

Predictions for the kernel? Oh, I don't know. This is hard. I mean, it's all going to go towards more accelerators, towards more utilization, towards more performance and whatever. But the use cases are getting broader. It's not only the enterprise stuff getting more attention; it's all over the place, and there's a lot of overlap in the application space, so people have to work closer together to solve these problems.

The big outstanding bits of the RT patch: printk is still outstanding. The first portion of it just went into 5.10-rc1, the ring buffer rewrite, which took a while because it affects tooling, like crash dump analysis and things like that, and they packed it together with some other changes they wanted to do anyway. What's outstanding is most of the locking core stuff, but that's pretty self-contained, so aside from Peter hating it, it should not be that contentious. The migrate-disable stuff is on the way, I think. Those are the big parts. I have a patch that I'm polishing now for the softirq handling on RT, but that's not too bad.

Exciting Linux kernel features, hardware features? I don't know. I mean, there are a lot of exciting, or not so exciting, hardware features on the horizon, and people are trying to make them work. I don't see anything really outstanding; it's just building on top of what we have right now. So I think, yes, it's the accelerator stuff.
It's the multi-device stuff, which you can hand into containers and VMs and whatever, but it's nothing revolutionary. I don't think there is anything which will just turn the kernel around in a revolutionary way in the near future. Maybe the Rust proposal; it might make sense, at least on the driver side.

Who will shape Linux in the future? Interesting question. I think there's still room for individuals; you just have to have the time to do so. I consider myself an individual contributor. I mean, I'm backed by a company, but not by a big corporation like Google or Microsoft. So there's room, but you have to have broad shoulders to stand up to those tech giants. At the very end, I think even if the tech giants contribute most or a big chunk of the kernel, they still have to agree with each other. So the collaborative model and the consensus model will not go away.

Iron is asking how it went with the RT priority inheritance project. Which project are you talking about? I think you are referring to the rework where people are trying to move away from priority inheritance towards proxy execution, in order to make deadline scheduling work. That's still work in progress, mostly by the folks at the university in Pisa and some others.

Michael: so, there is no requirement that the mainline drivers migrate to threaded IRQs; we just force-thread them, and you can force-thread them in mainline today. So they mostly don't care. We've just found a few things in the networking code where people make assumptions that this can't happen, but it happens. Those are rather small and minor issues, and it's not an RT problem at all: once you enable force-threading on the kernel command line, you run into the same problems without RT.

There's a question whether we'll see more usage of Linux in the automotive industry with PREEMPT_RT merged into mainline. I don't think it really depends on it being merged or not.
It's going to be... they are using it anyway. And, of course, it makes it easier for them if it's mainline and going the normal route, instead of having an extra patch set or an extra stable tree and things like that. So it makes things easier, but it's not going to create more usage. I mean, there are tons of options now.

Thomas is asking what the issues with tasklets are. Tasklets are interesting. It's mostly the implementation which sucks. It's a way to run arbitrary callbacks in softirq context, and it's semantically ill-defined. Interestingly, a lot of the tasklet users shouldn't be there at all; they're just doing arbitrary stuff which could be done in a threaded interrupt handler as well. But each use case needs to be looked at individually; you cannot do a wholesale replacement. That will take quite some time.

Yes, I think Linux can see disruption by smaller OSs from the bottom up, but this is not a surprise. Linux has grown out of "oh, we run on everything from the smallest microcontroller to the largest server". The smaller OSs like Zephyr and others eat up from the bottom, where Linux doesn't fit and can't fit anymore.

That's a good question. Yes, Linux per se is not a real-time operating system, but we have patches, and they are gradually being merged, which turn it into a real-time system. It's hard to claim it's really in the tradition of real-time operating systems which were designed from the ground up to support real-time. It's close, but it's not mathematically provable. So it's best effort, but with quite some success, and it's the most scalable real-time system on the planet; it runs on large machines.

I haven't looked deeply into EVL, so I can't even tell what it actually does. I have to defer that question to a different point in time, when I've actually had time to look at it.

No, I haven't written much Rust, but I like the concept of Rust.
Rust would help to avoid quite a few of the problems C brings along, especially with people who are not aware of all the nasty shortcomings of C. So it's mostly something to make it harder for driver writers to screw up.

SIMD in kernel space. Interesting question. Without kernel_fpu_begin() and kernel_fpu_end()? I don't know. It's hard to tell, because the main problem is that if you want to do that in all contexts, including interrupts or softirqs or whatever, then you have to save the SIMD state, or whatever XSAVE state, at every interrupt entry, and that's not going to be cheap. So that needs a lot more thought than just "okay, let GCC auto-vectorize the code". It's not that trivial, because you carefully have to weigh the performance impact of saving the register state on exception or interrupt entry against what you gain in the code itself. You have to try it and figure out whether it actually works. I mean, technically it works; that's not the problem. But you have to carefully weigh whether it actually brings a performance benefit versus the overhead of saving and restoring the register banks.

Tricia: yes, my main project is to buy a goat farm. Unfortunately, I don't have enough time to follow through on that; there's way too much kernel stuff keeping me busy. But someday I might just throw the kernel stuff aside and go for the goat farm.

RT KVM VMs: I think they're still being done. Red Hat, I think, does this a lot.

Christian: no, I don't know about the page lock fairness stuff. I haven't followed that.

So, DPDK. I mean, DPDK and raw sockets, you can't compare them. I think what you really want to look at is whether you can use XDP versus DPDK. XDP still leverages the drivers in the kernel and the whole kernel facilities, with all the good things, but avoids the overhead of crossing the syscall boundary and things like that.
So, yeah, there's an advantage in using DPDK, but the XDP-based approach is what I would suggest you look into, not raw DPDK, re-implementing a complete driver in user space. That's a total mess.

Robert: yes, the worst-case response time for a generic Linux kernel, it's maybe 500 milliseconds, whatever. I don't have recent numbers, but it used to be in that area. But with the PREEMPT_RT patches, it's definitely not at 500 milliseconds. If that happens, that's simply a bug. I mean, we can't prove it mathematically, but we would see it in the massive testing which is done out there. The only way I can think of is if you use some weird out-of-tree driver, or some really badly maintained driver. But you should see the problem already in testing with the proper debug options, which would tell you that the driver is doing something wrong. In the generic case, yes, it's hard to prove, but I don't think so.

Kernel tinification, yes. That's an effort which has been going on forever, but it's hard for them to catch up. I can't tell what the minimal requirements for a platform are right now; I lost track of that. But it's definitely not the tiny microcontroller thing which has a very, very limited amount of RAM and things like that. There are investigations out there from other people about what the minimal requirement is, but I don't know.

Drew: one microsecond on a modern Intel or AMD server? You're kidding. One microsecond is way less than the hardware-induced latencies can be, and that's something the kernel has no control over. If you look at things like DMA, bus contention or whatever, it's extremely hard to guarantee one microsecond on such a machine as a worst-case guarantee, because they are not built for that.
I think it's an illusion. But say you really need to handle that: you should have something which is less complex. There are a lot of approaches out there in the industry where you just use things like FPGAs or some side controller which does the really, really hard-to-achieve one-microsecond thing, and then you offload this part of the real-time computation to that accelerator, or whatever you name it. I think that's more realistic than trying to say "oh yeah, we can do the one microsecond on a modern AMD or Intel server". I think it doesn't make sense.

Pankaj: yes, it's hard for individual contributors to watch what's going on in multiple subsystems if you want to contribute something which overlaps those subsystems. It's hard even for single subsystems; depending on their activity level, it can be hard to follow. It heavily depends on how you organize yourself and how much time you can spend to actually follow what's going on. There's no general rule for that.

Versus the patch series which is thrown over the fence every six months and then people disappear: yes, I mean, if there's really value in the patch set, if there are fixes, you really should pull them out, polish them up and apply them. If it's something which actually makes it easier for your subsystem to be maintained, or gives you a better code structure or whatever, then it's worth it as a maintainer to actually go there and do the dirty laundry. So look at it; there's again no general answer. It depends on the quality of the patches, or on the quality of the approach. I mean, if the approach is good, why would you throw it away?