So, with that, we are going to slide two. And it's up to you. Microphone, please. Okay, I know you said no printk questions, but I can't help myself. So, is printk okay if you're not using a serial console? No. No, okay. No, I mean, kind of. There were a lot of problems inside the printk core aside from the actual consoles. So even if you implemented a dummy console and had this as the only console driver, the printk core was still not really nice on the real-time side. That's fixed. Now we have to address the remaining points so that we can actually use printk with a console driver in a safe way. But this is not necessarily an RT problem. RT just made it even more obvious what a huge pile of duct tape printk is. I mean, we experienced that with a lot of the facilities we were looking at over that journey. It's the same pattern all over the place, whether it was CPU hotplug or printk; very similar patterns for both infrastructures. Everybody knew that they are broken from a design perspective. And a lot of people were talking about it over time: hey, we need to fix that, we need to rewrite that and put a proper design in place. But all that would ever happen is: there was this pile of turd, then you put duct tape over it, then you add another layer of turd, put duct tape over it. It slowly composts, but it doesn't work. So at some point you really have to bite the bullet and rewrite it. Okay, since I've got the mic, I'm going to ask a question unrelated to my first one. So I've been telling people that most of the stuff is upstream. By my last estimation, I looked at the patch set; there are like 80 patches and they're kind of scattered around. I think the majority of the patches are related to the serial 8250 driver. But it's only about 4,000 lines of code.
And so I've been telling people, hey, most of it's already upstream, you don't have to apply the patch. Is that correct? No. Okay. I'll stop telling people. No, it's not correct, because you still can't enable RT on the mainline kernel; we need to get the missing printk bits and pieces into place before we can do that. And then the tail of the RT patch set, which is mostly some drivers and some other tiny bits and pieces, you can turn off. That's easy enough. But the printk parts you need to have. And then we also have to get Linus to pull the patch which enables it for x86 and arm64. That comes when the threaded printk infrastructure is actually in tree. So we're not far out, but, yeah, most of the problems are solved now. At least that's what Petr Mladek and John tell me. I'll believe it once it hits upstream. I have a question about transparent huge pages. Currently, if you are on PREEMPT_RT, it's just not compatible. You cannot... No, it's disabled in the config. Yeah. But kind of... Patches are welcome. So the question is, can we do something about it? Yes: fix the transparent huge pages code so it works with RT, which we didn't tackle because people didn't request it. I mean, the question came up every now and then, but we are focused on getting the other things done instead of playing with THP. So there's no technical reason why it can't work; it's just that the current code base doesn't work. Yeah, it's also my understanding, because transparent huge pages really help to minimize TLB misses. Yeah, sure. But there's only so much time and so many things to do, so you have to prioritize. And I don't care much about THP before I have RT completely enabled upstream. Why would I waste time on that? The printk problem is way more important than THP. And there's nothing which prevents other people from working on THP if they need it. We can look at it afterwards if nobody beats us to it. I'm happy not to look at THP code.
I understand that you can avoid printk, but you cannot avoid THP. No, I can avoid THP. I mean, there are tons of systems out there which use PREEMPT_RT and avoid THP. If your particular application requires THP, then it's not my problem. Okay, and another question. So either you send the patches, or you ask some unnamed company to provide that service to you. There are many ways to achieve it. But you can argue in circles; it's not going to happen just because. That's on the list of things to look at once the primary things are solved. But why would I work on THP if I can't enable RT upstream? Upstream first. That's the priority of the project. I understand. And another question, about softirqs. Currently they run at SCHED_OTHER default priority. Can we bump it? In RT it's bumped already, if I remember correctly. Soft interrupts are another piece of horror show. Yeah, the timer thread is RT; the rest of the softirq threads are SCHED_OTHER. But we won't change the defaults. I mean, there's user space which can tune the priorities. It might be the correct thing for your application; it might be totally wrong for a different one. Because soft interrupts are semantically ill-defined. They are context stealing, and they are not really controllable. That's one of the big issues. We've been debating this with, especially, the network people for more than a decade. Yeah, exactly. The design is wrong. They defended it tooth and nail, up to the point where, a couple of months ago, they finally admitted that they have reached the limits and have to rethink. Yeah, exactly. NAPI is kind of broken on PREEMPT_RT because it runs in softirq context. Yeah, but I mean, you're just complaining about facts. About facts which have been well known for years. So why is nobody tackling this instead of complaining? I mean, we are working on things and trying to make it work as much as we can, but we have limited capacity. Just a short comment.
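The point that "there's user space which can tune the priorities" can be sketched in C. This is a minimal illustration, not from any project discussed here; `make_rt()` is a hypothetical helper name, and the actual `SCHED_FIFO` switch only succeeds with CAP_SYS_NICE or a suitable RLIMIT_RTPRIO:

```c
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Try to move the calling thread to SCHED_FIFO at the given priority.
 * Hypothetical helper: validates the priority against the range the
 * kernel reports, then asks the scheduler. Without privileges the
 * sched_setscheduler() call fails with EPERM, which we pass back. */
static int make_rt(int prio)
{
    struct sched_param sp = { .sched_priority = prio };

    if (prio < sched_get_priority_min(SCHED_FIFO) ||
        prio > sched_get_priority_max(SCHED_FIFO))
        return -EINVAL;

    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        fprintf(stderr, "SCHED_FIFO denied: %s\n", strerror(errno));
        return -errno;          /* typically -EPERM when unprivileged */
    }
    return 0;
}
```

The same effect is what tools like `chrt` provide from the shell; which priority is "correct" is, as said above, entirely application dependent.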
You can put your NAPI into RT context, yeah, with threaded NAPI. Yeah. That was exactly my question, so it is probably answered. But I'm interested in the actual state of networking, mainly so I can control the network. So basically the network people are moving to threaded NAPI right now. And that solves a lot of problems. Not all problems, because you still have the local bottom half disablement in most of the places. But if they actually move completely over to threaded NAPI, then it allows us to gradually remove the bottom half locking. You still have locks, but you spare the bottom half disable. I mean, right now a local bottom half disable is basically a big kernel lock. It's per CPU, but it's a big kernel lock, because it's completely unspecified what it protects. It's like the old big kernel lock, which just was: oh, it protects something, but we can't tell what. If you removed it, stuff broke, but you couldn't tell why. So removing the big kernel lock was a fun experience, because we actually figured it out for almost all places except one. And that's the TTY layer. Yes, there is another question from my side. Yeah. For a LIN network, a LIN interface, it would be interesting to add some additional support into the generic TTY layer. Why are you interested in TTY? Because we want to do it in a way that is ideally portable to all the hardware which is available, and the only drivers you have are those which are used for the serial ports. Yeah. But if you go through TTY, you're toast. I mean, good luck fixing the TTY layer. I'm not even touching it with a 10-foot pole, even if you pay me money. Yeah. No. No, it doesn't work. It can't work. I mean, if you really have requirements on the real-time side for serial communication, then we have to come up with some other mechanism, but definitely not TTY. This is unfixable. Even with a custom line discipline?
I don't think that you can fix it. It's all so deeply nested into each other that you can't even isolate the serial path. It doesn't work, because it's TTY. I mean, TTY is a horrible spec in the first place, and the implementation is even worse. Okay, but people are using this stuff. What we need is something like: send 16 bytes and read. It doesn't matter how much it needs to send. I mean, if this is the only code path to that device, then you have a problem. And it is the only code path to that device. So if we have a real requirement and a real use case for non-TTY-based access to certain devices, because what you're trying to do has absolutely nothing to do with TTY... Yes. ...then we have to sit back and ask: is there a way to actually come up with a solution which has a straight write-to-this-device code path? Yes. I have experience with maintaining an out-of-tree driver from the 1.2 kernel, which is still in use. But it is then dependent on exactly one single-chip hardware, which is a problem. Yeah, but I mean, if that's a known problem since 1.2, I have to ask the question why people still haven't found the time to actually fix it and work with upstream to provide a secondary interface. I mean, if we have support for so many esoteric things in the kernel, and you come up with a reasonably good use case and a reasonably good argument why you can't use X and you need an alternative solution for that, then there's no reason not to at least try, and it hasn't been tried. That's a good question, because 1.2 was how many decades ago? Okay, but our protocol is niche and it's very special, nine-bit communication and so on. For this one, basically... Yeah, but the thing is, you noticed 30 years ago, and you want to have access to all of your drivers, and without having infrastructure, you won't get there.
And I mean, waiting for somebody to fix the TTY layer for you, it's a good approach, but you'll probably wait past your retirement. Yes, for sure. At least past mine. Okay, I think that I contributed in other areas which got mainlined, so this is the niche. Yeah, but I think it's doable; it just needs some thought and some work. Any other questions from the remote side? So far no questions on Zoom. No questions on Zoom. So what are we going to do? No more questions? Can we get the i915 driver back soon? Okay, let's go. So the only way to get the i915 driver back is to wait for the replacement driver which is currently in the works, because i915 is unfixable. It has been coded to death. The patches we have in the real-time tree are extremely horrible. Merging them upstream, we might try, but not yet. We can try after we have the enablement upstream, but not before, because that might set a bad precedent. The problem is that some of the locking code paths in the i915 driver are completely home-brewed and outside of any rational locking scheme in the world. That's one of the reasons why people started to implement a new i915 driver stack: because you can't fix this old thing with any reasonable effort. But the replacement driver stack is coming along, so you just have to wait for it. I was fearing that question would come up. Interestingly enough, i915 fits perfectly into the same category of train wrecks in the kernel which we fix up, or try to fix up, over time. It was a masterpiece of duct tape engineering. If I can ask, do we have a list of things to avoid? Like you mentioned the TTY layer, the i915 driver, what else? With the hacky patches in the real-time patch set, the i915 driver works, by some definition of works. It's not in the way. What you want to avoid is using the TTY layer in your real-time task; that's going to end badly. If you do printf in your highest priority task, so be it. That's fine, you can. Whether that's a good idea or not depends.
Probably it's not a good idea. But that's basic knowledge that has nothing to do with the TTY layer, because printf is basically a write. Do we have other areas that should be avoided? Like maybe, I don't know, FAT or btrfs or, I don't know? No, I mean, the file systems are mostly... I haven't seen any file-system-related issue for many, many years. I was getting btrfs latencies, but they were in the 30 microsecond range. That's a very long latency. Maybe. I haven't seen any of those, or at least nobody talked to me about them. I mean, yes, there's always going to be something where people needlessly slap a preempt_disable somewhere in the code, and we can't prevent that. You have to observe it, you have to find it, to fix it. Tell the people: don't do it, please. I mean, that's the missing documentation part. I urgently need to find the cycles to complete the kernel developer's guide to RT, so we can point people to it: hey, you should read this first. In the btrfs example that I was showing in my presentation, the lock was made by Sebastian, so you can ping him. Yeah, but it's... No, it is a useful... But that's a regular lock. That's not a preemption problem. So, I mean, yeah, file systems usually stay out of the way, because they are not part of your real-time computations. You'd better not. I mean, if you write your log file from your real-time task, fine. You asked for it. If the disk stalls, hey, you wait 30 seconds. Even if it's on something like an in-memory file system, like a ramdisk? Ah, that might work. But seriously, don't do it. No, I mean, the thing is, it just violates all the principles of real-time. And that's not a Linux-specific problem.
Read the documentation of any particular real-time operating system and they all tell you: don't use write, don't use read, don't use this, don't use that. They all say: read your data upfront, or, if you have to feed data continuously, then have some non-real-time thread and large enough buffers so you can guarantee that your data is there. If you need to write large amounts of log data, do it the other way around: stick it in a ring buffer and let somebody else take care of it. That's basics. You should have learned that at school. But doing it in a real-time context is definitely wrong, independent of Linux. Microphone, please. Yeah, but there are real use cases where you have data streaming in from, for example, cameras or radars or lidars in the automotive world. Say you have, I don't know, half a gigabit per second streaming in real time and you need to handle it in real time. I mean, that's a system design problem, really. Because you can do it, and people actually do it. I mean, depending on how you set up your system, you need a dedicated network queue for that. You do not want to have other unrelated traffic on the same interface; then it actually works. But that's a system design problem. There's no general recipe to make that work. I mean, if you have one network queue where you get your camera stream in, and on the same network queue you have megabytes of HTTP traffic, you lost. No, of course. But even if I have a separate network, I have problems with the network stack, which is not very friendly to real-time, starting with NAPI. Yeah, it's not very friendly to real-time. Actually, you can tune it to a point where it works reasonably well. I mean, people do it out there. There's definitely a lot of things which can be improved. But again, this is extremely use-case dependent. It's also dependent on which path through the network stack you're taking. People use XDP in order to avoid some of the issues.
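The "stick it in a ring buffer, let somebody else take care of it" pattern described above can be sketched as a single-producer/single-consumer ring in C11. This is an illustrative userspace sketch, not kernel code; the names, the fixed size, and the drop-when-full policy are all choices made here for the example:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define RB_SIZE 1024            /* must be a power of two */

struct ring {
    _Atomic size_t head;        /* advanced only by the RT producer */
    _Atomic size_t tail;        /* advanced only by the logger thread */
    int slots[RB_SIZE];
};

/* Producer side: called from the real-time task. Never blocks;
 * when the ring is full the record is dropped instead of waiting. */
static bool rb_push(struct ring *rb, int v)
{
    size_t head = atomic_load_explicit(&rb->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&rb->tail, memory_order_acquire);

    if (head - tail == RB_SIZE)
        return false;                       /* full: drop, stay real-time */
    rb->slots[head & (RB_SIZE - 1)] = v;
    atomic_store_explicit(&rb->head, head + 1, memory_order_release);
    return true;
}

/* Consumer side: the non-real-time thread that writes the log file
 * and eats the disk stalls so the RT task never has to. */
static bool rb_pop(struct ring *rb, int *v)
{
    size_t tail = atomic_load_explicit(&rb->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&rb->head, memory_order_acquire);

    if (head == tail)
        return false;                       /* empty */
    *v = rb->slots[tail & (RB_SIZE - 1)];
    atomic_store_explicit(&rb->tail, tail + 1, memory_order_release);
    return true;
}
```

The design point is exactly the one made in the answer: the real-time side only ever does bounded, non-blocking work, and any operation that can stall (write(), fsync(), a slow disk) lives entirely in the consumer thread.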
There are so many options. There's no general recipe to address that. I mean, XDP zero copy removes a lot of the network stack issues. So I've got a question that's maybe tangential to RT, but maybe not. So, you guys are Intel employees, or how does that work? No. So what happened? Linutronix people are still Linutronix employees. Well, no, but I mean, I read an article that Linutronix was acquired by Intel. Yeah, so just the stockholders changed. Oh, the stockholders changed. Okay. So I was just wondering, is Intel funding some work on RT? That's a customer, so I don't know if you can comment on that. Intel always funded RT work via the RT project. Yeah. I mean, Intel has as much interest as Arm in making this work. Yeah, Intel and Arm and TI and NI and a variety of others, and Red Hat, of course, have been strong supporters. Yes, Siemens, thank you. Anyhow, they've been a tremendous resource for making this actually possible, getting all of PREEMPT_RT upstream. It's been a long journey, they've been with us through it, and we very much appreciate the sponsors who've been making this work. When did it start on LKML? Yes. So for me personally, it started at the end of 1999. So next year it's going to be 25. Yeah. It's a long journey, and there are a lot of things we need to address and improve over time, but there's only so much capacity. Yeah, I tried this trick of working day and night, but it doesn't work out. It doesn't make things more effective. And then you have to tell all the other people to stop adding stupid things to the kernel. That might be helpful too. Intel could order people not to invent new problems. So, yeah, no, we're going to tackle things as they come along. Last question. I think we are over time already. Okay. So, as far as I understand, cyclictest is a known-good reference application. Is there another application that can be used?
No, I mean, cyclictest has a purpose, but it doesn't resemble any particular real-world application. Cyclictest has the purpose of testing certain aspects of the system. Do we have an example application that is known good? Everybody's application. But the problem is the variety in the requirements of the particular workloads. So why isn't there a good example somewhere? No, I don't know. No. I mean, cyclictest is a reference implementation which tests something. There are test suites which test other aspects. But real-time applications vary in complexity and they vary in requirements: very simple ones, purely periodically driven, and very complex ones which are multi-threaded, depend on network input and whatever. Okay, we can start with some simple example. So we have tried to do that for our students. It would be something like motion control with a simple PID, running as a real-time task, accessing the hardware through the mmap interface. If one can come up with a synthetic version of that which does not require special hardware, fine. Yeah. So, I mean, that's the whole problem of tests and benchmarks: you need to be able to run them, basically without hardware restrictions. Otherwise they are useless. If they just work on one particular system on the planet, then exactly one person on the planet will use them. Ideal would be to have an open source FPGA and to be able to run it on that; everybody can. Yeah, sure. But who has the open source FPGA? Intel can. We are back to exactly that. Intel can. Yeah. The problem is, if you want to have something which can be tested widespread and also integrated into continuous integration robots and stuff like that, depending on special hardware is really a bad idea. Unless you can provide the CI infrastructure with enough hardware, so they can integrate it and run it for you.
But unless that happens, these tests... yeah, you can run them and report back, and there's infrastructure for the CI tests. When you have your special CI infrastructure, based on your special hardware, based on your test application, you can actually feed back the information, feed back regressions and stuff. That makes a lot of sense in the context of: this is my special reference thing. But other than that, I mean, it's cool if it's open source, but it's not helpful if I can't run it, because debugging it is then impossible if I don't have access to the hardware. So you'd have to provide the debugging too; that's the downside. Okay. So I'm no longer in the way between you and the board. I'm sorry. Thank you.