Good morning, everybody, and thanks for showing up. I think this is actually the biggest stage I've ever had for a presentation, much appreciated. Let me quickly introduce myself: my name is Jan Altenberg. I'm a trainer and project manager at Linutronix, which is a service provider for embedded Linux development, so you might have heard of us. We do a lot of mainline development; for example, the x86 maintainer works at Linutronix. And one thing we do, on behalf of the Linux Foundation, is work on getting the real-time capabilities of Linux into mainline, and that's basically the topic of my talk today. I'd like to give you a brief introduction to using real-time systems with Linux, or rather, how to make Linux real-time. This is not going to be a very technical talk; it's an overview of what you can do, what's possible and what's not, with some historical details about real-time in Linux. First of all, before I look into Linux and real-time, we need a clear idea of what the definition of real-time actually is, and who needs real-time systems.
That's how we're going to start today. Then we'll look into Linux and real-time. You'll see that there are several different approaches to making Linux real-time capable; real-time systems based on Linux have been around for more than 15 years now, so there are quite a few historical approaches. I'll show you the most important ones, the ones that are still actively developed today and used in industry. We'll also look into the results, the latencies you can actually achieve with real-time Linux on a specific platform: a Cortex-A9 dual-core system, where we'll see which latencies can be achieved with the different real-time Linux approaches. So that's what I'm going to talk about today.

Now let's get started and quickly rethink what real-time is about. For me this definition is really important, because when we do presentations or trainings on this, we still see a lot of confusion about the definition of real-time. One definition we get really often is: real-time is just about fast responsiveness, fast execution. I don't know, is there anyone here who would agree with that? No one? You're experts already. Right, that is not the definition of real-time, but you hear it very often. Another definition we get quite often is that it's about performance, which is also not true. Well, we have to admit that the word "real-time" can be a bit confusing. But it's not fast execution.
And it's not performance either. If we went outside the hotel and asked people on the street what real-time is, they'd probably tell us it's just the time of day. We do have a "realtime" clock in Linux, and that is the time of day, but that's also not what we're talking about today. So "real-time" is kind of confusing, and that's why it's so important to get a clear definition and put it into the right context with Linux. It's not about fast responsiveness, it's not about performance; it's all about determinism. When we talk about real-time, we talk about timing guarantees: you need to execute something within a specific time frame, and you need a guarantee for that. So the definition is not "as fast as possible", it's "as fast as specified", and that's the important thing about real-time. If 30 seconds is enough for you, that's fine, that's your definition of real-time, but you need a guarantee that you'll meet it. So when I talk about Linux and real-time today, I'm going to talk about deterministic timing behaviour with Linux. That's the idea of this talk.

(There was a short interruption here because of a problem with the projector.)

So, once again, we've been talking about determinism. Now that we've started talking about real-time Linux, the next question is: who needs it? When we talk about real-time, we have to keep in mind that the correctness of a computation is not just based on the correctness of your algorithm; it's also based on the correct execution time. If you don't finish the calculation within the correct time frame, that leads to an error condition. In a real-world example, that error condition might be that the product you're manufacturing is broken, or that your machine breaks. And the worst case would be that missing the time frame actually gets someone hurt. If you're in a situation where a machine or a product can break, or someone can get hurt, that's the moment you need to think about real-time requirements.

One last point on the definition of real-time, and this is something we hear really often when it comes to Linux: the term "soft real-time". I don't want to go really deep into that topic, but please forget about this term; we don't talk about soft real-time. Let me give you an example: if your wife told you she's "kind of pregnant", would you believe that? No, you wouldn't. She can be pregnant or not, and it's the same with real-time. Another example: think about a use case where missing the time slot leads to an error condition, say, I'm interested in not getting hurt. "Not getting hurt most of the time" wouldn't be acceptable to me. It's the same with real-time: you're deterministic or you're not, there's nothing in between. So please forget about soft real-time. When I talk about real-time today, I mean hard real-time and deterministic timing behaviour. That's it.

Now, when you get into a new technology, the first thing you do is check out who else is using it at this very moment; you don't want to be the first one, right?
Well, real-time Linux actually has many users around the world already. We have a lot of industrial applications, the automation industry, by now also the automotive industry, multimedia systems, even aerospace systems in non-safety-critical applications. And we have financial services, which sounds pretty strange, but they run real-time Linux on big server machines, for example for high-speed trading, so they need a real-time operating system. There are already a lot of users of these systems.

When it comes to a real-time operating system, we need to check a couple of requirements. We need deterministic timing behaviour, and to get that, one of the most important features an operating system needs is preemption. You need to be preemptible in most parts of the operating system, because a high-priority task must always be able to preempt a low-priority task. That's the most important requirement for a real-time operating system.

You also have a couple of special situations to deal with. One of them is so-called priority inversion; some of you might have attended the Real-Time Summit yesterday, where there were a couple of discussions about this. In real-time systems this is a classic error scenario. Think about static priorities and three tasks: task 1 is the highest-priority task, task 3 the lowest-priority one, and tasks 1 and 3 need to share a resource, so we need some kind of synchronisation. What can happen is this: task 1 blocks because task 3 is holding the resource, which by itself is not a problem at all. But then task 3 gets preempted by task 2 before it can release the lock. The result is that a high-priority task is waiting on a low-priority task, and the lock never gets released until task 2 stops running. This is a classical error situation in real-time systems, and a real-time system needs to deal with it. There are a couple of strategies, and Linux can already deal with this situation, you don't even need a real-time extension. What Linux does here is so-called priority inheritance. To keep it very simple: task 3 is boosted to the priority of task 1 until the point where it releases the lock, so we prevent task 3 from being preempted by a medium-priority task in between. So it's not just preemption; we have to deal with a lot of different use cases that are very specific to real-time applications, and we need to think about how to get these features into a general-purpose operating system like Linux.

Traditionally, there are two approaches to making Linux real-time capable. The oldest ones are the so-called dual-kernel approaches, which basically don't do real-time *in* Linux; it's more like having real-time *next to* Linux on the same system. These approaches introduce a microkernel which does the real-time work. The other idea is to find a way to make Linux itself real-time capable. Let me try to explain the first one. This is the classical dual-kernel or microkernel approach: you add a small real-time kernel.
This small real-time kernel handles just the real-time work, and the idea is that Linux runs on top of this microkernel as the lowest-priority real-time task. So whenever no critical real-time application is runnable, Linux gets some runtime. That's the basic idea of the microkernel approaches, and it looks quite clever at first glance. But if you think a bit more about it, there are two problems you have to solve. First, someone needs to maintain that microkernel and port it to new hardware. Just think about the number of ARM SoCs coming out right now; it's exploding. So someone has to maintain the microkernel and port it to new hardware, which is a huge effort, and these communities are not as big as the Linux kernel community. Second, Linux is no longer running on the physical hardware, so you need some kind of hardware abstraction layer inside Linux just to make it able to run on top of the microkernel. So you have two things to maintain: a microkernel, and a hardware abstraction layer that has to be adjusted to each recent Linux version. That's a big effort, and it's one reason why most of these approaches are usually a step behind Linux development: they just can't keep up with its pace. Still, these were the first approaches to real-time in Linux, and they're still around; I'll give you two examples of such implementations.

The other idea is, instead of moving the work to a microkernel, to find a way to make Linux itself real-time capable, which at first glance looks like a huge task. Basically it means you have to touch every single file in the kernel, and I can tell you, we had to. But it can be managed, and at the end of the day the result is that you have real-time *in* Linux: no microkernel to maintain, no hardware abstraction layer to maintain, and you can just use the standard tools and work with plain Linux. So if you had the choice, that's the way you'd want to go.

Let's have a look at some real implementations of real-time Linux. One of the very first approaches was RTAI, the Real-Time Application Interface. It comes from Italy, from the Politecnico di Milano, and I think they used it for a couple of aerospace applications. It was actually the first real-time Linux I used myself, 15 years ago or so, and it has been around for years. RTAI is a classical dual-kernel approach.
RTAI had a couple of drawbacks in use, though it is still around and maintained. The basic model is that you write a kernel module which is scheduled by the microkernel. But that is kernel development; it's not like writing an application, so it's really hard to get into and hard to debug. And one thing I had to face when doing this for an industrial customer: since most of the code lives in kernel space, it's GPL code. That's not a big deal if you do open-source software, but an industrial customer usually also has some closed-source parts, so you constantly had to think about which parts could go into userland without real-time guarantees and which parts had to go into the kernel to get them. So doing real-time with RTAI was pretty hard, because most of the work is done in kernel space. They did have an interface for doing real-time in parallel with user-space tasks, called LXRT. I tried it several times, and for me it never worked very well. I'm told it has become more stable these days, and I admit I haven't tried it in a long time, but in my experience you usually have to stick with kernel space on RTAI. Another problem is that portability was explicitly not one of RTAI's design goals; the goal was rather to achieve the lowest possible latencies on a given set of hardware. That means the number of supported platforms is quite limited: basically x86, 32- and 64-bit, a couple of ARM platforms, and that's it.

If we look at the RTAI design, it's the classical dual-kernel approach, and you can identify the two problems I mentioned about microkernels: you have to maintain the real-time kernel, which handles everything related to the timing-critical parts of the system, and you have to maintain the hardware abstraction layer. Linux is simply living in a completely different world; the only thing you have is a simple communication mechanism to synchronise the two worlds. You can see nicely in this figure that it's not real-time *in* Linux; it's real-time running *in parallel with* Linux on the same system. That's what you get with these approaches.

There's another approach called Xenomai, which is still pretty popular; it's still used in a lot of industrial systems and still on the market. It's also quite old; it was founded around 2001, so it has been around for more than 15 years now. The reason it's still so popular is that these guys had one big advantage: they always had a proper solution for real-time in user space, so you didn't have to do everything in the kernel, and userland is much easier to deal with. That's one thing they did really well, and they had a nice idea for the user-space API: the so-called skins, which are emulation layers for the APIs of different real-time operating systems. There's a pSOS skin, for example, and even a POSIX skin. So if you have a pre-existing code base for some other operating system, you can reuse it with Xenomai, which is pretty nice.
These layers are not feature-complete; they're subsets, and even the POSIX skin is just a subset of POSIX. But you can actually reuse a pre-existing code base for rapid prototyping and port your application to Xenomai very quickly, which was a nice idea. Xenomai supports a lot of platforms, way more than RTAI, which may also be a reason why it's more widely used in the field. But it's still a dual-kernel approach: these guys also use a microkernel for the real-time work, so we suffer from the same problems. There's a microkernel to maintain and a hardware abstraction layer to maintain. And even though you can have real-time in user space, there's an additional problem: you can't use the standard C library. You need special tools and special libraries to deal with Xenomai; even for POSIX you have to link your application against the Xenomai POSIX skin, not the standard glibc. You always need special tools and libraries with these microkernel approaches, so the handling is more complicated.

If we look at the structure of Xenomai, it's actually not that different from RTAI: it's a classical dual-kernel approach, with a microkernel dealing with the hardware and the real-time tasks running on top of it. But you can see a small difference: the part called the nucleus, and the code path that brings real-time into user space, which is much easier to handle than kernel code. That's a big advantage, but to use it, once again, you can't use the standard Linux tools; you need special libraries.

So let me summarise the dual-kernel topic for you. The dual-kernel approaches were the first ones we had on Linux, for good reasons: when they were invented, we didn't have preemption and so on in the kernel, so implementing real-time inside Linux was really hard, and the dual-kernel design was the obvious approach. These approaches are still on the market, but they have a couple of drawbacks: a special API and special libraries you have to link against, special tools, plus a microkernel and a hardware abstraction layer to maintain, which usually leaves them a step behind mainline Linux development. And last but not least, remember that real-time Linux also has high-end users, like the financial services people.
Those guys are not running real-time on a two- or four-core ARM machine; they're running on an s390 with I-don't-know-how-many CPUs. And for them the microkernel approaches have a scaling problem: these microkernels are known to scale rather badly on big systems. They usually scale up to eight or sixteen CPUs, but beyond that they don't scale well at all. So we have one family of approaches to making Linux real-time, but with a couple of real issues.

For that reason, people started to rethink whether there wasn't a way to make Linux itself real-time capable. And we now have such an approach: PREEMPT_RT, the so-called real-time preemption patch. The reason for the name is that what it basically does is introduce a new preemption model into Linux, called "real-time" or "full preemption". This is a classical single-kernel, in-kernel approach, making Linux itself real-time capable, and it has now been around for about 12 years, so it's also quite mature and widely used in the field. It was started by Thomas Gleixner and Ingo Molnar: in 2004 these two began coordinating the various real-time Linux development efforts, and the result was the real-time preemption patch. There's a huge community behind this development now, and the reason is that most of it is already included in the Linux kernel: about 80 percent of the development has already made it into mainline. If you use Linux nowadays and look into the timers, the interrupt handling, the tracing infrastructure, priority inheritance and all that, these features were originally developed for the real-time preemption patch and then pushed back into mainline. That's why this approach has so many users. And since this approach makes Linux itself real-time, there's nothing special you need to learn: a real-time application is just a plain POSIX application. You could buy a 30-year-old OS book about POSIX real-time programming and you'd basically know how to write a real-time application for PREEMPT_RT. You don't need special libraries; you just use your standard C library. That's the big advantage of making Linux itself real-time capable.

The real-time preemption patch also has high acceptance in the kernel community. What's the reason for that? As I told you, a lot of features came from the real-time development into mainline.
The real-time developers added powerful features that are useful not just for real-time users but for everyone, so they made Linux a better operating system for all of its users, and that earned them a lot of acceptance in the community. The other reason is that when you make Linux fully preemptible, everything becomes preemptible and you uncover a lot of race conditions and locking problems; these were found, fixed and pushed back into mainline, which improved the stability of Linux a lot. That's another reason for the high acceptance: they weren't just focused on their own use case, they brought more quality into Linux as a whole.

The mainline integration of real-time started a long time ago. I actually checked this recently: it started in 2006, so it's now ten years since the first features were pushed into mainline. There's an original statement from Linus Torvalds from the Kernel Summit in 2006 about this. The background was that if you want to get a new feature into Linux, Linus wants to know the use case for it: who the heck is using real-time on Linux? One developer told him, "Well, I've got a customer who's running welding lasers with Linux, already with a real-time extension." And Linus's answer was something like: "Controlling a welding laser with Linux is kind of crazy, but we're all kind of crazy, so I'm fine with you doing the real-time stuff." It sounds funny, but that was basically the starting point of the mainline integration of real-time Linux, roughly ten years ago.

Nowadays you still need a patch to get the full real-time capabilities, but it's about 20 percent of its original size; most of the work is in the mainline kernel already. The patch is released for every second kernel version, and currently the Linux Foundation is pushing the mainline integration, with the goal of having the real-time capabilities fully in mainline Linux within the next two years. We're doing that development on behalf of the Linux Foundation.

So how did they manage to make Linux real-time? Remember the main requirement for a real-time operating system: preemption. To achieve deterministic timing behaviour we need preemption, so they had to find a way to maximise the preemptible sections inside the Linux kernel. What follows is a very simplified explanation, but it's basically what they did. First of all, they reworked the locking primitives, to minimise the atomic sections in the kernel and maximise the preemptible ones. They reworked the spinlocks and found other ways to deal with those situations; at some point the real-time preemption patch was even nicknamed the "sleeping spinlocks" patch, because all the locking primitives had to be reworked. And they had another pretty cool idea. Another part of the kernel where you are not preemptible is interrupt context. So instead of running interrupt handlers in hard interrupt context, they moved them into kernel threads. When an interrupt arrives, you don't run the interrupt handler code directly.
You just wake up the corresponding kernel thread, and the kernel thread runs your interrupt handler. That has two main advantages: the kernel thread is preemptible, which is the main one, and it shows up in the process list with a PID, so you can even put a priority on each interrupt. You can give a low priority to your unimportant interrupts and a higher priority to an important userland task. Running interrupt handlers in kernel threads was a really nice idea, and it's one of the features that's already available in mainline Linux: the threaded interrupt handlers. Even in mainline nowadays you can decide whether an interrupt handler runs in hard interrupt context or in a thread; that's another feature that came from PREEMPT_RT.

Last but not least, a quick overview of how the real-time preemption patch, the single-kernel approach, works: you can't see any difference from standard Linux, because it *is* standard Linux, just with a new preemption model. There's nothing special about it. If you write a kernel driver, it's just a Linux driver; if you write a real-time application, it's just a POSIX application. You don't need any special libraries or special APIs. That's it.

Now, I promised to show you some results, a real-world example of the latencies you could achieve with these real-time Linux extensions. We took a simple board with an Altera Cyclone V, the Rocketboard, which is a community evaluation kit for that SoC, and we did a couple of latency measurements: the same measurements for Xenomai, as the dual-kernel approach, and for PREEMPT_RT, as the single-kernel approach. There was a particular reason why I wanted to do these measurements: if you google these topics you'll find a lot of papers comparing Xenomai, PREEMPT_RT and RTAI, and most of them claim that the dual-kernel approaches are way faster when it comes to latencies. For most of those papers I was able to figure out that the PREEMPT_RT systems were badly misconfigured. So my idea was to get a fair comparison of the dual-kernel and single-kernel approaches and see which latencies could really be achieved. We took one Xenomai expert and one PREEMPT_RT expert, gave them the same board and the same task, and then compared the results. I think that's pretty fair, right?
But let's wait for the results. What we did was IRQ tests: we fired interrupts at a frequency of 10 kHz, using the OSADL latency box (I'll explain how that box works in a moment), and we put the system into a worst-case scenario. Since you want to guarantee timing behaviour, you have to guarantee it even in the worst case, which usually means high CPU load, high memory load, and a lot of work for the scheduler. We generated that with a tool called hackbench. Hackbench works with process groups of 20 clients and 20 servers, and each client sends 100 messages via sockets to each server, so you get a lot of inter-process communication. You can tell hackbench how many groups to run; with 40 groups, for example, that's 40 times 40 processes, so 1,600 processes doing wild inter-process communication. That's a real worst-case situation for your system: the scheduler continuously has to take decisions, you have high CPU load and high memory consumption. And while this was running, we fired interrupts and measured the latency.

Once again, taking myself as the example: there's an external event, like my alarm clock ringing this morning, and I need to react to it and get out of bed. That's what we did on the embedded board: we fired an external event, and a simple driver had to produce the reaction. What the latency box can also do is record all the samples and draw a nice histogram for you, which is interesting because you don't just want the worst-case latency, you also want to see the spread of latencies. Taking me as the example again: today getting up was quite easy, but I might go drinking tonight, so tomorrow it might be quite hard to get out of bed. It's not always the same latency; there's variance in it, the so-called jitter. That's why we want to record all the samples and put them into a nice graphic, and that's what the OSADL latency box can do for you. The box fires interrupts at a preconfigured frequency, 10 kHz in our case. It has an output pin that triggers a signal; you send this signal to your test system, the test system has to toggle a pin connected to an input pin of the latency box, and the box measures the time in between and records all the samples. We ran this for 12 hours, which is not very long; it's really the bare minimum to get a good idea of the real-time behaviour of a system, but it can give you a good impression. So that's the measurement: firing interrupts at 10 kHz, getting a reaction from the target system, and measuring the reaction time, with every measurement taken by the latency box.

We had two test cases, and the most important one was the reaction time of a user-space task, because that's the real-world example: when you need to react to an external event, you synchronise your user-space task with some interrupt, a fieldbus interrupt or whatever, but you want to do your real work, your calculations, in user space. That's why we took this as the first measurement: measuring the latency from the external event arriving, into your application, and back out by toggling the GPIO that goes to the latency box. We started these measurements on Xenomai, and these are the results in a nice histogram.
You can see the latencies on this axis and the number of samples right here. The average was about 30 microseconds reaction time, but the average is not the number we are actually interested in, right? We want timing guarantees, and for timing guarantees we need to check the worst case. You can see the worst-case latency right here, the small peak: we had one sample at around 90 to 95 microseconds. So let's say 95 microseconds worst-case latency, which is actually not too bad for a user-space application.

Then we did the same measurements on PREEMPT_RT. You can see the variation, the jitter, is a bit bigger right here, but this is the worst-case latency we got, which is pretty much the same as we had on the dual kernel; it's even slightly better. I would say it's pretty much the same: a bit less than 100 microseconds is what we could achieve. So this shows we don't have the big difference between dual-kernel and single-kernel approaches that you can find in a lot of measurements floating around on the internet. In this test case we had pretty much the same latencies with the dual-kernel and the single-kernel approach.

One thing we tried here: since PREEMPT_RT is a standard Linux system, we can use all the standard Linux features. So we were wondering what would happen if we just isolate one CPU, as this was a dual-core system, and dedicate it to handling that interrupt. That was the next measurement we've taken: the same measurement with an isolated CPU.
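The isolation setup can be sketched like this. The `isolcpus` boot parameter, the `/proc/irq/*/smp_affinity` interface, and `taskset`/`chrt` are standard Linux mechanisms; the IRQ number, priority, and task name below are placeholders, not the values from our test:

```shell
# Sketch: reserve CPU 1 for the real-time work on a dual-core system.
# Step 1 is a kernel boot parameter; steps 2-3 need root on the target.

# 1) Boot the kernel with:  isolcpus=1
#    -> the scheduler keeps ordinary tasks off CPU 1.

CPU=1
MASK=$(printf '%x' $((1 << CPU)))   # affinity bitmask for CPU 1: "2"
echo "$MASK"

# 2) Route the measurement interrupt to the isolated core
#    (42 is a placeholder IRQ number):
#      echo "$MASK" > /proc/irq/42/smp_affinity

# 3) Pin the reacting task to CPU 1 with a SCHED_FIFO priority:
#      taskset -c "$CPU" chrt -f 80 ./reaction_task
```

With this in place, everything except the interrupt and the reacting task stays on CPU 0, which is exactly the setup behind the improved histogram.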
You can see it gets slightly better, from 90 down to 80 microseconds, and you don't hit the occasional worst-case spike because the task is running on a dedicated CPU. This is done pretty often in real-time systems: you isolate a specific core for some specific tasks. And since this was a standard Linux use case, it was quite easy to isolate this core.

So we've got this comparison. I think the most important graphs are these two right here, the green one and the red one: the red one is PREEMPT_RT without any special configuration, and the green one is Xenomai without any special optimizations. You can see the latency we got for the user-space task was pretty much the same, which is actually pretty cool when we compare a dual-kernel and a single-kernel approach.

Then we did another measurement, and actually I was not really happy with it; you might see why if you look at that slide. We also measured the kernel latency. The reason I was not happy with this measurement is that, in my opinion, you compare a full-featured Linux system with the reaction time of a microkernel. You can argue about that, but in my opinion it's not a fair measurement; we basically compared apples with pears.

Still, the results were actually quite interesting, so I would like to show them now. This is the result for Xenomai. If you look on the internet, you can find a couple of measurements where the dual-kernel approaches were 300 or 400 percent faster than a single-kernel approach, so I was expecting something similar right here.
We got around 30 microseconds for Xenomai. What we achieved with PREEMPT_RT without any optimization was slightly better than the user-space latency, but we need to remember we only go into a kernel thread here, so it's a different measurement. The difference is not that big. And if we start to optimize this use case, once again by isolating a core, we even get down to 60 microseconds latency with PREEMPT_RT. If you compare a microkernel with a full-featured Linux system, I think this comparison is not too bad: 60 microseconds is actually pretty fast on that kind of CPU. So the difference is not as big as you would expect, and not as big as some measurements on the internet claim. The single-kernel approach is getting pretty close.

Then we did one last measurement on the kernel latencies. The idea was this: we were running on an ARM platform, and I was saying we compare apples with pears, but if you go with a microkernel, you might be willing to accept a couple of limitations on your system. So I was wondering: could we also put some limitations on the PREEMPT_RT system and see if that could somehow improve the latencies? I was just curious how it would affect the timing behavior.

The idea was to use the FIQ feature of ARM CPUs. This is a hardware feature, a special interrupt vector on ARM which runs in its own context. It's living in its own world, so it's very limited in the things you can possibly do. But when I'm willing to accept limitations, this might be an option.
What we figured out was: we've got 10 microseconds or even less average latency, and you can see the worst case is around 30 microseconds. That was actually a pretty interesting number, because the Xenomai worst case was also 30 microseconds. So we are quite sure we just hit a hardware latency in this measurement. We didn't have the time to track this down yet, but we've seen a similar latency on comparable architectures with Cortex-A9 CPUs, so it really looks like a hardware thing.

This is the final comparison on the kernel side. Taking into account that we compare a full-featured operating system with a microkernel, these numbers are actually not too bad.

And that's the conclusion we need to remember: with the PREEMPT_RT patch we have a single-kernel approach which is nowadays the de facto standard for real-time Linux, and it's getting integrated into mainline Linux development. It's pretty simple to use: you can use the standard C library, nothing special, and it's available for every second kernel version. So it's the standard, it's easy to use, and for most use cases the microkernels don't have better latencies; the latencies you can get are really comparable. So if I should give you a recommendation: PREEMPT_RT would be the way to go, because it's the standard and the latencies you can possibly get are pretty good.

And last but not least: if you're willing to accept limitations, instead of going with the microkernel solution you could also check out which features your hardware is offering, like the FIQs on ARM. You can use that; we've got support for it in Linux. But you need to be aware that the things you can do with these hardware features are pretty limited.

That was basically all I wanted to tell you about real-time Linux.
It was pretty high level, I know, but hopefully it was useful for you to get an overview of real-time Linux. Thanks for listening, and feel free to ask any questions. We've got one minute left. Is there a mic? No?

[Audience question about the CPU isolation]

Yeah, that's true. What we did was pretty simple: we just booted with isolcpus, so the housekeeping stuff stayed off the real-time CPU and all the rest was running on the second CPU. So basically it's half the load, right? Yeah, that's true. Any other questions?

[Audience question about timing guarantees]

Well, it somehow is, but I agree with your point. This is a problem we have nowadays. I think your point is this: with these measurements, we don't have a mathematical proof that we can't miss a deadline. The main reason we don't have this nowadays is that for a modern system, like the board we've been running on, we don't have the option to do the mathematical proof anymore. It's like in the good old days of microcontrollers: you had a fixed frequency, you didn't have any caches, so you just had to count the number of instructions on each code path, figure out which path you take, and it was quite easy to calculate the latencies you get. The problem nowadays is that we've even got hardware features which are not deterministic, so we don't really have the option to do a mathematical proof. But you're right: with a 12-hour measurement I can't be sure that I have a timing guarantee. So what we do to address this point, since we don't have a mathematical proof, is to run long-term tests, not just 12 hours.
You might have heard of OSADL: they run a test farm which does long-term tests over one, two, three years, and you can actually get all the timing samples collected on these boards over that time. That is one way we try to address this point, but in my opinion we don't have the option to do a mathematical proof nowadays.

[Follow-up from the audience]

Yeah, I agree, the use cases might be limited, but we still have a lot of use cases where you might just need a small RTOS which can be mathematically proven, so I agree on that. But for the huge SoCs and a complex system like Linux, there's no way to do a mathematical proof.

[Audience question about formally verifying the code]

Actually, I never thought about that approach, but I think the main problem is not just the code. Once again, the main problem we would suffer from nowadays, in my opinion, is the hardware underneath. Just think about it: we've got up to three or four different caching levels nowadays, and we've got CPUs which reorder instructions for us. So even if you have the proof for the code, you still have the problem that you don't have the proof for the hardware. And for Linux, we don't know at the end of the day on which hardware the system will run. That's basically the problem in my opinion. But if you're interested, we could continue the discussion in the break.

Well, yeah, you've got a couple of other issues there, like safety requirements and stuff, but yeah, I agree. I think we're running out of time right here, but if you're interested, we could continue that discussion in the break. So, I think time is over. Once again, thanks for listening. Thank you.