Welcome everyone. My name is Jan Kiszka. I'm working for Siemens Corporate Technology. We are doing all kinds of crazy stuff with Linux, mostly for products. I've been with Siemens for 10 years, and I've actually been working with real-time Linux for almost 18 years now, and it looks like we'll have the pleasure of doing it for a couple of years more. Today I'm actually representing an open-source project here, which is called Xenomai, and I'll talk about what this project is and specifically where we are heading with it as a community. So that's the agenda for today: a quick jump into Xenomai — what it is, what it's for. If you are familiar with real-time Linux, you may wonder why we need this. So: what's it for, what are the pros and cons? Then looking back a bit, and at the current status of the project, which is, well, two-sided. Then the mid-term changes coming up in the project — what has to be done, what we plan to do, and what further has to be done. And then also looking a bit ahead: an architectural outlook on what the future might look like some years from now. So what is Xenomai? If you saw my slides from two years ago, this is actually pretty much the same, in a nutshell: Xenomai is an RTOS-to-Linux portability framework. If you are coming from the classic small-system, specialized RTOS-like environment, and you want to run on a state-of-the-art Linux system, this is a tool which may help you if you have real-time requirements, or if you have specific API requirements. It comes in two flavors, which is important. One is what is called the co-kernel approach — we'll talk a bit more about what this means — as an extension for a patched Linux system. And there's also a flavor, as part of the Xenomai project, to use it as a library set on native Linux, including PREEMPT_RT, to emulate pre-existing RTOS APIs that way as well. 
The following talk is actually more about the co-kernel part, which doesn't mean that the native part is neglected or will be dropped in the future. It's simply that it works pretty well as it is right now. It's not perfect, it doesn't fulfill all requirements, but the actual challenge we have in the project is more on the co-kernel side. And speaking about co-kernels: by now, Xenomai is the only remaining product-grade implementation of this technology, of this architecture, on Linux. If you look back some 20 years, you may remember that there were more solutions on the market. That basically consolidated over the years, at least if you think about using this thing in serious development, in a serious product. So what is a co-kernel? This is actually the only graphical slide I have here, to give you an idea of the concept. Instead of making Linux as a system completely real-time capable — or at least parts of it, with everything that implies, like the PREEMPT_RT approach does by converting schedulers, converting the internal locks and things like this — the co-kernel says: this system is optimized for certain purposes, we don't want to change it too much. We just put something aside, which allows you to keep most of the programming model you are used to, but with some magic underneath that allows you to use, for a certain set of tasks in your application, a different scheduling: a prioritized real-time scheduling. 
So, in a nutshell, what happens is: an event comes from your device, from your system, from the field — an interrupt. These events are dispatched according to their category — Linux-kind, non-real-time, versus real-time — and the real-time ones are directed to the real-time core, the real-time scheduler, and that core can preempt Linux at pretty much every point in time. Later on I will use the term NMI — think of it really as a huge piece of software running in NMI context from the Linux point of view. Still, what you keep is, as I said, the same programming model: basically, you have one process which can live in both worlds at the same time, but one thread can only live in one of these worlds at a time, while threads can migrate from one side to the other. So that's the co-kernel approach. When do you want to use it? There's the case where you're coming from these RTOS worlds, and you look at the APIs and the temporal behavior — it's kind of special. They do not really map well on POSIX systems, or may not map well, but you possibly have a large code base which depends on this behavior, which you want to save because the RTOS is no longer maintained, or it's no longer running on your hardware, or whatever reasons you have. Then this co-kernel can actually help you, because it's more flexible than Linux in its current form regarding extending and modeling these behaviors in an accurate enough way. If you have a large code base and you change the timing behavior a little bit, things become interesting. It doesn't mean that the API becomes incorrect, but the application may behave in a different way, exposing behavior — incorrect behavior — that didn't happen on the original system. That's one of the reasons we have this infrastructure, which makes it possible to model these interfaces more easily. 
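The event routing just described — real-time events preempting immediately, non-real-time work deferred until the real-time side is idle — can be pictured with a toy model. This is purely illustrative userspace code, not the actual I-pipe implementation; all names and structures here are invented for the sketch.

```c
#include <assert.h>

/* Toy model of co-kernel event dispatch: each incoming event is tagged
   real-time or Linux-kind. Real-time events are handled immediately
   ("out of band"); Linux events are queued and only delivered once no
   real-time work is pending — mirroring how the real-time core can
   preempt Linux at any point, but never the other way around. */

enum evt_class { EVT_LINUX, EVT_REALTIME };

#define QLEN 16
int linux_queue[QLEN];
int linux_head, linux_tail;

int rt_handled, linux_handled;   /* simple counters for demonstration */

static void handle_rt(int id)    { (void)id; rt_handled++; }
static void handle_linux(int id) { (void)id; linux_handled++; }

/* Dispatch: real-time events never wait behind Linux work. */
void dispatch(enum evt_class cls, int id)
{
    if (cls == EVT_REALTIME)
        handle_rt(id);                         /* runs immediately */
    else
        linux_queue[linux_tail++ % QLEN] = id; /* deferred */
}

/* Called when the real-time side is idle: drain deferred Linux work. */
void run_linux_stage(void)
{
    while (linux_head < linux_tail)
        handle_linux(linux_queue[linux_head++ % QLEN]);
}
```

The key property this models is the asymmetry: the Linux stage only runs when explicitly given the chance, while real-time handling is never blocked by it.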
Another advantage, one that most of our users are interested in: because the co-kernel is so separate, it has some benefits regarding the architectural structure of the application, because you now have to think about which part of your application belongs to which part of the system. With the PREEMPT_RT approach, you can basically take an arbitrary application, put some priority on it, and it runs "in real time" — which doesn't mean it is real-time, but it gets all the benefits of the system, in the quality the system can deliver. With the co-kernel approach, if you call the wrong API, you leave the real-time world, and because of the architecture this gets shown to you pretty prominently. That also covers misconfigurations of the system, which are easy to achieve with PREEMPT_RT. These kinds of things are not really a problem of the co-kernel approach, because there is this clear separation. We have some applications of this kind in our products where it actually helps a lot, when you have a large user base, a large code base and a large developer base, that the system actively tells you: OK, you are leaving the real-time world — is this what you want to do at this point in time? And another reason, depending heavily on the platform you're running on, is the latency and performance concerns you may have when you convert the complete Linux system to a real-time system. PREEMPT_RT, in a nutshell, basically means more context switches. That's the approach of how they made the existing Linux kernel so preemptible: a lot of things happen in task context, and that means a lot of things have to switch all the time, just to keep the non-preemptible sections small. 
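For comparison, the PREEMPT_RT way of "putting some priority on it" looks roughly like this plain-POSIX sketch (my own minimal example, not code from the talk): lock memory, switch the thread to SCHED_FIFO, and run a periodic loop on an absolute clock. Note that `enter_rt()` will fail without root or CAP_SYS_NICE; this is also the kind of POSIX code the Xenomai POSIX skin targets.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <sched.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

/* Pure helper: advance an absolute timespec by ns, normalizing. */
struct timespec timespec_add_ns(struct timespec t, long ns)
{
    t.tv_nsec += ns;
    while (t.tv_nsec >= 1000000000L) {
        t.tv_nsec -= 1000000000L;
        t.tv_sec++;
    }
    return t;
}

/* Enter "real-time mode": lock memory to avoid page-fault latency and
   pick a SCHED_FIFO priority. Returns 0 on success, -1 if the system
   or permissions (e.g. not root) do not allow it. */
int enter_rt(int prio)
{
    struct sched_param p;

    if (mlockall(MCL_CURRENT | MCL_FUTURE))
        return -1;
    memset(&p, 0, sizeof(p));
    p.sched_priority = prio;
    return sched_setscheduler(0, SCHED_FIFO, &p);
}

/* Periodic loop skeleton: run fn() every period_ns, a fixed number of
   iterations, using an absolute deadline so jitter does not accumulate. */
void periodic_loop(void (*fn)(void), long period_ns, int iterations)
{
    struct timespec next;

    clock_gettime(CLOCK_MONOTONIC, &next);
    while (iterations-- > 0) {
        next = timespec_add_ns(next, period_ns);
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        fn();
    }
}
```

Nothing in this API warns you when you leave the real-time world — which is exactly the contrast the co-kernel's prominent mode-switch reporting draws.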
That of course doesn't come for free, and it may actually cost you something. In some of the use cases we have, if you have a performance-sensitive workload running alongside a real-time workload, then this performance-sensitive workload will of course feel the differences of the PREEMPT_RT approach. Another thing, because of all this activity going on: you don't see that normally on a high-end x86 system, but you may see it, for example, on a Raspberry Pi or an even lower-end device. And that's also sometimes a reason: if you're too close to the capabilities of the hardware with your deadlines, with your requirements, then this may basically help to get you a little more headroom, and to feel more comfortable in the field, knowing you're not just a few microseconds away from the latency limits of the hardware and of the software stack. Well, I didn't do a benchmark on this, and this is not an absolute statement. I've seen a lot of benchmarks on this, and a lot of these benchmarks are actually wrong, but the last measurement I saw was a benchmark on a Raspberry Pi, and it was basically a factor-of-two difference on this system — which may mean that if you really tune it, you probably get a bit closer. But anyway, it is not for free, that's a fact. [Answering questions from the audience:] Yes — PREEMPT_RT versus Xenomai, basically, that was the comparison. And regarding the workload question: yes, this was the DaVinci. I don't have the numbers for this either; this is simply from measuring — it's probably a bit longer ago. Yeah, okay. 
In the end, you have to measure the thing — that's the point — and maybe it's improving further; I'm looking forward to that. As I said, there's no black and white here. So there are good reasons to not use a co-kernel and rather go for PREEMPT_RT. First of all, if none of the concerns I just brought up is a concern for you: for example, if your deadline is one millisecond and the latency you get from the system is 10 or 30 microseconds, you don't really care what approach is underneath — that's good enough, I would say. Also, real time is often not real time in the sense of hard real time; you really have to talk to your users about your scenario, and there can also be good reasons to go with even standard Linux for the problem. Another important criterion, as I said: when the real-time part of the application is manageable — a few skilled developers working on it, a well-defined interface through which they interact with the system — then PREEMPT_RT is also a good choice. And nothing comes for free: the co-kernels have some maintenance and integration challenges — we will talk about this more — and that can also be a reason to not go this path. But keep in mind that, in general, real time requires maintenance, and the PREEMPT_RT approach requires and will require maintenance too. That has been a hot topic these days; hopefully it will be solved, and we at Siemens are also investing into this topic. But this is a general problem, and we will see later on that this is actually a general problem of our community as well.

So, looking back a little on Xenomai — I'll run through this quickly. Xenomai has a history of 17 years by now. It started in 2001 as this portability framework — and this is sometimes the reason why it's being confused with RTAI — looking for a real-time capable baseline to run on Linux, and at that time the choice was to go with RTAI. There was a lot of technology involved, like Adeos — you may know this name if you've been looking into this area for a long time. RTAI/fusion was the branch of RTAI working towards the Xenomai way, which then ended in 2005 with the release of Xenomai 2.0: the two projects basically realized that they differ in their goals and in their way of being maintained, and Xenomai went its own way, for different applications and with a different way of maintaining things. It evolved pretty quickly from there and was ported to a lot of architectures — by now we have fewer, as we will see. And by now we have Xenomai version 3, released three years ago after quite some development. It basically reworked the core of Xenomai towards doing just the POSIX co-kernel approach in the kernel, and doing all the rest — the other APIs, if you want to emulate, for example, VxWorks or µITRON — in user space. With Xenomai 3, the aforementioned support for native Linux also came in, and it's actually being used in quite a few products where you want to model the behavior of a previous RTOS just on plain Linux. And yeah — I recently attended a talk at FOSDEM about how to build real-time Linux systems with PREEMPT_RT and Xenomai, and in the end someone in the audience asked: by the way, who's actually using Xenomai?
Then I had to step up, because even the speaker wasn't able to provide an example, and I mentioned one of ours. And actually there are many out there in the field, in use for quite a while: in the area of machine control, motion control systems, PLCs; printing machines — the long-run systems for printing newspapers, for example; also printers and copiers from Xerox, and 3D printers — we have one in the room here; network switches. Last but not least, the magnetic resonance scanner, the example we've been bringing up for quite a few years now: if you ever have to go for an examination in an MRI scanner, Siemens-branded, you will be examined by real-time Linux based on Xenomai. In the robotics research field it has been pretty popular for quite a while, and out of this research, products sometimes emerge as well. And what we recently learned as a community is that NASA is also using it, planning to go to space with it, maybe even to Mars. This is basically what is known — and I don't have many pictures here, actually, because we have a large shadow user base, and that is part of the problem of this project. Some of these projects are known to us maintainers through direct contact: recently I learned there are autonomous logistics vehicles running around on shop floors and factory floors, driven by Xenomai. And we know that there must be more, simply based on what you hear if you listen around, if you talk to people — how well known this is, and how many consultants are providing services to implement it and integrate it into existing products. I walked around the Embedded World fair in Germany a couple of weeks ago, and if you just watched the booths and what was printed there as technology being offered, Xenomai was often on it. If you talk to NXP, for example, they have a BSP demonstrating TSN on a Xenomai basis. And in general it was pretty well known that it exists. It doesn't mean that every one of these providers has Xenomai in their products, but quite a few have used it or are still using it. And if you look at our subscriber list, it's pretty long, and there is this user base behind it that is not really visible if you just look at the web page or at the mailing list and what is being discussed there.

So you may think this must be a healthy project, given this large user base we have, given this specialized domain. Well — look at who contributed most over the past five years, to this project only. I left out some people here, but they had fewer contributions to the list. I made a split here, though: the two topmost are Philippe, the core maintainer, and Gilles. Myself, I've been working on this as well, within some of my projects, and there are further people, most of them actually doing it for some employer. But unfortunately quite a few — and specifically the two maintainers — were doing it partly not on the budget of someone's project or someone's bill. And well, these are six people, and that makes this a system which is critical for quite a few products. And what happens when you have that few people? Life can be hard and unfair, and it has been unfair to us: two years ago, we tragically lost one of our maintainers. And that of course creates the scenario we all talk about in large projects: someone is missing now, not only as a person, as a human, but also as someone contributing a large part of the work. That made us realize that we have to talk about this openly — that there is now a gap, with one person remaining as the main maintainer and main contributor. This project is too large and too critical to keep it running that way. So for us it was clear — even in a consulting role internally — that we have to talk to our users early enough, before this problem becomes too real, meaning there is no support anymore: what to do? And that's what we did internally first, carefully — because if you say "OK, we have a problem", people may run away, and then the problem just gets bigger for those who stay, who can't run away easily. What should we do, for the two main users we had at Siemens: migrate away, or invest into this? Specifically, whatever we do has to be coordinated, with our own users first of all. And after a not-that-long discussion it actually became clear: OK, we have to invest in this. We still may migrate eventually — not all scenarios may stay the same forever, that's clear. But for now: if you have a product out in the field, sometimes for 10 years, you cannot make this decision from one day to the other. 
So there was no choice anyway; for us the decision was to invest in this, and we also went public with it. Along with this, there was the famous post of Philippe on the mailing list about "RTnet, Analogy and the elephant in the room", where he clearly stated: OK folks, based on this concrete example of the drivers, there is a problem, and we have to do something about it. Let's try to start this process of thinking, and of getting input from our shadow user base, from those who can't simply go away and do something else. That already raised some awareness and brought some first commitments, but there is still more to do. And that was not the only change coming up. Just a couple of months later, Philippe went forward with an announcement — of course, we knew this internally beforehand — that he's going to step back from the project lead. So when things come together, they come at once. But fortunately, he will not completely disappear: he will continue to support the project with reviews and by working on specific tasks. He just no longer wants to concentrate on this — understandably — tedious work on the forefront, which ate a lot of his personal time, but rather work again on parts he can concentrate on, like a new co-kernel architecture — we will see later on what that means — to get his head and his hands free for these tasks. Fully understandable. Not something that happened overnight, surely, and not an easy thing, because it means the maintainer has to look for someone else. So he asked me to take over the project, as I've been involved for quite a while, and as I'm also representing one of the important users of the project and one of the contributors. That is also not an easy decision — to take over yet another baby to look after. But the decision, for us and for me as well, was: OK, take this burden and try to do my best; we will see. The switch basically hasn't taken place yet; we plan to do it around summer, autumn at the latest. There are still some things to wrap up, and I have to look into how to get more familiar with the corners I haven't looked into in the past — they weren't directly relevant for us, but they are relevant for the overall project.

We also discussed how we can change the maintenance of a critical component, the patch we carry on the kernels, and how to split up this work, because this was also a major chunk of time that was eaten up by — was required of — the maintainer in the past, specifically with Gilles no longer being with us. The new split is that Philippe will continue to work on the ARM side; we have a maintainer for ARM64, Dmitriy; and we now also have an architecture maintainer, which we didn't have before directly, for PowerPC — that's the NASA activity I mentioned before, so Stephen will look into this. Together with my colleague, I will look into the x86 part, and Philippe will also still be available for integrating these patches: we have separate trees, of course, and we will then merge them into one patch — that's basically still Philippe's role in this.

Actually, as I mentioned, the I-pipe is the critical part for the maintenance effort on a daily basis. That's always the case when you're going out of tree, which is unfortunately the case for this approach. It's a challenge: it means catching up with developments and adapting things all the time. The current situation is that we have fairly limited support — which is feasible given the amount of resources we currently have: 4.4 is supported, though it might be limited in the long run, on x86 and ARM. We currently have 4.9 with ARM64 support, with some breakage right now, but it's going to be fixed soon. 4.14 is ready except for x86 — that's our part, and we will look into this very soon. And with that many branches, you also have to look into merging stable kernel versions into them, which may lag behind if maintainers are busy with other things. We already cleaned up our portfolio a little: in the past, NIOS II and SH were dropped; we are about to officially drop — it has been announced already — PowerPC and Blackfin; and we also agreed recently, during a community meeting, that ARM architectures below ARMv7 will no longer be supported. But what is the major change now upcoming? Philippe has been working on this and is providing a good ground to build on. Previously, the patches were basically one blob, and that was a problem for someone wanting to jump in and understand what's going on there.
So Philippe refactored the patch queue and made it more understandable, by creating logical increments that add the features needed on the current architecture: enabling the I-pipe, enabling the co-kernel in the kernel. The idea behind this patch queue is that, in the end, you can take it and more easily port it over to whatever kernel you need — a vendor kernel, unfortunately, or a different version of the Linux kernel — or, if you are not happy with our pace of maintaining things, you can take these patches and map them onto your own kernel more easily. So it's implicit documentation. We also agreed during the community meeting that we need a new policy on how to maintain the patches. The agreement was that, in the future, we only maintain one version: the latest LTS version. That doesn't necessarily mean the other kernels will be abandoned, but that now depends heavily on the users of these patches. One of those users is Siemens: we are currently building products on 4.4 kernels, so we will maintain this kernel at least for our architectures — that is a commitment. And we will also soon move this maintenance queue over to the super-long-term CIP kernel, to keep the maintenance running even longer. So there will be a couple of years coming with this kernel for us, and of course we want to share that, and we want to get input from other users if they want to go for a long-term maintained system in the field — that is currently the choice. Anyone else wanting to jump in: of course, this is open, and we will support further work on this as best we can and try to consolidate activities. So far for the kernel part — the patch part, so to say. Now for the main releases of the project: what's the current situation?
We have two release lines: 3.0.x out there, and 3.1 upcoming, not yet out there. 3.0 is the current stable release branch, and a new stable version is soon to be released. There were some fixes pending, and some review of them, for the networking driver stack; that's merged now, and we're just sorting out the last bits of the patch updates. When this is done — hopefully in only a couple of days — we can go out with another stable version. 3.1 is in the queue: there's the "next" branch currently, which prepares for it. It's in pretty good shape — actually in such good shape that, internally, we have by now back-ported some of the new features to our stable branch and shipped that to a customer where it's needed. So there's a good base coming: improvements like priority-ceiling support for mutexes, and a fast sched_setscheduler implementation which avoids syscalls at runtime in the normal case. It will also introduce a new architecture, ARM64. The only thing basically missing from the release criteria, from the current point of view, is proper support for the latest LTS kernel on all architectures. Once that is settled — unless some new requirement shows up at the last minute — I guess we can go for the 3.1 release. And the future beyond that is open. There is no concrete plan yet for what exactly the next version will be about; that's also input we expect from the community: what exactly are you looking for, what should be improved, beyond the normal housekeeping improvements. One thing to remember here: there is also something like Xenomai 2.6, or even older, out there. This thing is unmaintained. So if you should have something like that in your product, or if you should get something like that from a BSP vendor (hello, NXP): this is a dead parrot, seriously. Driver stacks: Xenomai comes not only with the kernel patch and some user-space libraries.
There are also some drivers which are enabled for this co-kernel architecture. What we did recently: we have a networking stack included, a small one, targeting specifically hard real-time Ethernet communication. That has been refreshed now, as I said, for 3.0.x, but this thing needs more love. I maintained it once, a longer time ago, for various projects, and I'm still surprised that this thing is alive and being used. There is some legacy inside, and if you're really interested in using this as a base, I would really recommend talking with us about what you want to use and how we can improve and refocus this infrastructure. I mean, this dates back to the time when Ethernet was based on hubs and coax cable — you don't want to do real time over that — and it was built to address the issues you have there. So there are some aspects which no longer apply. At the same time, everyone is talking about TSN these days; there are some new aspects now which might need integration as well, and it has some potential, as we saw with the NXP enabling. So there is some chance, but there is also more work needed, specifically on the driver side — and some old drivers should be dropped. I mean, the very old 100-megabit hardware is no longer available on the market, so we can drop some stuff; but we are possibly also missing some new drivers that would be interesting for some products. The other parts — UART, GPIO, SPI and CAN — currently look good enough. If you have a use case, look into them and make sure they're tested; there are still some improvements possible. And then there is the other area.
This is the Analogy stack, which is for analog I/O — similar to, well, not that similar to, the Comedi stack that's still in staging in mainline. Unfortunately, this thing has been orphaned for quite a while, and if you really want to use it, it's time to stand up now, because otherwise we will have to drop it. Without a maintainer, and possibly also without a user, it doesn't make sense to keep it in. In a nutshell: for these things we need someone who feels responsible, because they are used in some products, on some boards. So step up and provide your feedback — or, even better, patches — for these stacks, to keep them alive and keep them tested. As I said, we will drop unmaintained or broken drivers — of course with prior notice — but we can't really keep them up, and if they continuously break the build, it doesn't make sense to have them in the tree. One of the things we also have to look into now, partly because of the loss of Gilles, is the infrastructure, which has to be restructured and partly rebuilt. One thing that's changing — also to offload tasks from Philippe — is that we will move the hosting; you will probably not notice much of it. Many thanks to DENX for offering to host us on their GitLab. We are also discussing how to reanimate a certain CI infrastructure. There is a bit of that internally at each of our users, but not much is done publicly or transparently. One of the options is to go with an offering DENX has on this.
It's limited, of course, because this takes some computing resources. On the other hand, it would be beneficial to have a public CI system running. So I played a little bit with this. There are actually some CI systems out there in the field that you can get for free — even with the free-tier offerings — to enable build tests, and maybe even some functional tests in an emulated environment, so that users can also run their own tests prior to submitting patches. I tried Travis CI recently, as I was skeptical whether it's enough for doing a really complete kernel build, but even without stripping much down, it was sufficiently fast on these machines to get a kernel built and then possibly do further things. So this is a good area for contributions, because it doesn't really require being a kernel hacker, and it would still create a lot of value for the project. Another topic, of course, is on-device testing. That is happening, of course, at many user sites — in the basement, or in the product field — but it's not happening in public. So we want to define some reference boards for this; currently, what's being done is mostly manual testing. Again, contributions are welcome here, specifically as long as we don't have the test infrastructure for this in place. There are options to reuse existing tests for this, but probably more interesting is to look for something more distributed, to make it easier for those with a product use case behind it to hook up with the infrastructure and have the products from the field running against a new version continuously. This is a typical topic everyone's talking about these days — like a LAVA deployment — where we could, and should, easily reuse some of the existing initiatives, like what's going on with AGL and CIP and others: hook into them, or at least reuse the patterns they have, to build up some own infrastructure. That's still a bit fuzzy, but as I said, this is an important area, and again an area where contributions are very welcome and where it shouldn't be too hard to start with at least some initial steps.

And now for something not completely different. As I said, we are also looking ahead at what may come after the current architecture, and at what is now the motivation for Philippe to focus on something else — which is not completely something else.
It's about realizing that the co-kernel plays its role in the ecosystem, in the systems we have out there, but that in its current form it's not designed in the way that is optimal for Linux integration. So there's an activity, which I already mentioned two years ago, called Dovetail — and then Steely — which has the goal to improve the integration of the co-kernel architecture into the Linux kernel, into the native environment. The goal is basically to integrate in a way that it really feels like a Linux subsystem, like PREEMPT_RT does — in the sense that you don't notice a major difference, you don't notice that there is something alien patched into it, but it's really there, using as much as possible of the existing infrastructure. In contrast, the current architecture is really about abstracting away the kernel: we try to keep things separate so that we can easily move between different kernel versions. In theory — and actually in practice — that works quite well, but it comes at a price: you have a lot of layers of abstraction, which are not necessarily intuitive to understand. The new approach should make this easier and help keep things more maintainable, more manageable by someone who is not deeply familiar with all the details and all the history of 20 years — which is important as long as we are out of tree. And last but not least, this is also the only chance, if we ever get there, to really propose something for upstream. Which is not completely unrealistic, simply because the use cases are clear, there is a user base, and there is interest. I guess there's a chance to get this in, but of course it's not happening overnight; it will take more effort. So, as I said, this new approach consists of two core elements.
As I said, this new approach consists of two core elements. There is Dovetail, which is about the interrupt routing, as you saw in the first graphic, and the co-kernel hooks, basically the things needed to enable Linux to run a second scheduler. Roughly, that is what the current I-pipe patch is about from the functional point of view. And then there is something called Steely, which is the new co-kernel implementation, the user of this infrastructure. It corresponds to what Cobalt, the co-kernel implementation of Xenomai, currently is. This is ongoing development. It was very important from Philippe's point of view to mention this: it is not Xenomai-compatible, you can't switch from one day to the other to the new version and expect things to just work, it is not product-ready, and don't use it to fly to Mars. I specifically mention this because something funny happened around the 3.0 release: after Philippe had been working a lot on these things and rebasing stuff, someone came to him and said, "Can you please stop this rebasing? We are using it in a product." It is not at that stage. We will see; it is running, but it is not really stable, and there is no commitment that it is going to stay in this form. Specifically, as we are targeting to make it palatable for upstream, that naturally brings further change: if we should ever propose this, that may completely break certain internal or external interfaces, just to get it into a form that is acceptable. So, looking a little bit into Dovetail: this is the interrupt pipeline. It does the same thing as before, it prioritizes the interrupts and makes them, so to say, NMI-like from Linux's point of view. This is what Philippe describes as out-of-band work.
That is the work that should be done in the context of these interrupts. It now builds solely on the irqchip abstraction, so it is naturally integrated into Linux and by now even reduces the existing patching of Linux for this, which makes it better maintainable, means less patching, and even allows using lockdep with it. Then there is the feature of task stealing. These are the hooks required to take an existing Linux task and remove it from the scheduler, basically suspending it from the Linux scheduler's point of view and handing it over to someone else, the co-kernel scheduler, and returning it again, all during runtime, depending on the state of the task. Furthermore, we need some event hooks in the kernel to propagate things that are relevant for real-time execution, like syscalls, faults, or signals coming in, meaning normal POSIX Linux signals. You can check out the code at this URL, and there is also a bit of documentation available on the architecture and on what you can expect right now from the Dovetail point of view. Steely, as I said, is the POSIX-compatible RTOS core and the user of this Dovetail infrastructure. It basically demonstrates how these things can be used, and it is clear that we will never get Dovetail into mainline without having a user of it, so the two naturally come together. It is a fundamental rework of the Cobalt core, the current Xenomai core. It has more fine-grained locking, in contrast to the single lock we currently have in Xenomai 3, which is simple to manage and easy to validate, but of course has limitations when it comes to larger systems, say something like six or more cores actively used for real time.
That is something you don't want to do right now with the existing Xenomai approach. It also reuses more of the existing infrastructure, like clock sources, which of course makes porting easier. It even enables things like frequency scaling during runtime: if you want that for whatever reason, it is possible now, it no longer breaks our timekeeping. Compatibility with Xenomai is not available right now, but it could still be added later, once the core is stable; then you can look into these kinds of things. An interesting topic remains, and that is of course how you deal with real-time drivers. If they want to be used in the co-kernel context, they need some awareness. Currently only a minimal set is available, just to demonstrate what is possible, but in the longer run the question is how to enable drivers, how to keep them maintainable, and how to make them usable. That remains a challenge in general, and we will have a talk on real-time driver design for mainline as well the other day, so it is not just a topic for Xenomai; it is actually a popular topic for PREEMPT_RT as well, though with different approaches to work on it. And then, of course, the thing has a user-space environment, the libraries to actually build and link against it. This is the URL here. It is very similar to Xenomai 3, just focused on the new implementation. With that, you can actually play with it now. If you look at what happens with this refactoring, it shrinks the code massively: by about a factor of 2 on the I-pipe side in the kernel part, and by even more than 50% right now on the Xenomai core part. Still, it remains a beast, you have to state that. It is still large, but fortunately the majority of it, and there is a statistic about this here, is orthogonal to the existing kernel, so only a few lines of existing code are changed.
We have, I think, a few dozen lines changed in critical areas like the scheduler, so this is mostly harmless; it mostly adds code. As I said, it is working. It is working on ARM right now, i.MX6 and i.MX7 are running some demos, and ARM64 support is work in progress. Check it out, have a look. Don't build a product on it, but maybe you want to try it out. So with this, let me summarize. I am pretty sure, and so is Philippe, and so are our users, I think, that the co-kernel is here to stay. It has been used for quite a while in production, and it is not disappearing, despite all the great progress on the PREEMPT_RT side. There are use cases in this niche area of real time where the co-kernel plays a role and will continue to play a role. In general, we have to state that the industrial usage of real-time Linux, and probably not only of real-time Linux, has a problem: there is an unhealthy imbalance between take and give. That is clear, and we really have to work on it, and if we map it onto the Xenomai project, it is a really painful aspect. Open source doesn't come for free: you have to commit to it in one way or another, and there are many ways to commit to it and keep it alive. It is basically the branch you are sitting on, and if you do nothing about it, it may crack eventually, and then you fall heavily, with all your products out there in the field. For us specifically, that means we are always looking for people who declare that they are using it, maybe not in all details, but at least far enough that we know, okay, there are users out there, they have a certain set of requirements, they have certain expectations. We need that feedback, and we need it publicly. That starts with testing, but it doesn't end there. When we release a new version, we barely get feedback on it. In the past it was painful for Philippe to see this:
okay, I am releasing an RC, but what gets tested, as usual, is only the final release, and that is not good. That doesn't solve problems; it just creates problems in the field. So we do not necessarily need a lot more kernel hackers, although we would welcome them, as this is a core part of the project; but as I outlined, there are many, many areas where even small contributions would already make a difference, keep this project alive, and make it even easier to keep alive in the future. With this, I would like to thank you, and I am open for questions, comments, remarks, or whatever you have. [Audience question] Yes, so the question is about what adaptations are required for device drivers under the new Dovetail/Steely approach. Conceptually, they are similar to the existing model. At the point where you create these two worlds, a driver that wants to live with a certain service set in the co-kernel, in this out-of-band workload, has to be aware of it. It has to use the right APIs, those which are enabled to work in this environment.
It must not call into the other world in an uncoordinated way. As it can be expressed right now, we have a driver abstraction interface which is probably not optimal for this, but which at least contains, for example, the model that there are possibly two entry paths into the driver, one non-real-time and the other real-time, and you want the right handlers working in the right context. That could be a model. But of course it means you are changing the way a classic driver works under Linux, which has only one path. That doesn't change the fact that even a classic driver, if it wants to be usable for real time, requires a certain architecture: it must not call into something like a workqueue, which has an unbounded latency behavior; these kinds of patterns must not show up. Sometimes they are there to optimize for throughput, but real time is not about throughput, it is about latency. So you have these kinds of challenges with a classic driver as well, but the encoding of this, how you manifest the API for it and how you keep that API maintainable, that is the real challenge, and there is no easy answer right now. Quite a few of our users, I am sure, are just developing their own on-demand drivers for their specific products, very few of them from scratch, mostly by copying over existing code and focusing just on the case they have. This is of course not optimal, it is the classic fork approach, but it is often enough, and for some of them there is actually a reason to go this way. I had a discussion with a friend of mine on this topic: he wanted to ask me some Xenomai stuff, and I kept pushing back, saying, oh come on, why don't you use PREEMPT_RT, it's so easy, and you only have a few people working on this. And he said,
"Stop this discussion, I just want you to answer my Xenomai question, because we know why we are using it: we know that with our existing resources and knowledge we can't handle the complexity of a system like PREEMPT_RT when it breaks. We would need additional support then, but we feel more comfortable in the Xenomai world. It has its own problems, but for our use case it is the right thing." And they will go with this separate-driver approach, I suppose. Any further questions? Otherwise: what we definitely want to do, besides giving these kinds of talks and trying to get feedback actively this way, is to think about having another user meeting, like the small one we had around FOSDEM. There will probably be one in autumn as well, if things work out, just to collect this input from the users, maybe give some further hints about where things are moving, but also get hints from the users back to us about what they want to have. So I am open for any suggestions, and for anyone willing to join this meeting; it is open. Raise your hand if you are interested in this kind of thing, and we can make it happen. We had a very nice meeting a couple of years ago in Dresden, with quite interesting folks, and it would be nice to have that again. That is definitely an offer to the community as well. Otherwise, thank you a lot for your attention.