OK, thanks for coming, especially because it's lunchtime. I have a lot of content for today, but I hope we can get through it in time. My name is Jesus, and I'm here today to talk about time-sensitive networking and how we are enabling kernel interfaces for those types of systems. I'll start with the cheesy questions, of course, just to get to know a bit more about you. How many of you are familiar with time-sensitive networking? Wow, OK. How many of you work at companies that ship products with time-sensitive networking in them? And how many of you have used qdiscs from the traffic control subsystem on Linux before? OK. OK, cool.

Well, I'm a software engineer at Intel at the Open Source Technology Center. I've been working on the Linux network stack since the middle of last year, and I started on this project, so I started with the TSN work, basically. Before that, I did a lot of platform enabling, and I've done a lot of embedded programming as well. And in the old days, I used to be a WebKit committer and also a committer on the Qt framework.

I have three main objectives for today. I want to provide a very brief introduction to TSN for those of you who are not familiar with it. It's a very extensive topic and a somewhat complicated technology, but I think we'll manage. Then I want to give you an overview of the work we've been doing on the Linux kernel. And then a brief discussion about what I think the challenges ahead are for us.

OK, so let's start with the introduction. We're all used to the networks that we have everywhere nowadays, like local area networks or the Internet. Everything is best effort, we care a lot about speed and throughput, and the metrics are all based on averages: average delay for everything, or average bandwidth. Those are great networks. I mean, we've been using them everywhere ever since. But they're not very suitable when you have use cases that require high or known availability, for instance circuit-switched networks or control networks. And that's why we created a bunch of fieldbuses in the past. But then, more recently, we created something else called time-sensitive networking.

TSN is a set of standards developed by the IEEE. What we're trying to do with it is enable Ethernet-based LANs to handle time-sensitive traffic in addition to best effort traffic, background traffic, and everything else we're used to. It started as audio-video bridging, AVB, I want to guess five or six years ago, maybe more. Then it developed and was renamed into this new set of technologies called TSN. And I think that's one of the main takeaways here: TSN allows both time-sensitive traffic and best effort traffic to coexist on the same network. The whole point of TSN is making sure that the time-sensitive traffic is deterministic, so we have to provide a bounded worst-case latency for it. That's another takeaway.

The standards that comprise TSN are mostly developed as amendments to 802.1Q, the IEEE standard that covers VLANs and QoS. And there is a community behind TSN, I think I can call it a community, the Avnu Alliance. Its members are the companies that are interested in TSN and are working together to make sure that the technology is interoperable. TSN targets different segments, which makes things a little more complicated, because it can be used in different market segments, and each market segment has a different set of requirements.
And the technology has to make everyone happy. So it's used in ProAV, in industrial control, in automotive systems, and so on. And I think it's used in consumer products as well, or some companies have started to use it there recently.

Even though it's used in lots of segments, I've chosen an example today that I think covers the different requirements for TSN. It's one of these next generation cars, or today's cars, actually, in which you have all these fancy infotainment systems with multiple screens and speakers, and you need to keep video and audio synchronized. And on top of that, I've heard of cars that now have noise reduction inside, so you have multiple microphones and you're doing noise canceling, and you have to keep everything synchronized. In addition to that, you have the control network: all the sensors for autonomous driving, or even just for parking, and all the actuators, and all of that. And now they want to use Ethernet on these systems, and all that traffic, as I said, can coexist on the same network. So it's a very complicated problem.

But why TSN? I asked myself that question nine months ago. Why are people moving away from the fieldbuses and trying to use a new set of technologies? Well, it turns out that Ethernet is super cheap. Ethernet MACs are cheap, cable is cheap, and it's already everywhere. And I was told that cabling, I don't know anything about cars, but I was told that cabling is one of the most expensive components when you're building a car, because you have so many different types of it. And it's crazy. I've seen a picture that I couldn't show here, but it's crazy. And theoretically, as I said, we can have all this traffic living on the same network with TSN. I'm saying theoretically because I'm not sure if people are doing that already. I work for Intel, not a car company. But I think they are.

But how does TSN enable that? I put up this diagram here that I think covers a very common scenario, in which we have a regular network. We have, I was calling them legacy before, but they're not legacy, they're just common end stations with a common network card, let's say non-TSN capable. And you have the non-TSN capable switches as well. And you have a TSN-capable end station here that is connected to the network, because everything can just coexist. But now here we have the TSN domain on this network. So what does that mean?

The mechanisms behind TSN are actually quite simple as an outline. First, in the TSN domain here, what you can notice is that all the clocks are synchronized. So we have the same time domain for every single end station and for every single switch. Then, I said that time-sensitive traffic and best effort traffic both have to be able to coexist on the same network. So I know that one type of traffic has a higher priority than the other, but somehow the network has to be able to identify that traffic. For traffic identification, the network uses VLANs. That's how the network is going to know how to prioritize one type of traffic over the other. And if I now have different traffic types and I have to prioritize them, then somehow I have to allocate resources on the network as well. So my end station, my switches, and also the receiving end station all have to agree on a path, on priorities, and on which VLANs to use. And then I have to allocate all the resources along the network.
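As a quick aside before the last mechanism: two of the mechanisms just mentioned map to concrete tools on a Linux end station. This is a minimal sketch, not from the talk's slides; the interface name eth0, the VLAN ID, and the priority values are illustrative assumptions, and the linuxptp commands are the ones the talk returns to later for time synchronization.

    # Clock synchronization (assumption: eth0 is the TSN-capable interface).
    # ptp4l keeps the NIC's hardware clock synchronized over the network;
    # phc2sys then disciplines the system clock from that hardware clock.
    # (gPTP.cfg is the profile shipped with linuxptp.)
    ptp4l -i eth0 -f gPTP.cfg -m &
    phc2sys -s eth0 -c CLOCK_REALTIME -w -m &

    # Traffic identification: a VLAN whose egress map turns socket priorities
    # into PCP values in the VLAN tag (here, priority 3 -> PCP 3, priority 2 -> PCP 2).
    ip link add link eth0 name eth0.5 type vlan id 5 egress-qos-map 2:2 3:3
    ip link set eth0.5 up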
And lastly, to make sure that all the time-sensitive traffic is behaving well, we have to shape traffic. So we use traffic shaping for that.

Well, it's a bunch of mechanisms, and putting them together is actually quite complicated, so somehow we have to be able to configure the network. This is out of scope for today's talk, but I just want to have it captured here. The standard that's trying to put together the different ways to configure the network for TSN is Qcc, and there are different mechanisms for that: it can be done dynamically or statically, you can have a central network controller, or it can be all distributed.

As I said, traffic shapers play a key role in TSN networks overall. So, traffic shaping: how many of you here are familiar with traffic shaping? Yay, OK. Traffic shaping, well, the way I see it, is basically bandwidth management. It's a way for us to distribute traffic evenly in time. I said before that TSN applications, depending on the system, have different requirements. For some systems, if all you have for the time-sensitive traffic is a reserved bandwidth, that's fine. But some other systems have higher determinism requirements, so you need strict transmission cycles, because the transmission of packets is going to be scheduled.

TSN has quite a few shapers, and there are new shapers being developed. But I think the main ones, and the ones I'm talking about here today, are Qav, the credit-based shaper; Qbv, scheduled traffic; and another shaper that we call time-based scheduling. Qav is a per-queue shaper, and it basically provides you bandwidth. So if all you need is that, for a given traffic class, the packet rate, or actually the bandwidth, never goes beyond a certain threshold, that's what Qav, the credit-based shaper, is for. But let's say you need more fine-grained control over the transmission time of the packets. For that, you use the time-based scheduling shaper. You have a per-packet transmission time, and the shaper controls when that packet hits its deadline, and then it can go into the network. Let's keep this question for later: when we are using TBS, are we talking about no earlier than, or no later than? I'll save this question for later, OK? And there is also Qbv. Qbv is enhancements for scheduled traffic, and it's a per-port schedule, a full schedule of every single transmission queue on the system.

Right, and yeah, there are others. Qbu, for instance, is frame preemption. And Qci, that's not even a shaper; it's more about doing filtering and policing on ingress. And there are more complicated shapers coming along. But today, as I said, I'll be focusing on Qav, because this was the first shaper developed, it comes from the AVB days, and I want to talk a bit about time-based scheduling, because these are the two shapers that I've been working on in the Linux kernel at the moment.

So I think at this point you have figured out that the focus here is not on switches. I will be talking about end stations, OK? My work is all on end stations. So I've put together this diagram here. Can you see it well? OK. As I said, an end station on a TSN network must be able to handle both time-sensitive traffic and also best effort traffic, so background traffic. So imagine a Linux box. Everything that is purple on this presentation is time-sensitive traffic, OK? So you might have a talker application.
That's terminology we use in TSN, at least. So you may have a talker application, and you may have an Internet streaming application running there as well, as best effort traffic, right? Packets must go through the network stack, I mean, it's Linux, and they must get to the network card. And it's a requirement for network cards that are TSN capable that they have at least two transmission queues: one is used for time-sensitive traffic, and another one is used for best effort traffic. You may have more; the more, the better, actually. And on each one of these queues there will be a transmission algorithm running, and the transmission algorithm is actually a shaper. So it may be that here you install the credit-based shaper. And this one here is best effort, so it's probably strict priority. Or you may choose to use the time-based scheduling shaper here. And if this network card is compliant with Qbv, then you may have a gate schedule running here that opens the gates at certain points in time.

So if you look at this picture from the perspective of the talker system, basically what we have is: somehow we must be able to enable the multiple queues, right? Then I want to be able to configure each shaper individually. Then somehow the kernel must be able to classify traffic, because now you have time-sensitive traffic, and this traffic has to go to the right queue, so we need a mechanism for classifying traffic so we can steer it to the right transmission queue. And of course, we have to be able to transmit traffic. This breakdown here may sound super simple, because it is. And this breakdown was the groundwork for when we started designing the interfaces on the Linux kernel. Any questions here so far? No?

So after this introduction, I'll now talk about the work we've been doing on the Linux kernel. We didn't start from nothing; we first had a look at previous attempts. There is a network engineer from Intel, Eric Mann. Six years ago, I think, at Linux Plumbers, he gave a very good presentation about TSN, and he wrote a demo for it called OpenAVB. He basically forked the IGB driver, the driver for the Intel I210 controller, and he didn't want to spend a lot of time creating kernel interfaces for configuring the shaper, so he basically just bypassed the entire kernel and exposed all the transmission queues and the registers to user space through a library. That was a demo, and then it actually became a very big project. It's still used, and today it's called OpenAvnu; I'll talk more about it later. And it's used in quite a few products out there. I was surprised to learn that.

And then the first person who actually tried to enable TSN interfaces in the upstream kernel was an engineer from Cisco, Henrik Austad. I'm not sure if I'm saying his name correctly; I'm sorry if I'm not. He took a very media-centric approach. He was very focused on the AVB side of things, and he bundled everything up as a TSN driver. So he exposed a configfs interface for his driver and an ALSA shim, so you could just stream time-sensitive traffic over the network. It was a very nice job, and he did two iterations of his work. But the maintainers didn't like it because it was very bundled up. Let's put it this way.
And then we also found out that there are a few drivers upstream that expose the shaper configuration through the device tree, so that's all very hardware-specific.

So, some downsides of these previous works: they were all working, but they were all hardware-dependent, or doing kernel bypassing, or they were too monolithic.

I mentioned traffic shapers a lot. And when we noticed that there was no upstream support for TSN, we said, well, Linux has a traffic control subsystem, and the traffic control subsystem already provides interfaces for shaping and scheduling and policing and all of that. The components of the traffic control subsystem on Linux are basically queueing disciplines, so qdiscs, and classes and filters. Qdiscs, for those of you who are not familiar with them, are basically packet buffers inside the kernel. They live between the protocol families and the device drivers, so they're kernel buffers for packets. With qdiscs, you can control how or when packets are transmitted. Every network interface has a qdisc attached to it, at least one, a root qdisc. And they can expose inner classes in which you can install child qdiscs, let's put it this way. And also, in addition to that, qdiscs can offload work to hardware. So when we learned about the internals of qdiscs, we said, well, this is actually a perfect match for what we need with TSN.

As I just said, tc is the command-line interface for the traffic control subsystem; it's part of the iproute2 suite. And just as an example here, if you list all your qdiscs now and you have a multi-queue interface, you'll see a bunch of child qdiscs running on it. So, OK. Sorry.

And remember, when I was talking about the end stations, I mentioned that we broke the problem down into four major steps that we needed to take. The first step was that we need to enable the multiple queues, right? And when we were doing some research, we found out that there was already a perfect fit for that: there is a qdisc that can be used as a root qdisc called mqprio, multi-queue priority. We decided this was part of our solution, so we started using it. Basically, what mqprio does is expose all the hardware transmission queues as classes, so you can install other qdiscs on every single class. And in addition to that, you can create a mapping from priority to a traffic class to a transmission queue. It has a very complicated command line, in my opinion, but that's what we have.

So basically, in this example, what I'm doing is creating three traffic classes. This controller here, the interface on my machine, is an I210 controller, so it has four transmission queues, but I'm creating three traffic classes. And then I'm creating a mapping here: priority 3 is mapped to traffic class 0, which is then mapped to queue 0; priority 2 goes to traffic class 1 and then queue 1; and everything else is going to be mapped to the remaining queues. I always get confused about the numbers here, 3 and 4. And then if you look here and dump the classes, you see that it has one class here for one of the queues, another class here for another queue, and then this is the third traffic class, and it's attached to two transmission queues.

And then, on the breakdown, the next thing was: now that we have exposed the transmission queues, we have to be able to install the traffic shapers that I want to use on every single queue.
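Before moving on to the shapers, here is roughly what the mqprio setup just described looks like as a tc command. This is a sketch rather than the slide itself: the interface name eth0 and the handle 100 are assumptions, and the map encodes the mapping described above (priority 3 to traffic class 0 and queue 0, priority 2 to traffic class 1 and queue 1, everything else to traffic class 2 on the I210's remaining two queues).

    # Root qdisc: 3 traffic classes on a 4-queue NIC (assumed name eth0).
    # "map" gives the traffic class for priorities 0..15; "queues" is count@offset per class.
    tc qdisc add dev eth0 parent root handle 100 mqprio \
        num_tc 3 \
        map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
        queues 1@0 1@1 2@2 \
        hw 0

    # Dump the classes that mqprio exposes, one per hardware queue:
    tc class show dev eth0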
So for the credit-based shaper, there was nothing out there upstream, and so we designed this new qdisc called CBS. It was merged recently, in I think December or November, so it's part of kernel 4.15 already. And as part of the patch set, we provided support for the I210 driver. The CBS qdisc provides hardware offload, so you can just offload the work completely to the controller, if it has support for it, of course. Or you can use a software fallback: if you are using a controller that doesn't have support for the credit-based shaper, you can still use the qdisc and have credit-based shaping done inside the kernel. It's a software best effort, right? The configuration parameters are all derived directly from the standard, so we didn't invent them: you need the low credit and the high credit, the send slope and the idle slope. And the idle slope is, basically, the bandwidth that your traffic class requires. And then you have a parameter to dial the offload option, basically.

So in this example here, I configured mqprio to expose the queues, right, with three traffic classes, and I'm now installing the CBS qdisc onto traffic class one, OK? So I did it, and then if I just dump the classes now, you can see that CBS is installed here.

And then the next qdisc we started working on was for time-based scheduling. Again, there was nothing out there upstream. We started working on this last November, I think, together with Richard Cochran; he's the PTP maintainer. This work is comprised of two different interfaces: one is the TBS qdisc, TBS stands for time-based scheduling, and the other one is the txtime socket option, right? So first, talking about the qdisc: again, it provides hardware offload and also a software fallback mechanism. So if you want to use time-based scheduling and you don't have network card support for that, you can use this qdisc. And it's coming along well. We started working on this, as I said, at the end of last November, and last week, last Friday, I sent out the RFC version three. I think we're almost ready for a final patch set here. The interface is settling; I just got one request for a change, and I'll be working on that starting next week. And again, we're providing support for the Intel I210 controller.

The way this qdisc works is that it can hold packets inside its buffer until the transmission time of the packet minus a configurable delta factor. So if you look at the parameters here, in this case I'm installing the TBS qdisc, now on traffic class 0, and I'm configuring a delta parameter of 150 microseconds. So if your first packet that gets there is supposed to be transmitted two seconds from now, then the qdisc is going to hold that packet until two seconds minus 150 microseconds, and then it's going to dequeue the packet into the net device, right? DMA timing? No. OK, I'll explain now, I hope.

So, because it's time-centric, we need a per-packet timestamp, right? And that's what you use the other interface for: the application must provide a per-packet timestamp. And then the qdisc must know what the reference for that timestamp is. So that's why we have a clockid parameter here, so you can configure the qdisc for that traffic class with a given clock domain, right?
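A rough sketch of the two configurations being described, on top of the mqprio setup from before. The numbers are illustrative only (a real setup computes the credits and the idle/send slopes from the stream reservation and link speed, per the standard), eth0 is an assumed interface name, and the TBS options follow the talk's description of the RFC, so the exact syntax may differ from what eventually lands upstream.

    # Credit-based shaper on traffic class 1 (queue 1, class 100:2 under mqprio),
    # offloaded to the controller. Slopes are in kbit/s, credits in bytes.
    tc qdisc replace dev eth0 parent 100:2 cbs \
        idleslope 20000 sendslope -980000 hicredit 30 locredit -1470 offload 1

    # Time-based scheduling on traffic class 0 (queue 0, class 100:1):
    # hold each packet until its per-packet txtime minus delta (here 150 us),
    # sort by txtime, and offload the final launch to the NIC.
    tc qdisc replace dev eth0 parent 100:1 tbs \
        clockid CLOCK_REALTIME delta 150000 offload sorting

Note the clockid argument: it tells the qdisc which clock the per-packet transmission times refer to.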
The clock domains of the packets and of the qdisc must be the same; otherwise, we drop the packets inside the qdisc.

The other thing that this qdisc does, and I think it's really cool, is it sorts packets based on the transmission time. So imagine you have ten applications in user space, and they're all sending traffic on the same traffic class, and they have different periods. The network card is going to block packets: the head of the queue is going to block all the packets behind it, right? So if the packets arrive out of order, then you may have a packet in the future, a packet that is supposed to be sent two seconds from now, blocking a packet that's supposed to be sent one second from now, if it hits the net device first. So for that reason, and what I noticed when I was talking to customers is that most people just do the sorting of their packets in user space, we decided to provide that from the qdisc directly. So, optionally, you can dial an option to turn sorting on, and then the qdisc is going to sort the packets based on their transmission time while it's holding them inside its buffer.

I'm sorry? The question is whether we can offload sorting. I'm not aware of any network controllers that sort packets, so the answer is no, but only because of the lack of support at the moment. Yes. So are you saying that the hardware might do that automatically? Yes. You can't. Well, you could, but we got a very clear opinion from David Miller, the maintainer of the network stack, on this sorting thing, and I hope I'm not quoting him wrongly here: once packets get into the net device queue, so once they become descriptors, that's it, there is no sorting. He doesn't want to be dealing with that in the Linux network stack, at least. So, short answer: if a hardware is capable of performing sorting, then yes, there is nothing preventing us from just enabling offloading to the hardware. But I'm not aware of any hardware doing that at the moment. So we're providing that from the qdisc, basically.

So the qdisc doesn't work by itself. As I said, it needs a per-packet timestamp, and for that reason we are creating other interfaces for the sockets, right? As I said, of course, the end station has to transmit traffic, and this is Linux, so we use a socket interface for transmitting packets. That's obvious. And as part of the TBS work, we are adding this new socket option, the txtime socket option. The socket option basically enables the feature, right? And it lets the kernel know that, for that socket, there are valid timestamps on the packets that need to be copied into the socket buffers. In addition to that, the clock ID that I mentioned, the clock ID for the per-packet timestamps, is actually a socket option argument. But in addition to that, we are also adding a cmsg-based interface, so you can set the transmission time per packet. And there is also a flag, currently called drop-if-late. Remember that I had that question on the traffic shaper slide: what is the transmission time, is it no later than, or no earlier than? Because TBS is not a standard shaper, it's not defined in an IEEE document, it's basically common sense, and no one has made a decision on that. And I've heard from different customers that, depending on the system, they might want it to be a strict deadline or a soft deadline. So for that, we created this flag.
So this flag is going to tell the qdisc whether it should drop a packet that became late while it was buffered, or not. Yes.

And then the other thing: now we are capable of exposing the queues, and we can install the traffic shapers, et cetera, but we still need a way to classify traffic, right? So traffic must be able to go from the application to the right transmission queue. And for that, because we created a mapping with mqprio, the kernel already has a mechanism. We're just using the SO_PRIORITY socket option. What it does is flag all packets from a socket with a priority, and then, because mqprio created the mapping for us, the network stack is just going to take all the traffic from that socket and steer it to the right transmission queue. So our job is done. The SO_PRIORITY socket option is our preferred method, but there are other ways to do it: you can use iptables or the net_prio cgroup as well. And a caveat here is that, when you use that, that's the priority that ends up being used in the PCP field of the VLAN tag of the frames. So in the end, this also tags all the traffic of every class in the field that the network uses to identify traffic. So it does everything for us. Any questions here?

So what happens if an application... So the question was, what happens when an application tries to send, let's say, out-of-order packets, basically? I think I'm getting the problem. Yeah, that is a problem. What the qdisc does, besides dropping the packets that became late inside it, depending on the value set on the flag, right, is that it always keeps track of the last timestamp that was dequeued into the net device. So if a packet in the past, from the perspective of the qdisc, is now to be dequeued, it's also going to be dropped. But these are the only mechanisms we have, unfortunately. There is some coordination needed in user space as well; unfortunately, there is no mechanism for that at the moment. Yeah. So he was asking if there is a way to debug which application is trying to starve the others, and the answer is, at the moment, no. But we can talk more later about these ideas.

CBQ? Yes. I mean, in my view, TSN systems always require support from hardware. But on Linux, we have to make everyone happy, right? So I can't create an interface that only works for people with certain hardware. That's why we always have to provide the software best effort as well. Yes. Yes, you can. Yes, you're right.

So, just some results here. You can always do time-based transmission using software. You can do that. I mean, your application can, I don't know, read the clock and then sleep. In my test here, I'm transmitting 322 bytes, right, all headers included, plus my payload, and I'm transmitting packets every one millisecond. And then I measured on the receiver side: I captured the packets, and because I always start at a rounded-up time in the future, let's say two seconds from now, I measured the offset of the arrival of packets within the period, OK? I didn't create this test. This test was devised by Richard Cochran when he worked on the first version of TBS, and now we are using it as our baseline, OK? So if you use pure software for that, and on my machine I'm not using PREEMPT_RT, so the numbers can probably get better.
And as you can see here, the minimum offset that I got was 482 nanoseconds, which is pretty good. But the problem is, this is TSN, right? So I don't care about the average; I care about the worst case. And the worst case is almost a millisecond, so I almost missed an entire deadline when I was trying to do that all in software, right? And this is a rather good machine, a Kaby Lake, and running cyclictest gives me a maximum latency of around 50 microseconds. So still, that's problematic, I think, for most applications.

But then, if you're using TBS, what I have here is TBS running on the software fallback, right? Again, I'm sending packets every one millisecond. And you can get very good minimum numbers, a better average, and the standard deviation is better as well. But still, the problem is the maximum, right? And if you look at peak to peak, which is the jitter, that's super bad.

And now this is TBS running on hardware, OK? With the hardware offload and sorting and everything, so everything on. This is with a transmission period of one millisecond, and this is with a transmission period of 200 microseconds, which is quite tight. But now here, what you can see is that my maximum offset from the expected arrival time was 506 nanoseconds, which is great. My jitter is of only 80 nanoseconds. Yeah, the qdisc does that for you. No, no, so yes, you need to configure the qdisc correctly, basically. For each system you may need a delta factor that is slightly different, right? Yeah, so in this case I was using 130 microseconds of delta, so the qdisc would hold packets until 100-something microseconds before their transmission time, and then we use a high-resolution timer inside the qdisc. So it's good. And you can see here, these are very good numbers, in my opinion. OK, I really need to move forward; I still have a few more slides. Sorry, but we can talk later.

I've been talking about kernel interfaces, that's the focus of this talk, but let's just talk briefly about user space as well, right? As I said, remember the demo that I mentioned from Eric Mann, called OpenAVB? This became a big project called OpenAvnu, right? And it has a bunch of components that are quite useful if you're developing TSN systems and applications. Unfortunately, there are not a lot of companies and people contributing to this project, but it's a very nice project, so I invite you all to become part of it. Very recently, one of my colleagues from Intel contributed a new AVTP library to this project, so please have a look. And for time synchronization, which I haven't talked about at all in this talk, we've basically been using linuxptp. So we use ptp4l to keep the network cards' controller clocks synchronized, and then we use phc2sys to synchronize the controller clock to my system clock. So that's how we've been using it in our architecture.

Last, and the smallest section as well: I just want to talk a little bit about what I think is coming ahead for us. So we talked about Qav and we talked about TBS; these are the most used shapers at the moment. But Qbv is a big thing already, and Qbu as well, and it's super important that we somehow provide support for these shapers on Linux. Last year, when we created the CBS qdisc, we actually shared some ideas, and we implemented a prototype of a new qdisc called taprio. That stands for time-aware priority. It's basically a time-aware version of mqprio, so you can configure a full schedule per port.
But we got pushback last year, because there were no controllers that were Qbv-compliant back then, network card controllers, I mean, so the maintainers didn't see the point of us trying to come up with that interface at the moment. But I was recently told that there are a few vendors who are already shipping controllers that are Qbv- and Qbu-compliant, so we may need to revisit this soon. There is a caveat here: the TBS qdisc, in theory, can be used to implement the schedule on the end stations. The problem is that the TBS qdisc works per queue, so we would need an extra piece of software providing a scheduler that can convert the whole per-port transmission schedule into a per-queue, actually per-stream, schedule, so TBS can be used correctly. So there is more work to be done if we want to use TBS for Qbv, basically; it requires another piece of software. In my opinion, we should try to revisit the taprio qdisc, maybe. Or, if we think that providing the software fallback is going to be a problem for this, then we may need to just create an interface based on ethtool or ip route somehow. But that will be needed. So this is something that we have to work on in the Linux kernel, OK?

And then, I talked a lot about configuration interfaces, right? But for the data path, as I said, we just use the socket interface. The thing is, the Linux network stack is very good for throughput, right? I mean, it's designed for that; it's designed for data centers. But TSN, especially for industrial control and the control systems, is going to require not only bounded latency, but bounded low latency, and I don't think the Linux network stack is quite ready for that yet. It does a very good job, but not when it comes to making systems more deterministic. So there are a few projects looking into that already: the eXpress Data Path, and there is a new socket family coming along that's going to provide zero copy for socket buffers. So I'm very enthusiastic about this work. I'm not working on it myself, but I wanted to mention it here today, OK?

And then let's start to wrap up, right? So we talked about TSN here today and how it provides bounded latency on Ethernet-based LANs. We are starting to develop software interfaces for TSN, and these are becoming available upstream, starting with the CBS and TBS qdiscs. We're going to need some future work for other traffic shapers, namely Qbv and Qbu, and I think that providing a low-latency data path is going to be the biggest challenge that we have ahead in the Linux network stack. Yeah, there are also a lot of user space building blocks just starting to gain traction, and OpenAvnu is where we are centralizing those. And I didn't talk at all about Zephyr, because this is a Linux talk, but we're already working with the Zephyr team at OTC, the Open Source Technology Center at Intel, and Zephyr will be providing TSN interfaces very soon, OK? So it's happening.

Just a call to action here. If you work at companies that ship products with TSN controllers and you have upstream drivers, please enable support for CBS, and for TBS when it gets merged. If you have use cases, please come along. We're doing this work all in the open on the netdev mailing list, or just talk to me after this talk as well. I'm a platform enabler, not a product developer, so the more I hear about your use cases, the more I learn, and the better I can make the upstream interfaces, OK? I'm not doing this for myself.
I'm doing this for people like you, I think. And yes, if you can start testing our code, then please help us with bug fixes and by contributing code, OK? I've put a few references throughout the slides, and then a bunch more references here at the end, and I think that's it. So, questions?

One there, OK. I didn't hear you at all, sorry. Whether Zephyr OS can run on a hypervisor? I want to say yes, but it's been almost two years since I stopped working on Zephyr, so I'm going to say let's talk about this with the Zephyr team after this talk, OK? I'll introduce you to the right people to answer that, if you don't mind. It does? Perfect, it does. If the network stack on Zephyr is running on a virtualized network, how is that going to work? Yeah, that's a problem. That's one of the problems we're looking at at Intel now, but I can't talk more about it just yet, OK?

Any other questions? If there are any... So the question is, do firewalls or iptables add overhead to the TSN path? Anything that is on the data path for packets is going to add latency, so the less, the better. Any extra piece of work you do there matters.

No, if the question was, does this require a real-time Linux kernel? No, it does not. The tests that I just showed here, I'm not running PREEMPT_RT whatsoever.

Not sure I got the question, sorry. So once packets get to the net device, what? Yes? No, once the packets get there, what drivers usually do is just create a descriptor and DMA them to the network card.

No more questions? OK, thank you. Thank you very much. Thank you.