Welcome to my talk about Time-Sensitive Networking (TSN) with mainline embedded Linux. A quick introduction to myself: I joined Toradex in 2011, where I spearheaded the embedded Linux adoption and introduced the upstream-first policy. At times I've been a top-10 U-Boot contributor and a top-10 Linux kernel arm-soc contributor. We have an industrial embedded Linux platform called Torizon, and it's fully based on mainline technology. It uses mainline U-Boot with distro boot, KMS/DRM graphics with Etnaviv for the Vivante GPU, and over-the-air updates with OSTree. For the application framework you can use Docker or Podman. What are we talking about today? I'm going to introduce TSN. We're going to have a look at the traffic control (TC) subsystem. We'll look at the credit-based shaper (CBS), earliest TxTime first (ETF) and the time-aware priority scheduler (taprio) qdiscs. Then we'll have a look at the mainline TSN ecosystem with the Linux PTP (Precision Time Protocol) project and also the ALSA and GStreamer Audio/Video Transport Protocol (AVTP) plugins. And I'm going to show you some TSN-capable embedded Linux boards. We have two flavors there: some with the Intel i210 Ethernet controller using the igb driver, as well as the Synopsys DesignWare Ethernet Quality-of-Service controller IP integrated into the NXP i.MX 8M Plus SoC, using the dwmac/stmmac driver. At the end I also have some hardware here, and I can show you a live demonstration. Let's get started. Time-sensitive networking, TSN, was formerly known as audio/video bridging, AVB; IEEE basically expanded its scope and then rebranded it as TSN. It's a set of standards enabling time-sensitive audio/video applications on local area networks. It provides, for example, time synchronization and bounded transmission latency, then of course resource management in a network, meaning the availability and reservation of bandwidth, and of course overall application interoperability.
This table shows you the different standards available. There is IEEE 802.1AS, which is about timing and synchronization. Then 802.1Qav is about forwarding and queuing; that is called forwarding and queuing for time-sensitive streams, FQTSS. Then there is 802.1Qat, which is about path control and reservation, with, for example, the Stream Reservation Protocol, SRP. Then there is 802.1BA, the bridging part; this is the audio/video bridging (AVB) systems standard. Then we have IEEE 1722 AVTP for the audio/video transport; it's basically a layer-2 transport protocol for time-sensitive applications. And in IEEE 1722.1 there is also AVDECC, which is about device management and control: the device discovery, enumeration, connection management and control protocol. Then 802.1Qbu and 802.3br are about forwarding and queuing with frame preemption, and 802.1Qbv covers forwarding and queuing with the enhancements for scheduled traffic. Then there is 802.1Qca on path control and reservation, 802.1Qcc about central configuration methods, 802.1Qci about time-based ingress policing, and 802.1CB about seamless redundancy, basically frame replication and elimination for reliability purposes. This is just a summary of the standards available for TSN. So how does it integrate into Linux? In Linux we have the traffic control (TC) subsystem. That is basically the control plane for the whole TSN functionality. It allows managing and manipulating the transmission of network packets: you can police, classify, shape and also schedule traffic. It allows mangling packets during classification by using filters and actions. And then you can use queuing disciplines, so-called qdiscs, to queue up and later schedule your traffic. Enqueue requests that a packet be queued for later transmission, and dequeue requests that a queued-up packet be chosen for immediate transmission.
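As a rough illustration of that control plane, the tc utility can both inspect qdiscs and classify traffic with filters and actions. A minimal sketch, where the interface name eth0, the port number and the priority value are just illustrative assumptions:

```shell
# Show the qdiscs currently attached to an interface
tc qdisc show dev eth0

# Classification with a filter and an action: attach a clsact
# qdisc, then mark UDP traffic to an (assumed) port 17220 with
# skb priority 3 on egress using the skbedit action
tc qdisc add dev eth0 clsact
tc filter add dev eth0 egress protocol ip u32 \
    match ip dport 17220 0xffff \
    action skbedit priority 3
```

The priority set here is what later per-queue qdiscs and priority-to-queue mappings act upon.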
The TC subsystem implements the forwarding and queuing enhancements for time-sensitive streams, the FQTSS that we saw before in the table. And the actual utility in Linux to do that is the tc utility, usually in /sbin/tc. So let's look at one of the qdiscs, for example the multiqueue priority (mqprio) qdisc. This is a very simple queuing discipline: it basically just maps traffic flows to hardware queues. If you have hardware with multiple queues, it makes sense to map traffic flows basically one-to-one to them. It exposes the hardware transmission queues, and it defines how Linux network priorities map to those hardware traffic classes, so basically a mapping from traffic classes to the hardware queues. This can then be used by later qdiscs operating on a per-queue basis, like for example the CBS or ETF that we will see later. If we look at the credit-based shaper, CBS, it basically allows credit-based fair queuing, a computationally efficient way to do fair queuing of network traffic. The way that works is that you have a certain credit and a so-called idle slope: if nothing is transmitted from your queue, you gain credit with a certain slope, that's the idle slope, and when your queue is sending, it uses up credit along the send slope. You basically have to pay when you transmit traffic, and your credit decreases by this send slope. And then it also defines a high and a low credit. The high credit is the maximum credit that you gain when you continue not to transmit anything: the idle slope gives you more credit, but only up to that maximum. The low credit is the same on the other side: if you keep sending, you end up at the lowest credit you can have, and if you don't have enough credit, you cannot send. That is as per IEEE 802.1Q-2018, which incorporates what was formerly known as 802.1Qav.
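The mqprio-plus-CBS combination just described can be sketched with tc roughly like this. The interface name, queue layout and bandwidth are illustrative assumptions; the slope and credit values follow the worked example in the tc-cbs(8) man page for reserving about 20 Mbit/s on a gigabit link:

```shell
# Map Linux priorities to 3 traffic classes and hardware queues:
# priority 3 -> tc 0 (queue 0), priority 2 -> tc 1 (queue 1),
# everything else -> tc 2 (queues 2-3)
tc qdisc add dev eth0 parent root handle 100 mqprio \
    num_tc 3 \
    map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
    queues 1@0 1@1 2@2 \
    hw 0

# Attach a CBS shaper to the first hardware queue; offload 1
# asks the driver to program the controller's hardware shaper
tc qdisc replace dev eth0 parent 100:1 cbs \
    idleslope 20000 sendslope -980000 \
    hicredit 30 locredit -1470 \
    offload 1
```

idleslope/sendslope are in kbit/s and must sum to the negative of the port rate; hicredit/locredit are in bytes.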
In the graphic you can see that we basically have three AVB packets queued. Then we start sending, and you see a fourth packet coming in. If you now have interfering traffic, shown in the lower part, then as soon as your credit is positive the AVB packet can be launched once the interfering traffic is finished. And when the credit goes negative, the third packet cannot be transmitted immediately; it is held and can only be transmitted later. We see that towards the middle: when there is enough credit again, it can be sent. That is basically how this credit-based shaper works. Then another qdisc is earliest TxTime first, ETF. It's not an FQTSS feature per se, but certain Ethernet controllers, for example the i210 but not the DWMAC, provide a launch-time feature: in hardware you can queue frames and also give them a launch time. This qdisc enables frames to be transmitted at a specific time, making use of such a hardware feature. The way that is done: it maps to the SO_TXTIME socket option, which allows the application to specify the transmission time it wants for each frame. The ETF qdisc then ensures that frames coming from multiple sockets are sent to the hardware exactly ordered by this transmission time. Then the enhancements for scheduled traffic, EST, allow each queue to be scheduled relative to a known timescale. You can look at it like a transmission gate associated with each queue: the state of that transmission gate can be open or closed, and that determines whether frames from that queue can be selected for transmission or not. Each port is associated with a gate control list, GCL, which is an ordered list of those gate operations. That is as per IEEE 802.1Q-2018, formerly 802.1Qbv.
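Coming back to ETF for a moment: on hardware with such a launch-time feature, an ETF qdisc is typically attached under one per-queue parent, and applications then set SO_TXTIME on their sockets. A sketch, where the parent handle (an mqprio root with handle 100: assumed already installed) and the delta value are illustrative:

```shell
# ETF on one hardware queue: order frames by their requested
# TxTime against CLOCK_TAI; delta (in ns) is the headroom the
# qdisc reserves before each transmission time; offload uses
# the controller's launch-time feature (e.g. on the i210)
tc qdisc add dev eth0 parent 100:1 etf \
    clockid CLOCK_TAI \
    delta 500000 \
    offload
```

Frames whose requested transmission time can no longer be met are dropped rather than sent late.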
EST allows the system to be configured to participate in complex networks, basically similar to what is envisioned by IEEE 802.1Qcc-2018. Of course, somebody has to come up with that GCL schedule. That usually requires a central entity which has full knowledge of all nodes, the traffic produced by those nodes, and their requirements; it can then produce a schedule for the whole network. The primary use cases for this are industrial; it's similar to other fieldbuses where you basically have full knowledge of what your network looks like. Then there is the time-aware priority shaper, the taprio qdisc. It's similar to mqprio in that it defines how Linux network priorities map into traffic classes, and this mapping maps those classes to hardware queues. And it enables configuring a GCL for a given interface. That again is IEEE 802.1Q-2018. If we now look at the Linux ecosystem, what exists there? In the Linux kernel networking subsystem, we just saw there is TC, traffic control, and of course all the qdiscs that I previously discussed. On top of that you also have the Linux PTP (linuxptp) project, which is about time synchronization using the generalized Precision Time Protocol, gPTP. This is usually used as the base infrastructure so that all nodes in your network actually agree on the same time; otherwise it would be difficult to schedule things at a specific time if they don't even share a time reference. And on top of that you can then use, for example, the libavtp project, which is an implementation of the Audio/Video Transport Protocol, AVTP, standard. Usually you don't want to use that directly in your application, but rather through other frameworks: for audio that is usually ALSA, and for video, or also for audio, that might be GStreamer.
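Putting those two pieces together: taprio schedules against a known timescale, which presupposes the clocks are already synchronized by linuxptp. A rough sketch of both steps, assuming an interface eth1, a gPTP configuration at /etc/linuxptp/gPTP.cfg, and purely illustrative gate windows:

```shell
# Sync the NIC's PTP hardware clock to the gPTP grandmaster, then
# the system clock to the PHC (gPTP uses transportSpecific 1)
ptp4l -i eth1 -f /etc/linuxptp/gPTP.cfg --step_threshold=1 &
phc2sys -s eth1 -c CLOCK_REALTIME --step_threshold=1 \
    --transportSpecific=1 -w &

# taprio: three traffic classes on three queues, cycling through
# a 1 ms gate control list; "S 01" opens only traffic class 0,
# "S 02" only class 1, "S 04" only class 2 (intervals in ns)
tc qdisc replace dev eth1 parent root handle 100 taprio \
    num_tc 3 \
    map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
    queues 1@0 1@1 1@2 \
    base-time 1000000000 \
    sched-entry S 01 300000 \
    sched-entry S 02 300000 \
    sched-entry S 04 400000 \
    clockid CLOCK_TAI
```

base-time anchors the first cycle on the CLOCK_TAI timeline; the schedule then repeats with the summed interval as the cycle time.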
For the Audio/Video Transport Protocol, AVTP, there are plugins so you can fairly easily make use of this from the application layer; underneath they again use libavtp. For audio there is the so-called AAF, the ALSA AVTP Audio Format plugin. ALSA, the Advanced Linux Sound Architecture, is the low-level framework providing audio functionality under Linux. If you want to use AVTP there, you can use this AAF plugin, which is basically a regular PCM plugin. It uses the Audio/Video Transport Protocol, AVTP, and allows transmitting and receiving audio samples through a TSN-capable network. That allows you to easily implement AVTP talker or listener functionality. But again, here too you require gPTP, so that the AVTP talkers and listeners actually share the same time reference. That is used for what AVTP calls the presentation time, which informs when certain PCM samples are actually presented to the application layer. And FQTSS provides the bandwidth reservation and traffic prioritization for such an AVTP system. Then there are also the GStreamer AVTP plugins. GStreamer is a higher-level framework which provides multimedia functionality: it offers encoding, multiplexing, filtering and rendering to applications. This particular plugin set is part of the gst-plugins-bad collection. It uses the Audio/Video Transport Protocol, AVTP, to handle the AVTP packetization, and on top of that it implements typical talker and listener functionality basically out of the box. It can, of course, be leveraged by any GStreamer-based application in order to implement such TSN audio/video use cases. In detail, the actual audio plugins are called avtpaafpay and avtpaafdepay; the depay basically gets the payload out of packets again, and the pay puts it into the packet.
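On the ALSA side, the AAF device described above is defined in the ALSA configuration (asoundrc). A sketch following the alsa-plugins AAF documentation, with the interface, addresses, stream ID and timing values as placeholders:

```
# AVTP Audio Format PCM device (field names per the alsa-plugins
# AAF plugin; values are placeholders)
pcm.aaf0 {
    type aaf
    ifname eth0.5
    addr 01:AA:AA:AA:AA:AA
    prio 2
    streamid AA:BB:CC:DD:EE:FF:000B
    mtt 50000
    time_uncertainty 1000
    frames_per_pdu 6
    ptime_tolerance 100
}

# Wrapper converting native-endian PCM to the big-endian samples
# AVTP carries on the wire
pcm.convert0 {
    type linear
    slave {
        pcm aaf0
        format S16_BE
    }
}
```

Applications then simply open convert0 (or aaf0) like any other ALSA PCM device.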
These pay and depay elements extract raw audio payload from, respectively encode it into, such AAF AVTPDUs (AVTP data units), as per IEEE 1722. There is also avtpcvf, which does the same for compressed video (CVF, the compressed video format). And then there are also sinks and sources directly available: avtpsink, which basically sends AVTPDUs out over the network, and avtpsrc, which receives such AVTPDUs from the network. Whoever has played with GStreamer knows how this source and sink business works: from a source you receive multimedia data and then you can pipe it through further GStreamer elements. So, for example, you would get that multimedia traffic from the network and pipe it through decoders to another sink, which might be a display or something like that. And on the other side, if you have, for example, a camera available on your system, you can do some processing and then pipe it to a sink which sends that traffic off to the network. Now let's have a look at some TSN-capable embedded Linux boards. Like I initially said, we have some with the Intel i210 Ethernet controller on board; those are the two modules shown on the top right. The i210 is actually the predominant PCI Express TSN network interface controller. It uses the igb driver, and it's used as the on-module Gigabit Ethernet controller on the Toradex Apalis T30 as well as Apalis TK1 modules. It's basically a PCI Express network controller, a MAC and a PHY in one chip, the chip shown with the red arrow there. Then other hardware which is TSN-capable uses the Synopsys DesignWare Ethernet Quality-of-Service controller IP. That is, for example, integrated into the NXP i.MX 8M Plus SoC. This SoC actually has two MACs integrated. One is the good old FEC, the Fast Ethernet Controller; well, nowadays it can also do Gigabit, but it's basically a continuation of that older Motorola IP.
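To make the talker and listener roles concrete, here are pipeline sketches using these plugins. The stream ID, VLAN interface and priority are placeholders, and depending on versions the exact caps or parser elements may need adjusting:

```shell
# Hypothetical H.264 video talker: encode, packetize into AVTP,
# send on the (assumed) VLAN interface eth0.5 with priority 3
gst-launch-1.0 videotestsrc is-live=true ! \
    x264enc tune=zerolatency ! h264parse ! \
    avtpcvfpay streamid=0xAABBCCDDEEFF000B ! \
    avtpsink ifname=eth0.5 priority=3

# Matching listener: receive AVTPDUs, depacketize, decode, display
gst-launch-1.0 avtpsrc ifname=eth0.5 ! \
    avtpcvfdepay streamid=0xAABBCCDDEEFF000B ! \
    decodebin ! videoconvert ! autovideosink
```

Replacing videotestsrc with a v4l2src camera element gives the camera-to-network case described above.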
As a second network interface, the i.MX 8M Plus also integrates this DesignWare IP, which has real TSN capability. The driver in Linux for that is the DWMAC stmmac driver, and it is used with the on-module Gigabit Ethernet PHY. That's what the arrows show: the MAC is of course integrated into the main SoC, but we also have the Ethernet PHY for this second MAC integrated on the module. In our case we usually use Micrel PHYs, which is nowadays Microchip. OK, now if you want to actually make use of this, how would you go about that? If you want to build an OpenEmbedded/Yocto Project image with such TSN functionality, you can basically start with our regular TDX reference multimedia image and extend that. You have to use the master branch, because the dunfell branch is too old, basically its GStreamer version, since the AVTP support requires GStreamer 1.18 or later. Then you add the following additions to your conf/local.conf. You basically append your image install with GStreamer libav, GStreamer plugins bad for AVTP, and you can also add the plugins ugly, which gives you some further plugins; among the ugly ones are, for example, ASF support and x264 for H.264 encoding. Then some tools that you might want: of course iproute2, which has a separate package, iproute2-tc, for the traffic control utility. On the ALSA side there is the PCM AAF plugin, and of course libavtp, which some of these plugins use underneath. For the NXP stuff there is also a package group, packagegroup-fsl-gstreamer commercial, and another useful thing, if you want to look at the actual traffic, might be tcpdump.
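The additions just listed could look roughly like this in conf/local.conf. Recipe names follow the openembedded-core and meta-freescale conventions of that era and may differ per branch; this is a sketch, not the exact slide content:

```
# conf/local.conf additions for a TSN/AVTP-capable image (sketch)
IMAGE_INSTALL_append = " \
    gstreamer1.0-libav \
    gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-ugly \
    iproute2-tc \
    alsa-plugins \
    libavtp \
    packagegroup-fsl-gstreamer \
    tcpdump \
"
```

On newer Yocto releases the append syntax is IMAGE_INSTALL:append instead.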
Then, for this NXP-specific GStreamer stuff, you need to add the commercial license flag to your accepted license flags, and on the PACKAGECONFIG side you also need to make sure that plugins bad includes the AVTP support, that plugins ugly includes what is required for H.264, and in alsa-plugins you have to add the AAF explicitly. And here I also show a custom recipe to actually build plugins bad with such a configuration. Meanwhile I have upstreamed all of this, so if you use regular OpenEmbedded master, this is now all upstream and you don't need any special custom recipe any longer, OK? But I'm still showing it here: it's basically this PROVIDES line declaring the AVTP support, so as part of plugins bad it then also includes the AVTP functionality. You also have to extend plugins bad with this AVTP PACKAGECONFIG; again, meanwhile I upstreamed all of this, so the regular openembedded-core plugins-bad recipe nowadays has this already built in, and you can just use that PACKAGECONFIG like I showed here, it knows what AVTP is. But that is how such a PACKAGECONFIG in a recipe is actually done: through the configure arguments you define what it exactly means when it builds. Then on the Linux kernel configuration side, make sure you enable all the required network scheduling support we discussed before: you need the base net scheduler functionality (CONFIG_NET_SCHED) as well as the qdiscs, so CONFIG_NET_SCH_MULTIQ, CONFIG_NET_SCH_CBS, CONFIG_NET_SCH_ETF, CONFIG_NET_SCH_TAPRIO, CONFIG_NET_SCH_MQPRIO, whatever such functionality you want to use later. Then, in the case of the DWMAC, you also have to make sure that your queue configuration makes those hardware queues that we talked about earlier actually available.
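A device-tree sketch of such a queue setup; the property names follow the snps,dwmac (stmmac) binding, while the node label and the straight-through priority values are illustrative:

```
/* stmmac multi-queue setup sketch: five TX queues, priorities
 * mapped straight through (snps,priority is a bitmask) */
&eqos {
	snps,mtl-tx-config = <&mtl_tx_setup>;

	mtl_tx_setup: tx-queues-config {
		snps,tx-queues-to-use = <5>;
		queue0 {
			snps,dcb-algorithm;
			snps,priority = <0x1>;
		};
		queue1 {
			snps,dcb-algorithm;
			snps,priority = <0x2>;
		};
		/* queue2..queue4 follow the same pattern with
		 * priorities 0x4, 0x8 and 0x10 */
	};
};
```

An analogous snps,mtl-rx-config node describes the RX queues.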
You usually do that via the device tree. In our case that is in arch/arm64/boot/dts/freescale, in the Verdin iMX8M Plus dtsi, and there you basically have an RX node and similarly a TX node where you configure all these queues. You say how many queues are available, and this hardware gives you five queues; then you can configure each queue and its priority. More or less we just map them straight through: queue one, priority zero; queue two, priority one; and so forth. You can do the same for your TX node, and that, just like discussed earlier, does a one-to-one mapping from Linux traffic priorities to these hardware queues. Then of course there is also a certain system setup required to get started with a TSN system. First we have to map the Linux-internal packet priority, the so-called SO_PRIORITY, to a certain VLAN header PCP field. The way you do that is with the ip utility: you say ip link add link eth1, so this is the Ethernet that is, for example, on the DWMAC, and you name it eth1.5; the .5, remember, is the VLAN, so we use VLAN 5, and we also say that explicitly with type vlan id 5. Then we can put an egress QoS mapping on it, and we also just map 2 to 2 and 3 to 3, and then we bring that link up. That is basically the top-level configuration so that these hardware queues are reachable straight through like that. Then of course we need to set up the time synchronization. Usually in a TSN system you need a PTP hardware clock, a so-called PHC, that is synced between the PTP masters and slaves. That means the RMS offset between a PHC and the GM, the grandmaster clock, is usually smaller than 100 nanoseconds. The PHC and the system clock, the so-called CLOCK_REALTIME, also need to be synced, and that means a system clock offset smaller than 100 nanoseconds. And the PHC time is usually set in the so-called TAI
(International Atomic Time) timescale, while the system clock you usually have in UTC, so you need to configure the UTC-to-TAI offset for your system clock; that is CLOCK_REALTIME versus CLOCK_TAI. The way that can be done is, for example, with the linuxptp project and its ptp4l binary: you give -i for the interface, eth1, then -f for a configuration file, the gPTP.cfg file, and you can give it a step threshold of 1; then you start that, and it runs PTP basically in the background. With the pmc utility you then actually set the whole grandmaster configuration that is given here; I'm not going to go into full detail, that's just all the configuration like we discussed. And with phc2sys you also make sure that your system clock actually follows this. Usually, when you have done that on multiple systems, you can use a script like check_clocks; when you Google for that, and I also have some references at the end, you'll find it, and it will basically check and make sure that you have them all synced. Then the next step is that you configure your network scheduling, for example with the mqprio qdisc. Make sure you're not using queue 0, because on this EQOS controller queue 0 does not support hardware CBS, so in this example I avoid queue 0 for any kind of AVB processing. But we can map the socket priority (SO_PRIORITY) 2 to traffic class 2 and 3 to traffic class 1, and all the other socket priorities we can leave on queue 0, which anyway doesn't support any such priority handling. You do that with the tc utility; that's basically the command that configures it like that. Then for the CBS qdisc we can use queue 1 for video and queue 2 for audio, and the next two commands configure that. And then the pfifo qdisc is basically just a fallback: by default queue 0, which doesn't support TSN, just uses a regular pfifo, so all
the rest of the traffic that you might have on your system can just go through that queue. This slide shows the ALSA AAF audio demo. You have to set up the aaf0 and the convert0 plugins; you can do that in your asoundrc, as I've shown on the right side there. You basically create the PCM aaf0 and the PCM convert0 with that configuration: you define the Ethernet interface, the socket priority and the stream ID, and you also need to convert from big to little endian. Then you can use a talker, for example speaker-test; those of you who have played with ALSA before are probably familiar with that. You just use speaker-test with the device aaf0, and on the listener side you can use arecord with aplay, like that. If you want to check which qdiscs are actually in use when you run such samples, you can use tc -s qdisc, which will list that. For the GStreamer audio demo I've also given the commands here for the talker and the listener part. And then, of course, more interesting is the video demo. The demo actually shows two clocks; you see there are two times given. On the left side you have the timestamp that gets added to the recording before it gets encoded, on the talker side, and the time on the right side is added at the very end, at presentation time, where it actually shows the video. These timestamps should of course be in sync; it might take a couple of seconds to sync it all up when you start everything, but then it actually shows you the real-time offset. And also here you can check the queuing with tc -s qdisc. Very good. I have this basically running here, and I can reboot it again. This is, for example, such an i.MX 8M Plus system that makes use of this kind of hardware, with the DWMAC basically configured like that. Unfortunately I'm running out of time, so I cannot really show you much
more here, but on this system it's this ethernet0 that is using this. OK, any questions so far? A little bit too much detail, so you probably have to digest it first. On my slides you'll also find the references where you can read up more on this topic, and if you have any further questions, you're welcome to contact me directly via Toradex support. Thank you very much.