Hello everybody, welcome to this talk about TC. Don't worry, I will tell you what it is. My name is Jiri Benc, and I'm working on different things in networking at Red Hat, mostly kernel related. So, the tc tool. Okay, so let me ask first: who knows, or who at least heard about tc? Raise your hand. That's good, that's better than expected, actually. So the tc tool is a magical thing that nobody understands. I hope to change that a bit with this talk. It stands for Traffic Control, and it allows exactly that: to control traffic, where it flows and how. It's part of the iproute2 package, so you probably have it already installed on your computer. Traditionally, it's used for QoS (quality of service) setup, so you can do shaping of traffic of different kinds, different speed limitations and so on. I won't talk about this; I will completely omit this aspect and only talk about the other things that can be done with tc. But first, we have to introduce some terminology. You may be familiar with it, or probably not. The first and most important thing is the qdisc. This is short for queuing discipline. What's that? To explain it, I need to introduce two other terms, and those are ingress and egress. These are traditional networking terms. Ingress is traffic that's going to an interface from the network: we have the network right here, and the packets that are flowing to the computer, to the network interface, that's ingress traffic. Egress is the other way: from the computer, from the network interface, to the network. Now, we have two points where we can attach something, where we can hook something in. The first point is right here, at ingress; that's where all the ingress traffic goes. And the second point is right here; that's where we can catch, or attach to, traffic that goes out of the computer. So the first one: traffic that flows to the computer goes to something that's called the ingress qdisc.
So every packet that goes in is enqueued into the ingress qdisc, and then it's passed to an application; somehow, let's not go into details. So this is the ingress qdisc. Again, that's something that's sitting at this entry point at the network interface, gets the packets, and then passes them on to the application. Obviously, the egress qdisc works the same, but in the other direction: applications produce data, the data or the packets are at this point enqueued into the egress qdisc, and then they are passed to the network interface card to be sent out to the wire, to the network. So these are two important things, the ingress qdisc and the egress qdisc. They are independent and they are two different points in the packet processing. Now, what is a qdisc? A qdisc is, for our purposes, just some black box: we put packets in and then they go out. Let's imagine it's a queue, for example; we stick packets in and they are then dequeued and processed further, or passed further, in this case to the interface card. There are different kinds, different types of qdiscs, with different names; usually three or four letter acronyms like HTB, FQ, CoDel or something like that. That's not important at all for us right now, because we're not doing QoS. What is important: all those different qdiscs are of two different types. The first type is called classless. A classless qdisc works exactly as I described: packets go into the qdisc and they are sent out to the network card, which sends them out. Now, a classful qdisc is a bit different beast. It's a bit more complicated, though not that much. The important thing is: it's classful, which means it has classes. Traffic, or packets, that enter this qdisc are put into one of the classes which are attached to, or part of, this qdisc. Into what class? That depends on the qdisc.
Different qdiscs do that differently; they do some magic and select the class to put the packet into, depending on criteria that are usually configurable, but not of much interest to us. What is important is that to each class there is again a qdisc attached. That qdisc can be classless, as we have in the example here, or it can be classful, so it can have more classes, and so on. In the end, we have something that is a tree: the first qdisc, and then it continues to branches and branches and branches, and eventually, after the last qdisc, the packet goes to the actual network interface. So we can select the qdisc: we can configure different qdiscs at different classes, or at the root, as we want to. Now, let's attach a filter. We can attach a filter, or more filters, to a qdisc; if we attach more filters to one qdisc, they are evaluated in a row. A filter is something that looks at the packet, it has some conditions, and it decides whether the packet matches the conditions or not. And that's it. So the obvious question is: what happens then? The filter decides that the packet matches, so what happens? Before I answer that, let's look at this again. A packet is passed to the qdisc, and instead of what we said before, the packet being put into a class or sent to the interface, it's now passed to the filter, because we configured a filter. So every packet is now going here. What happens now? The most obvious thing is, of course, that the filter selects the class where the packet should go. So now it's not the qdisc that decides which class to use, it's the filter. If the packet doesn't match the filter, then the next filter is consulted, and so on. And if no filter matches, then, of course, it's the qdisc that decides. So yeah, this is still not that interesting to us, because we have the same as before, just now it's a filter that decides instead of something inside the qdisc.
What is more interesting for this talk is another thing that can be done with the result of the filtering, and that's an action. So when the packet matches the filter conditions, we can execute a particular action. Now, what can the actions be? We can do a broad range of things. We can drop the packet, but that's not that interesting. We can modify it, change the data in the packet. We can redirect it to a different interface: we decide, okay, we want to steal it from this interface and put it on another one, to be processed by a different interface. We can do that. We can tunnel the packet: encapsulate it into headers and send it out, or decapsulate it, and so on. There are even more options available. Let's look at how this looks in practice, what the command line arguments and options are. First, we're probably interested in what's going on right now on our machine. So let's pick one interface, eth0, and look at the current qdiscs configured there. tc is the tool we're using; qdisc means we're interested in qdiscs; and show, obviously, shows us the qdiscs that are attached to device eth0. And this is the answer: we have a qdisc named pfifo_fast attached to eth0, and some more stuff; more on that later, or never, because most of it is specific to this particular qdisc. The important thing is: this is the name of the qdisc. Let's replace it with a different one; we don't like pfifo_fast for whatever reason, we want to add a new one. One thing to know: pfifo_fast here is the default qdisc. We have not configured any qdisc on eth0 yet, so pfifo_fast is the default one. That means that for our purposes, it's the same as if there was no qdisc at all. This is the reason why we're using the command 'add', why we are adding a new qdisc: there is none so far, and if there is none, the kernel just pretends there is a default one. So right now we have no qdisc and we're adding a new one. We're adding a qdisc to device eth0, at the root.
This parameter specifies where the qdisc should be attached. Root is the main qdisc, the bottom of the tree that we saw; so we are adding the qdisc to the root of the tree, the first one that will be consulted when a packet is sent out of the machine. And we give this qdisc an identifier: handle 100:. A handle is always two numbers separated by a colon; if a number is missing, it's zero, so this is really 100:0. It can be anything, just invent a number. It's not mandatory; we could omit the handle completely, and then the kernel would pick a handle itself, a more or less random one, and we would have to look it up. We will need the handle later to attach a filter, so if we did not specify the handle, the kernel would pick one and we would have to look it up and use it. prio, that's the name of the qdisc we're going to attach. Now, prio is a classful qdisc; it comes with three classes already when we add it. The reason why we are replacing pfifo_fast with prio is that we need a classful qdisc, because filters do not work on classless qdiscs. 'Do not work' means: you can attach them, all right, no error, it will succeed; it just won't work. Similarly, let's attach an ingress qdisc. As you see, the command line is much simpler; for example, we are not specifying any handle. That's because for the ingress qdisc, the handle is always ffff:. You cannot even change that. Now, ingress is a special kind of qdisc that can be attached only to ingress, and at the same time, you cannot attach any other qdisc than this one to ingress. It's classless, but filters do work with ingress; this is really special. This is what it looks like now: this is our prio qdisc, this is its handle, and this is the ingress qdisc with its handle. Let's add a filter. So, tc filter add, obviously, and we need to specify a device. Now, what is this part? This is the qdisc that we are attaching the filter to: parent 100:. So this is our egress qdisc.
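The slide commands aren't preserved in the transcript, but the setup described so far would look something like this (eth0 and the 100: handle come from the talk; treat this as a sketch):

```shell
# Show the current qdisc on eth0; on an unconfigured interface
# this prints the default, pfifo_fast
tc qdisc show dev eth0

# Replace the default with a classful prio qdisc at the egress root,
# with an explicit handle so filters can reference it later
tc qdisc add dev eth0 root handle 100: prio

# Attach the special ingress qdisc; its handle is always ffff:
tc qdisc add dev eth0 ingress
```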
The filter that we are attaching is called matchall. This is a new filter, and the matchall filter, obviously, matches all packets. Wait a moment. I said we're dropping all IPv6 traffic, not all packets. Now, the action is specified last, and in this case it's just a simple drop. So we're dropping all packets; matchall matches everything. Why are we dropping only IPv6? The reason is this part. Notice one important thing: this is specified before the filter name. That's because this is not a property of the filter itself; this is a thing that is built into, a property of, the tc filter framework. When a packet enters the qdisc, and there are filters attached to the qdisc, then tc looks at those filters, and before it executes a filter, it decides whether the filter should be used or not based on the protocol. So what is written here means that only IPv6 packets should be passed to the matchall filter. The protocol may be omitted; in that case, packets of all protocols would go to the filter. At least that's the theory; in practice it's sometimes a bit more difficult. To reiterate the important things: filters are per interface and per qdisc, so we can attach filters to different interfaces, have different qdiscs and different filters. They are also per protocol, and, as we'll see a bit later, they also have priorities. If we look at what this looks like, we can list the configured filters on the device, and we see this thing. This is the priority of the filter, and it was assigned by the kernel automatically. We don't like this number; I do not. So let's change it. Let's delete the filter we just added. You see that we used the priority; that means delete all filters with this priority. And let's add exactly the same one, but this time we specify the priority as 50, for example.
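A hedged reconstruction of the matchall example above; the auto-assigned priority 49152 is a typical value for the kernel to pick, but the actual number isn't preserved in the transcript:

```shell
# Drop all IPv6 traffic on the egress side: the protocol selector
# runs in the filter framework, before matchall itself is consulted
tc filter add dev eth0 parent 100: protocol ipv6 matchall action drop

# List the filters to see the kernel-assigned priority
tc filter show dev eth0 parent 100:

# Delete by priority, then re-add with an explicit priority of 50
tc filter del dev eth0 parent 100: pref 49152
tc filter add dev eth0 parent 100: pref 50 protocol ipv6 matchall action drop
```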
If we add multiple filters to a single qdisc, those filters are evaluated in the order of their priority, with priority zero being the highest one. Now, what filters do we have? The basic filter is called basic. It can match on packet metadata or data. So, metadata first. What is the metadata of a packet? For example, the packet length, or whether it's broadcast or multicast, some kinds of marks, and so on. For example, this filter allows us to drop all packets that are longer than 500 bytes. It's quite useless, but it's just an example. So this is tc filter add; you may notice that I don't specify the parent here, I don't say which qdisc it should be attached to. If I don't do that, it goes to the egress root qdisc, so I'm fine here. The filter name is basic, and I also did not specify the protocol, because I want all protocols to be processed by this filter. Then basic match: those are the arguments to this filter. Match, metadata, and then: packet length is greater than 500. If it matches, drop the packet. It can do a bit crazier stuff, like using this parameter, random, which generates a random number. I mask out all bits except the first one, so I just take the lowest bit of the random number, and if it's equal to zero, then drop the packet. So this drops packets with 50% probability. Not that interesting for drop, but we may do different stuff with such packets, so why not? Another interesting thing that I like to show is that this can interact with the world outside of tc. For example, we can use iptables to add a mark to a packet. Each packet carries, or can carry, a mark, and iptables can set it. So I add a rule to iptables to mark packets to a certain destination with mark 1, and then in tc, I can look at the mark, and if it's equal to one, do something — drop the packet. There's other stuff the basic filter can match on, like the system load average, the network interface, broadcast, multicast, whatever. There's a man page.
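The basic-filter examples might be written like this; the exact ematch keywords are my reconstruction from tc-ematch(8), and the destination address in the iptables rule is made up:

```shell
# Drop packets longer than 500 bytes (egress root, all protocols)
tc filter add dev eth0 basic match 'meta(pkt_len gt 500)' action drop

# Drop packets with 50% probability: mask out all but the lowest
# bit of a random number and match when it is zero
tc filter add dev eth0 basic match 'meta(random mask 0x1 eq 0)' action drop

# Cooperate with iptables: mark packets to a (made-up) destination,
# then match the mark in tc
iptables -t mangle -A OUTPUT -d 192.0.2.1 -j MARK --set-mark 1
tc filter add dev eth0 basic match 'meta(nf_mark eq 1)' action drop
```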
I said that basic can also match on data. This is where it starts to get interesting, because now I could in theory match on some headers, like the IP header and so on. The basic filter is not that useful for this, but we can, for example, look up strings. This rule looks at offset 100, counted from the transport layer header, and checks whether the three bytes at that offset are the letters X, Y, and Z, for example. Another filter, and that is probably the filter, is called flower. Flower allows matching on protocol headers, or fields in protocol headers. So, for example, I can write a rule that will drop all HTTP traffic. We're adding a filter; note this, protocol ip, more on that a bit later. The filter is flower, and I specify that ip_proto, the protocol in the IP header, must be TCP, and that dst_port, which for TCP means the TCP destination port, is 80. If it is, the packet is dropped. Of course, this covers only IPv4 traffic, so if I wanted to also drop IPv6 HTTP traffic, I would have to add another filter that matches on IPv6. So this is easy; this is really easy to set up and configure. Flower can match on most header fields: MAC addresses, VLAN tags, whatever, IP addresses, ports, even some higher protocols, and new stuff is constantly being added, so it's quite likely that in another month there will be more fields that flower will be able to match on. The IP addresses and MAC addresses can be masked, so we can match on just a part of the address, or a prefix of the address. And for efficiency, and this is quite interesting, with flower we can add more filters with the same priority. What happens then? For example, the most interesting case would be: we try to match on IP addresses, and based on the IP address we want to do different actions. If we add all those filters with the same priority, flower will put them into a hash table, and the IP address will be matched only once.
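The HTTP-dropping flower rules described above, as a sketch:

```shell
# Drop IPv4 HTTP traffic: ip_proto is the protocol inside the IP
# header, dst_port the TCP destination port
tc filter add dev eth0 protocol ip flower ip_proto tcp dst_port 80 action drop

# The rule above covers IPv4 only; IPv6 HTTP needs its own filter
tc filter add dev eth0 protocol ipv6 flower ip_proto tcp dst_port 80 action drop
```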
If we add the filters with different priorities instead, those filters will be tried in order, and the more IP addresses we had, the more time it would take to process the filters. u32, another example of a filter: it's called the universal 32-bit filter, and it got a nickname, the ugly 32-bit filter. It can do similar stuff to flower, at least it would seem so at first sight. I can match on an IP address, a source address. The problem is that u32 internally really translates to offsets: it looks at a particular offset in the packet, and that's it. It does not understand protocols. So if you match on a TCP port and the packet is not TCP, well, it matches a random value in the header. Or if you have a packet that has IP options, so the IP header is longer than expected, then even though the packet is TCP, u32 will match just a random value. It also has quite a complex syntax, it contains some weird hash tables that are chained; look at the man page. It's too complex for mere mortals to understand, so use flower. We have a few more filters; I will just skip those. The slides will be available, you can look them up: route, bpf, whatever. Actions, that's more interesting. The simplest action is gact; it's an action that does nothing. That sounds weird, right? So, after we specify an action, we can add one more keyword, which instructs tc what to do next — and by next I mean next when the filter matches. When the filter matches, the actions are performed, and then there is this instruction which tells tc what to do next. The default for most filters is pass: that means we're finished, no more filters, no more filtering, just put the packet into a class or out on the wire. Another operation is drop, which means drop the packet, don't bother anymore. Reclassify means repeat the filtering: run the filters again. This is interesting in case we changed the packet in an action.
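For comparison, a u32 rule matching a source address might look like this (the address is an example); there's no protocol awareness here, only an offset-and-mask compare:

```shell
# u32: match IPv4 source 192.0.2.0/24 and drop; internally this is
# just "compare the 32 bits at a fixed offset against a masked value"
tc filter add dev eth0 protocol ip u32 match ip src 192.0.2.0/24 action drop
```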
So we modified the headers and want to run the filters again, because the headers obviously changed and now different filters will match. Continue means continue with the next filter, so try to match more. And pipe, that's interesting: we can actually have more actions for a single filter, so we can specify more actions, and pipe means try the next action. So that's gact, which I call the empty action, because it does nothing by itself: action gact drop. The gact keyword can be omitted, so action drop is enough. Police: you may know this action if you did any shaping or policing with QoS. Police matches packets based on rate, based on the traffic, and then you can specify these two operations here, conform-exceed: this one is executed when the rate limit is exceeded, and this one when the traffic is within the limit. So yeah, let's go to the next one. mirred: this is an action that can copy or redirect the packet to another interface. It looks like this: mirred is the name of the action, then we specify either ingress or egress. That means, when the packet is redirected to another interface, whether it should act like it was received by that interface, or whether it should be sent out of that interface. And then: redirect means move it to the other interface, mirror means copy the packet, so split it into two. This is interesting for port mirroring, which, with a good filter, we can do selectively: we can take just some of the traffic, like certain ports or a specific destination or whatever we want, and copy only those packets to an interface for mirroring, for example. Now let's look at something more complex: how can we match with two filters? The first thing to remember is the picture from, I don't know, half an hour ago, with the classful qdisc which had multiple classes. Let's add one. We add the prio qdisc and specify that we want four classes. This part is a bit of a hack and I won't go into details.
Let's just say we have four classes and we will use the fourth one for our purposes. We attach a new qdisc to the fourth class, and this is the second command. This is the handle of the qdisc attached to the root; each class in this qdisc is numbered 100:1, 100:2 and so on, so the fourth class is 100:4, and to this class we add a qdisc. So instead of root, here we specify parent and the class identifier. This is the handle we give to this new qdisc, and we will use it on the next slide. Why do we need this? We need this because we will want to attach a filter to that class, to the fourth class, but we cannot attach a filter to a class. Filters are always attached to qdiscs. So we add a qdisc to the class and we're good. This is what we will do: we add a filter to the root qdisc, this is the handle 100:, for protocol ipv6, which matches on a certain IPv6 address, and instead of an action, we do what I showed you at the beginning: we put the packet into this class. This means that the packet will now appear on qdisc 101:, on this one. So we put it into the class and it appears on that qdisc. And now we attach a new filter, a different filter, to that qdisc: flower, something, whatever — I'm checking whether the packet is multicast here — and do an action, like mirred redirect to somewhere else. As I said, the important thing to remember: filters are always attached to qdiscs. So this is a qdisc; I cannot put 100:4 here, that wouldn't work. Now back to some more actions. I will go through them quickly because they are not that interesting; you can look them up online if you want to. pedit can edit packet data. For example, I can edit the destination port and set it to a certain number. Again, this is internally translated to a packet offset, so in fact it is doing something like this: change the data at offset something to some value.
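The two-level setup described above could be sketched as follows; the IPv6 address and the second filter's match are invented for illustration, and a prio qdisc with a non-default number of bands also needs an explicit priomap, which is the hack the talk alludes to:

```shell
# prio qdisc with four bands at the root; with bands other than 3,
# a full 16-entry priomap must be given
tc qdisc add dev eth0 root handle 100: prio bands 4 \
    priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

# Filters attach to qdiscs, never to classes, so put another
# qdisc on the fourth class (100:4)
tc qdisc add dev eth0 parent 100:4 handle 101: prio

# First filter, on the root qdisc: steer one IPv6 address into 100:4
tc filter add dev eth0 parent 100: protocol ipv6 \
    flower dst_ip 2001:db8::1 classid 100:4

# Second filter, on qdisc 101:, does the actual action
tc filter add dev eth0 parent 101: matchall \
    action mirred egress redirect dev eth1
```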
So if you want to use this, be sure that the packet really is the correct one you want to edit, of course by using an appropriate filter. This is another example of pedit, just changing a 2-byte value at offset 200 to this. Now, pedit does not care about checksums. It just changes the data and that's it. So if we change something, we need to fix up the checksums before the packet is sent out, and that's what the csum action is for. Here you also see an example of the pipe operation: we add two actions to this filter. We have a filter matching all TCP traffic, and the action is: change the destination port to 22. And then pipe — I said after each action we can add one of those keywords, so we have pipe here, which means continue with the next action — and then the next action, which fixes the TCP checksum. skbedit is another action with a similar name to pedit, and it changes packet metadata. For example, this one sets the firewall mark. If you remember the example where we had the basic filter matching on the mark set by iptables: we can set the mark in tc as well, we don't need iptables for that. So that's one example of what skbedit can do. It can also set the packet type. When a packet is received from the network, the kernel stack looks at whether the packet is really destined for us or for a different computer, and this is kept in metadata, called host or other host. You can change that. So if you do something really crazy with a packet, it can happen that the network stack thinks the packet is not destined for the local machine but should be forwarded somewhere, and you can fix that up with this. That's really out of scope of this presentation. Or you can set the hardware queue mapping: if your hardware supports multiple queues, you can use skbedit to specify which exact queue the packet should go to. That's really advanced stuff.
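The pedit/csum pipeline and the skbedit example might look like this; pedit's extended ('ex') header syntax needs a reasonably recent kernel and iproute2, so take this as a sketch:

```shell
# Rewrite the TCP destination port to 22, then fix the checksum;
# 'pipe' chains the second action after the first
tc filter add dev eth0 protocol ip flower ip_proto tcp \
    action pedit ex munge tcp dport set 22 pipe \
    action csum tcp

# skbedit: set the firewall mark from tc, no iptables needed
tc filter add dev eth0 protocol ip matchall action skbedit mark 1
```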
The simple action allows you to log something to the kernel log. This is useful if you have a really complex set of rules, filters and actions and you got completely lost and it doesn't work. If you want to find out whether you reached a certain point in the evaluation, whether you reached a certain filter and it matched or not, use the simple action. It just sends a message to the system log. The last action I will mention is vlan, and this action can add, remove or change VLAN tags. This example actually makes the eth0 interface behave as a VLAN port with tag 5: every packet that's sent to eth0 is tagged with VLAN 5, a VLAN header with tag 5 is added. As you see, we're attaching a filter to the egress qdisc matching all packets and all protocols, and we're also adding to the ingress qdisc a filter that matches on the 802.1Q protocol, and if it's VLAN 5, we remove the VLAN header. Of course, in reality we would probably want to do something with traffic that's not tagged or has a different VLAN ID, but this is just a simple example. This particular example is interesting because with this you can actually separate different traffic, like for different hosts or for different TCP ports or whatever, into separate VLANs, which you cannot do any other way. I promised some magic: tunneling. Tunneling is another thing you can do with tc. As I said, you can encapsulate traffic. Let's go through it quickly, because I think most of you don't care about tunneling at all, which is a good thing. We're adding here a VXLAN interface and setting up an IP address on it. This keyword is important: it says that the tunnel should not have the destination address of the tunneled traffic specified in the interface configuration; instead, it will be specified per packet, so each packet can go to a different host when tunneled. This can be done with tc. So let's first add our well-known qdisc to the vxlan0 interface and add filters.
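The VLAN access-port example above could look roughly like this (it assumes a classful qdisc at the root and the ingress qdisc from earlier are already in place):

```shell
# Egress: tag everything leaving eth0 with VLAN 5
tc filter add dev eth0 matchall action vlan push id 5

# Ingress: strip the tag from incoming VLAN 5 traffic
tc filter add dev eth0 parent ffff: protocol 802.1Q \
    flower vlan_id 5 action vlan pop
```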
We add an egress filter matching all packets, which sets the tunnel key to these values. This attaches metadata to the packet, which tells the VXLAN interface where the tunneled packet should go. On ingress, it's similar: we again add a filter to ingress that matches on protocol ip, for example, here. It uses flower to look at the tunnel headers, because when a packet arrives at the vxlan0 interface, the tunnel header is stripped but remembered, and we can match on those fields using flower. What's this good for? I think that's a very valid question. I mentioned network monitoring: we can separate particular traffic, send it, or mirror it, to some interface, and probe the network watching only the traffic that we care about. We can also use this for programmable switches. We can match on any traffic and direct it to any interface; that's a switch, right? A programmable switch: we're matching on flows and directing them to interfaces. So we can use just tc, if we're brave enough, or there's a project underway to make a backend for Open vSwitch that would use tc. Anything you can think of. Now, tc has some problems. Let's talk about them, because they are quite serious, actually. The first is error reporting. This is probably the only error report you've ever seen: if there is anything wrong, which is really easy to do, you will see this error message. Something is wrong. It's really a one-bit error message. This sucks. We'll try to improve it, hopefully, in the future. Documentation: well, some time ago it was pretty much non-existent. I think, thanks mostly to Phil Sutter, it has tremendously improved, and many people are working on it, so the documentation is improving; there are now man pages for tc, finally. Now, all those qdiscs, the hierarchy with trees and filters and ingress and egress qdiscs, blah, blah, blah, it's quite complex. So we're missing a tool that would just show it to you in some way that you can make sense of.
The tc qdisc show and tc filter show output is just unusable for humans. And there are also some things that really make the implementation of, for example, an Open vSwitch backend difficult. The filters are per interface, per qdisc, so you cannot have one filter and attach it to multiple interfaces, which would be useful. It cannot be done now, but I think this can be extended, and I hope to add this in the future, and so on. So that's basically it. This thing, although it's old, is still under development, so there is new interesting stuff coming. We have hardware offloading now for some combinations of filters plus actions; this works with some networking cards, mostly for flower and u32, and we're working on more stuff, like adding cookies, storing our own values in the kernel, and so on and so on. So thank you, and we have about four minutes for questions. Maybe ten. Okay, so I was too fast. But yeah, questions. — Yeah, so the question was: what is the performance overhead of filtering and actions? The answer is, it depends. Of course, it depends on the qdiscs, on the queues, because currently most qdiscs, especially the egress qdiscs, have a queue: it's a queuing discipline, after all. So with the tree, you actually add a queue at each point of the tree, which sucks for performance, of course. But this can be improved, I think; this is the last item I mentioned here. We would have to have a qdisc that does not add the packets to any queue, but can have classes; you would just use filtering to direct packets to classes, and the next filters would immediately trigger, and so on. The performance of this would actually be quite comparable to other solutions, like Open vSwitch and others, with this in place. But it is not in place yet. Next question, in the back. — I didn't get the question, sorry.
Yeah, so the question was why the filters don't work on classless qdiscs but work on the ingress qdisc. The thing is that the filtering, the evaluation of filters, is something that's implemented in the qdisc. In the kernel, a qdisc is a kernel module, and when a packet is enqueued, it's the responsibility of the qdisc, of that module, to call the filters. And the classless qdiscs just don't do that. Why don't they do that? It's more difficult than that; I'm simplifying a bit. There are interesting problems, like what would happen if you add a filter which specifies a target class, and it's attached to a qdisc that is classless, for example. Of course, it's solvable, but yeah, I guess this is the reason why it doesn't work. Why it doesn't error out — that's a different question, and yeah, it should. The ingress qdisc does implement this; it calls the filters. Yeah? There was a question here. — No, the filtering is done by the filters. I presented several filters; I can skip back to those. For example, the flower filter that we mentioned: flower, again, is a kernel module, and it does the filtering. You specify the filter here — tc filter add, and this is the name of the filter — and this is the filter that is used. So it's a kernel module, it's C code that does the filtering. — Yeah, so the question was how iptables rules compare to tc rules, in performance and in what they can do. Those are just different things; they are at different points of the packet processing in the kernel. iptables has different hooks in the packet processing pipeline in the kernel, so you can do different stuff with iptables and with tc. Of course, there's some overlap; there are things you can do with either one, and then it's just up to you. As for performance, I would say it really depends on the use case. Yeah?
Say again, please? Yeah. So this is not an action, it's actually a qdisc, am I right? I think netem is a qdisc. There is a qdisc called netem, the network emulator, and this is a qdisc that can do that: it can introduce artificial latencies, or drop packets randomly, and stuff like that. So yeah, that exists, but it's not an action, it's a qdisc. We could, of course, implement an action to drop packets randomly — I actually showed something like that before — but for latencies you really need a qdisc, because you need to enqueue the packets and hold them for some time, which you cannot do in an action. Next question. — I think several of them; right now it's Mellanox, the Mellanox mlx5 drivers can do flower offloading for some actions — I think it's drop, VLAN re-tagging, and maybe mirred. Intel: some Intel cards support u32 offloading, with some actions; I don't remember off the top of my head which actions, again only some of them. Broadcom supports some kind of tc offloading, bnx2x, but I don't remember whether it's — I think it's flower too, but I'm not sure. And it's increasing. So yeah, there are cards out there that support this, and it's increasing. The support is still basic only: only basic actions are supported — not the filter called basic, I mean only some actions are supported. I guess this will change in the future and it will be more powerful. — Yeah. So the question was whether there is duplication here, whether it would be better to just extend netfilter, nftables, to support actions. The answer is: it cannot be done, this is really different stuff. iptables sits at different stages of the packet processing. And tc is actually quite old; this infrastructure is nothing new, it has been in the kernel for I don't know how many years, or tens of years. So it's old, and what we're doing now is just adding more features to it.
So yeah, there's some duplication. I don't see much duplication between netfilter and tc, but there is duplication between tc and Open vSwitch, for example. And the answer is: yeah, there is duplication, and it shouldn't be there, I agree. But netfilter does different things. So, one last question. — No, the prio qdisc always has at least three bands, if I remember correctly. It has some hard-coded classes, and it also puts the traffic into those classes based on some classification it does internally. It's configurable, but you cannot tell it to not put traffic into classes at all. "I'll do that manually" — that's not possible; there's no qdisc that can do that currently. So, thank you.