Testing, check, check, check, mic check. I didn't think we would have this many people. Thank you for coming. I think some of you are here to check your email; that's fine, I understand, I do that too. So welcome, everyone, and thanks for coming. I'd like to welcome two more Red Hatters, Rashid Khan and Hannes Sowa, who are going to answer the "what's the big deal about networking" question for us. Rashid, go ahead.

Thank you, sir. So welcome, everybody. Thanks for coming, I really appreciate it. I know there are a lot of good talks happening right now, so I'm glad that you're here.

These are not made-up numbers. There's a site called Internet Live Stats. Whenever we give numbers to engineers, they try to find out how bogus they are; they're probably a little bit off, and I don't know how the site computes them, but you're welcome to check them out and see if they're real. I found them interesting, and that's why I put them up while we were waiting.

Anyhow, what's the big deal about networking? Before I even begin: on purpose, I did not put my name on these slides. All the work I'm going to show was done by a very stellar team of engineers, part of the networking services team and part of the performance team. I honestly do not deserve credit for any of this; I just drew some rectangles on some slides. All the good work is theirs, seriously, I'm not joking. I'm just here for my dashing good looks.

Okay, so networking. We have a lot of cloud computing, but the cloud needs to be tethered to the ground, because the packets have to go somewhere. I tell people that the cloud is great by itself, but the cloud also needs a wire, and that wire, that transport, is what we work on.

That's the live stats again. When I took the snapshot, it was 1.6 billion gigabytes; notice the unit, GB, gigabytes. That's how much data had been transferred over the World Wide Web that day, and it was only the middle of the day, US time. That's a very large number. Let that sink in a little bit.

So if you dislike the spinning wheel that says "loading", or if you hate waiting more than a second when you swipe through a picture on Facebook or Twitter, or if you like Facebook, YouTube, Snapchat, Uber, WhatsApp, et cetera: you're welcome. Again, it's not me taking the credit. It's all the silent players in the background, working on networking, which is quite complicated, making all of that happen. We have started to take it for granted that we swipe a finger left or right and a picture appears, which is many megabytes, but there are billions and billions of packets flowing in the background, and a silent army of engineers of all kinds taking care of it for you.

So networking is getting more and more complex. It started with kernel networking; then we put some stuff in user space and let the user-space folks play with it, but we controlled it all. Back then we knew TCP and UDP; since then there are new protocols: MACsec, service function chaining, bonding, bridges. Jiri Pirko is asking, where is team? Yeah, there you go: the team driver, see?
So all kinds of different stuff: Open vSwitch, megaflows, DNS, you name it. In the other talk we covered some of them; Dan Winship brought up Geneve. If I named it all, the slide would be full and you would be bored. The point is that in both user-space networking and kernel-space networking there are just noodles and noodles of stuff. It is literally my full-time job to keep track of it, and I cannot anymore. Seriously, I cannot. So much happens every few months; it's an unbelievable amount of change. In one of the other talks I heard that networking has changed more in the last couple of years than it had in the previous 26 years, and I absolutely believe it.

So, again, to illustrate the point about complexity, this is an OpenStack instance within one host. You're saying: Rashid, I can't read this, this is like an eye doctor's chart. Yes, absolutely, it's by design. And this is the zoomed-in view of it. There are bonds, VLANs, bridges, OVS tunnels, et cetera; so much stuff in just one node. This whole thing is an actual customer deployment of OpenStack within a single node, and it practically needs a PhD just to set up. I do encourage people to go for their PhDs, definitely in networking, but at the same time we are trying to simplify this for you.

Another thing that is happening: complexity is one aspect, and I'll talk more about that, but how many people remember the dial-up modem? Wow, I thought only a few old geezers like me would raise their hands; you guys are millennials. Yeah, there's the sound; we all hated it, right? It still makes the hair on the back of my neck stand up. So why did I put this up? Why am I unearthing ancient history? Quite simply, because the dial-up modem changed the telephony industry. Before the dial-up modem, the telephone system infrastructure was designed for a two-and-a-half-minute call. That's it. But when dial-up modems came along, people were going online and staying online for hours, and the systems were crashing; they couldn't keep up. The telephone system was not designed for that; it was choking. So the industry came up with this thing called internet offload. I was one of the lucky engineers who came out of college when this was happening, so I was part of that revolution. We were part of a startup, made some good money, and we all remember those good days. And then voice over IP happened, et cetera.

So the old model was proprietary hardware and proprietary software. I was part of a big giant German company, not Volkswagen, and we built a 320-million-euro gateway, completely proprietary, to take care of voice over IP, internet offload, et cetera. That's the amount of money it took to do internet offloading back then. Nobody can afford to spend 320 million euros on telephony anymore; all the services are getting cheaper and cheaper. So what is the new trend? A new revolution is happening: commodity hardware, Dell servers, HP servers, whatever kind of servers, and open source software.
Because there's no way to compete on features anymore; there are no features that help the network equipment providers make money. Everything is about reducing cost. So all the cloud stuff we talked about needs three things: CPU, networking, and storage. Tom is sitting here, and he'll say that storage should be the middle piece, or if Linda were here she would say CPU should be the middle piece. But of course I made this slide, so networking is the middle leg.

On top of that stable base there is software layer upon software layer upon software layer, and the complexity keeps increasing. So the poor CPU, which is this tiny truck, has all that stuff piled on top of it. It's overloaded, the axle is breaking, it's falling apart. We have to have a solution for this, and we'll talk about the solutions a little later. I just want you to realize that open source software is one thing, but the layers of software needed to go all the way to the cloud are just mushrooming.

So, a sports car. Changing track a little bit; I know I'm jumping, and time is running short. If somebody were to design a sports car, what do we need? It needs buzz, it needs to be sexy, fast, fun to drive, and cool. Boom, there's your sports car, a Porsche. Excellent. By the way, this slide came up out of order: the sports car is DPDK. It's fast, it has buzz, it does a lot. It gets the packets to user space super fast, but it does not do a whole bunch of other stuff.

But then somebody like me gets it, with two children, ski gear, golf clubs, children's backpacks, et cetera, and it has to work every day. Now my wife says: it should carry the luggage, okay; it should have four doors, fine; it should be easy to get in and out of; it should work in all seasons and on dirt roads, snow, and sand; and it should have low emissions and good fuel efficiency, okay, fine. So what happens? You end up with the sedan. It does a whole bunch of stuff, might not be as cool, might have other problems. But that, sorry to say, is our old kernel networking, the netdev stack we all know. New stuff is coming, but guess what: the sedan is still there, kernel networking is still very relevant, and we'll talk about that.

So kernel networking is on the left-hand side of the slide. From the IP cloud, the packet goes to the NIC driver, then through kernel networking, crosses the user-space boundary, goes to Open vSwitch, then to vhost-net and QEMU, and on to the different virtual machines or containers. The picture stays pretty much the same either way. These pictures are simplified for a reason, just to illustrate the point; I know there's much more complexity to this, but I simplified it to show a simple example.

The middle column is DPDK plus vhost-user. That's the other model: DPDK takes the packet directly from the NIC and shunts it up to user space, and guess what, everything else happens in user space. Boom. No packets go through kernel networking at all. No problem; that's a legitimate choice.
The third option is device assignment, SR-IOV. Dan mentioned it in his talk, and there are other talks about it: been there, done that, direct assignment, packets going straight into the virtual machines.

So why should a user pick one or the other? There are pros and cons, as usual. On the kernel-networking side you have all the pros: feature-rich, 26 years of development, open source, everything you can imagine is there. On the DPDK side, packets fly directly to user space and user space takes all the control, but there are still cons: limited offloading support, no kernel TCP stack, and things like live migration are still work in progress. Device assignment has its pros and cons too. You can read the list offline; I'm not going to bore you with line-by-line items, but there are pros and cons to everything, as in life.

The good thing about the Red Hat solution, the way I view it, is that we offer a buffet. You go to a restaurant with a buffet and you pick and choose what you want. All three options are available for our customers and partners to use, and we will support them till the cows come home, as they say, day in and day out. DPDK is fully integrated, DPDK plus OVS is fully integrated, OVS without DPDK is integrated, OVS with kernel networking: all of this is available.

Now, coming back to the pros and cons. Ultimately you'll ask: Rashid, what is the packet rate? What is the throughput? Okay, fine. Our stellar performance team has been doing a phenomenal job getting us the results we need. I'm only going to compare two of the options, because the chart was getting too complicated with SR-IOV in there, but we have that data as well. Before I start: this is with 64-byte frames, and the theoretical limit on a 10-gig link is 14.88 million packets per second, just to give you a reference. I put the theoretical limit at the bottom of every slide.

For these ultra-small 64-byte packets, on the kernel side with layer-3 routing, with the recent work that went into RHEL 7.2 and upstream (Alexander Duyck did a tremendous job on it), we are at almost the theoretical limit: 14.12 versus 14.88, no problem. Layer-3 routing: solved. Then we move up to OVS and we are at 9.05, and then crossing that boundary into vhost-net we are really dismal; the performance drops tremendously. No problem, we are working on it; it's not the end of the world. We talked about it yesterday; we have plans to solve it, hopefully. (This mic is really good, man. Note to self: be careful.)

On the DPDK side, all the way to OVS, it reaches the theoretical limit, no problem. When you go into the virtual machines, performance again drops, to 4.21. That's a tremendous loss, but hold that thought; I will come back to it.

Now, some people might argue that 64-byte frames are not realistic and that 256 bytes would be fairer. Okay, fine, no problem, we have that data as well, and now things start to look better. Why do they look better? Because with tiny frames there is a lot of per-packet overhead: there's no batching, no amortization, many different reasons.
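As a sanity check on those reference numbers at the bottom of the slides: on Ethernet, every frame also carries 20 bytes of fixed on-wire overhead (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte inter-frame gap), which is where the theoretical limits come from. A minimal sketch of the arithmetic:

```python
# Theoretical line rate for a 10 Gbit/s link: every frame occupies
# its own bytes plus 20 bytes of fixed overhead on the wire
# (7B preamble + 1B start-of-frame delimiter + 12B inter-frame gap).
LINK_BPS = 10e9
OVERHEAD = 7 + 1 + 12  # bytes of wire overhead per frame

def max_pps(frame_bytes: int) -> float:
    """Maximum packets per second for a given frame size."""
    bits_per_frame = (frame_bytes + OVERHEAD) * 8
    return LINK_BPS / bits_per_frame

for size in (64, 256, 512, 1500):
    print(f"{size:>5}B frames: {max_pps(size) / 1e6:.2f} Mpps")

# Output:
#    64B frames: 14.88 Mpps   (the limit quoted on the slides)
#   256B frames:  4.53 Mpps
#   512B frames:  2.35 Mpps
#  1500B frames:  0.82 Mpps
```

That is why small frames are so brutal: at 64 bytes a large fraction of the wire is per-packet overhead and the packet rate is enormous, while at MTU size the packet rate drops low enough for software to keep up.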
So with 256-byte packets the data looks okay, but that 0.77 number is still pretty bad. The theoretical limit at this packet size is 4.53. Both with and without DPDK we are fine all the way to Open vSwitch, but at the vhost boundary, going into the virtual machines, there is still a gap; the 256-byte numbers are better, and we are working on it, no problem. Then we go to 512 bytes and the numbers get better again; the gap is closing, with or without DPDK, and things look good. Then I'll rush forward to MTU size, 1500 bytes, which is more typical of a realistic environment, and now the numbers are very, very similar. Again, I wrote the theoretical limit at the bottom, and at MTU size it all makes sense: all the numbers are at the theoretical limit. So job well done, let's go home? No, no, it's not done; we still have a lot of work to do.

So what are we doing? Going back to this slide for a second: see that 4.21 number? Why is the performance so bad? Why is there a tremendous loss when we cross the vhost-user boundary? We can explain that. It's because with vhost-user all those packets were going through a single queue. There may be many idle CPUs, but that one CPU is pegged all the way to the sky doing all the crunching, and that is the max it can do on an x86 server.

So how do we solve this? You'll say: Rashid, very easy, use multiple queues. Yeah, voila, there you go: vhost-user with multiple queues, so that many different CPUs can share the number crunching, the packet pushing. No big deal, we can do that, and now we are again close to the theoretical limit (see the sketch after this paragraph). And we are working with our partners (VMware, Intel, and others) to push this number even higher. We are constantly trying to improve the performance of the system; it's part of our mantra, part of our charter. We lose sleep at night trying to push packets faster to virtual machines, OpenStack, you name it, all the way to containers. That's how we earn our bread and butter these days.

Okay, coming back to this for a second. You'll say: Rashid, that's still not great, because if I'm a big bank and I bought a gigantic system to write the next killer app, the next Facebook or Google, and all of a sudden you are taking four of my cores just for pushing packets to virtual machines, that's not fair. If I have an eight-core system and you take four of them, what am I going to do with the rest? Where am I going to run my app, my Java, my JavaScript, all of that? Yeah, that's a problem, and it's an understandable one: that much CPU power just for packet pushing.

So here is our solution; this is what we are working on. One of our friends, Jiri Pirko, came up with this concept called switchdev, and he has been working on it for a while. What it does is enable hardware offloading: instead of all the work being done on the main CPU, we offload it to the NICs, which will help a lot. And we are working with our partners; some partners already have working solutions.
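To make the multiqueue fix above concrete: with virtio-net multiqueue the guest can spread packet processing across several queue pairs instead of pegging one core. A minimal sketch of checking and enabling the channels from inside a guest (the interface name and queue count are hypothetical, and the host side must have been configured to offer at least that many queues):

```python
import subprocess

IFACE = "eth0"   # hypothetical guest interface name
QUEUES = 4       # hypothetical; must not exceed what the host offers

# Show the channel (queue) layout the virtio-net device exposes.
subprocess.run(["ethtool", "-l", IFACE], check=True)

# Spread RX/TX processing across several combined channels so more
# than one vCPU can push packets, instead of pegging a single core.
subprocess.run(["ethtool", "-L", IFACE, "combined", str(QUEUES)],
               check=True)
```

On the host side this corresponds to creating the vhost-user or tap backend with a matching queues= setting, so that QEMU exposes the same number of queue pairs to the guest.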
We are going to start working with them to make this happen; they have it working, and now we at Red Hat are going to work with them to make it part of our distribution. This is already in the works. Hopefully, when I go back to Westford, patches will be available and we can run with them. And Jiri Pirko is sitting there smiling, saying, no way, man.

So: nothing but net. It's a basketball reference, for people who might not know it. Nothing but net means you threw the basketball and it went in with a swish, without touching the rim; straight in, two points. So, nothing but net: we are NFV ready. NFV is the big push; there are supposedly trillions and trillions of dollars, gazillions of dollars, to be made in that space, and it is the big push for OpenStack. Some people are saying every phone call is going to be a container; some people are saying every app, everything, is going to be in a virtual machine. There are all kinds of grandiose ideas. Fine, no problem: we are NFV ready. We have the building blocks ready, and the packets are being pushed at theoretical limits; I already showed you that in the previous slides.

Some people are skeptical: okay, OpenStack, it's big, it's bulky, what am I going to do with it, et cetera. But there are actual network operators coming to us and saying: we are going to use OpenStack, we are going to use RHEL, we are going to use DPDK, and these are the things we need from you. So it's not that we are pushing it on them; the NEPs, the network equipment providers, are coming to us and telling us what they need, and they are making a big deal out of it.

I have a quote from SK Telecom in South Korea. I used to work a lot with our Korean customers and partners in my previous life, and they are the most conservative people; they will never tell you their plans unless the thing is already done. There's a gentleman, their VP of NFV development or some such title, I forget; he's on YouTube, you can watch it yourself. He was saying that as part of their 5G plans and beyond, they will have software-defined networking, they will have NFV, with OpenStack as the centerpiece; that is what is being used for all their next-gen stuff. And if you consider that we are already in 2016 and they plan to deploy in 2020, that means the field trials are starting now. And yes, that is true: we have actual field trials starting with all of this stuff now, with OpenStack, with OpenShift, with containers in virtual machines, you name it.

So you'll say: hey Rashid, why did you talk so much about virtual machines and say nothing about containers? No problem, here's a slide for containers. There's a whole track of talks about containers and container networking. Dan Winship already did one; if you missed it, it was a very, very good talk that gives an overview, and I highly recommend you watch it. There's going to be another one by Rajat, sitting in the corner, later in this room. But I wanted to give you a simple view of the container side. The good thing, before I even begin, is that with containers we don't see these performance problems. Containers are designed with a very simple network: you pop one open and it comes with an IP address, 192.168-something, very simple. No problem. You have a container, voila.
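That simple per-container setup is essentially a network namespace plus a veth pair. Here is a minimal sketch of what a container runtime does under the hood, driving the standard iproute2 commands from Python (the names and the 192.168 address are made up for illustration, and it needs root):

```python
import subprocess

def sh(*args: str) -> None:
    """Run one iproute2 command, failing loudly on error."""
    subprocess.run(args, check=True)

# A "container" network reduced to its essentials: an isolated
# namespace holding one end of a veth pair.
sh("ip", "netns", "add", "demo")                        # the container's netns
sh("ip", "link", "add", "veth-host",
   "type", "veth", "peer", "name", "veth-demo")         # the virtual cable
sh("ip", "link", "set", "veth-demo", "netns", "demo")   # push one end inside
sh("ip", "-n", "demo", "addr", "add",
   "192.168.42.2/24", "dev", "veth-demo")               # the container's IP
sh("ip", "-n", "demo", "link", "set", "veth-demo", "up")
sh("ip", "link", "set", "veth-host", "up")
```

The interesting part, as Rashid says next, is what happens once that host-side veth end has to be wired into bridges, overlays, and the rest of the stack.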
The problem comes when that container has to talk to other stuff. On a laptop, popping open a container is no problem; you don't need anything, there's a tap device on the outside and it can talk to the World Wide Web, no issues. (Thanks, Hannes; I thought I had run out of time and that was a warning.) So with containers, that picture is much simpler. But that simple container networking, the 192.168 address you saw, still has to connect to the big giant ball of networking, the big Godzilla of networking, that has been built up over the last 26 years. And we have a solution: it does connect, and it does work. Host to host, node to node, pod to pod, namespaces, you name it; throw on all the layers of isolation, migration, multi-tenancy, and it all works. We have it working and we have it deployed; it's shipping live in OpenShift right now, and we have tested it with thousands and thousands of nodes already. It does scale; there don't seem to be any problems. That's pretty good. If you're interested in more container networking talks, please stay in this room, and we'll tell you more.

With that said, I'll hand it over to my esteemed colleague Hannes, who is going to tell you what we have in the pipeline, what we're working on, and how maybe you can help us.

Hello, my name is Hannes. I'm part of Rashid's team. A short introduction to what we are looking at this year, in the next few months, and maybe the next few years. Our daily bread-and-butter job is to take care of the networking stack, which basically means everything that pushes packets in and out. We need to fix stuff not only in RHEL but, very often, upstream. We also need to look ahead and care about what RHEL 8 will look like: even though we can't push everything into RHEL, into Red Hat Enterprise Linux, we need to make sure that at the point when we start working on RHEL 8, we have a networking stack we can build on. So we work with the upstream community to fix bugs early and as fast as possible.

On our agenda: we want to look more into security in the cloud. Sabrina on our team developed a MACsec implementation, which we are now pushing upstream. We will have to work further on it, for example to specify how crypto keys get negotiated when you use it in the cloud: not only on the Ethernet layer, which can be solved quite easily, but also in tunnels. There are proposals for VXLAN crypto, so you can encrypt the VXLAN tunnels themselves. People are also looking into IPsec offloading, so you can do the crypto processing on the network card: on receive, the card hands you a packet that is already decrypted and verified. The problem in Linux is that we have all those offloads, but sometimes it's hard to actually make use of them, because the crypto code wants to use the floating-point and vector units, and it's hard to use those in the receive path; they were really designed for use from user space. So we are looking into performance enhancements there too.

The big thing is always performance. I would divide performance into two categories.
There's software performance, where we look into how to improve the software itself. And then, if the software cannot keep up with what the network wants to do, there's the idea of pushing work into the hardware and letting the hardware do it, with the software providing the same configuration interface down to the hardware. For example, there is switchdev, which gives the Linux kernel a common, generic interface for configuring switches. It shouldn't matter which switch you use; you can use all kinds of switches transparently. And this can then be integrated further into the whole stack: for example, it should become possible for Open vSwitch to offload complete flows to the hardware, or for the bridge to pre-establish flows that go directly through the hardware and never touch the software stack. The problem is that in a lot of cases we need some kind of fallback path through software, and it's important that we synchronize the hardware fast path and the software slow path so that they don't break things. The hardware and software paths need to have the same properties and behave the same; it should not matter where a packet gets processed.

On the software side, we saw lots of TCP and socket enhancements last year, which we are now trying to get back into RHEL. One of the bigger projects was the removal of the TCP listener lock upstream, which gives kernels running heavy TCP servers a much better ability to scale to concurrently accepting many more connections; that was done mostly by Eric Dumazet at Google. Small patches also finally hit the tree: for example, there is now a fix for the thundering-herd problem in epoll, making sure that when a SYN packet arrives on a listening socket that several threads are polling, only one thread gets woken up instead of all of them. That helps a lot in many scenarios; it was contributed by Akamai. There are more small API enhancements that we try to get into the kernel, and later backport, to make more scalable user-space software possible.

The enhancements that were pushed last year we now need to actually make use of. Expect more work on bundling packets: instead of processing one packet at a time as it comes off the network card, we bundle packets and steer them through the networking stack together. That lets the CPUs make much better use of their instruction caches, and it gives us caching opportunities: for example, we can do one routing lookup, and if the next packet has the same destination and the same incoming interface, we can just reuse that routing lookup, and so on. We are trying more caching and memoization tricks there, and Jesper is also working on better memory-allocation strategies in the same direction: a per-CPU allocator that batches the memory allocations, so we process as many packets as possible on the same CPU and get better CPU cache utilization.
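Coming back to the epoll thundering-herd fix for a moment: the kernel flag behind it is EPOLLEXCLUSIVE (Linux 4.5), which Python also exposes. A minimal sketch, with a hypothetical port and worker count, of several workers sharing one listening socket where each incoming connection wakes only one of them:

```python
import os
import select
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("0.0.0.0", 8080))   # hypothetical port
sock.listen(128)
sock.setblocking(False)

for _ in range(3):             # a few worker processes
    if os.fork() == 0:
        ep = select.epoll()
        # Without EPOLLEXCLUSIVE, every worker polling this socket
        # would be woken for each incoming SYN; with it, just one.
        ep.register(sock.fileno(),
                    select.EPOLLIN | select.EPOLLEXCLUSIVE)
        while True:
            for _fd, _events in ep.poll():
                try:
                    conn, _addr = sock.accept()
                except BlockingIOError:
                    continue    # another worker won the race
                conn.sendall(b"hello\n")
                conn.close()

while True:                    # parent just keeps the workers alive
    os.wait()
```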
On the hardware side, I'm actually not sure where this is going. What we see right now is that switchdev and OVS might become friends, and we might finally see the hardware offload more and more of what user space currently configures into the kernel. We will see whether that gets realized; I don't know yet.

eBPF and perf enhancements are probably happening this year. Facebook is currently putting a lot of money into developing introspection and observability features, especially for TCP right now, and the first tools are now available upstream in recent Linux kernels. You can now attach eBPF code to specific probe points and build aggregations inside the kernel. Before, the only option was to export data to user space on every probe hit and do the aggregation there, which was difficult because you could easily export gigabytes of performance data; now you can aggregate in the kernel, and the kernel just hands back an integer.

Then there are discussions about protocol-generic offloading, so every one of you can design your own tunneling protocols, which is pretty cool.

A big point this year will be the amount of complexity in the kernel nowadays and how to configure it correctly. Fedora, for example, already provides tuned, a user-space solution that helps people tune the kernel correctly, and we need to push more and more options into tuned so that, after installation, the kernel actually behaves in a performant way for specific workloads. That's especially important for layered products like OpenStack and OpenShift, where the default configurations are often used, and those don't give the best performance. Hopefully things will work better out of the box in the future and won't need so much hand configuration.

On the specification side, and I don't think this will be ready this year, there will be much more flexibility in the data path: how to describe flows and which actions to take. We may see programming languages like P4, where you specify how to match packets, integrated into OVS at some point with the help of eBPF; that could happen. And there will be more and more possibilities in the actions. One example: in service function chaining you are now able to put metadata on packets, which another OVS instance can read and act upon. So switches can communicate with each other through metadata on packets, and people can build things out of that as building blocks.

Something else that will definitely come up this year is the Internet of Things. Linux currently supports mostly two protocols. One is 6LoWPAN, an IPv6-based protocol built for networks that transmit at very low bandwidth; it has specific features, for example it doesn't provide the usual kind of neighbor discovery but a heavily reduced one.
It has IPv6 address compression and things like that, and Linux already supports it over the IEEE 802.15.4 standard, which is also what ZigBee uses. On the other side there is Bluetooth Low Energy, which may also be used quite a lot in the Internet of Things world. We have basic kernel support already, but lots of profiles will need to be added for specific applications; you'll find profiles for health devices, the kind that monitor your heart rate, and things like that.

Another thing we decided to look into again is Wi-Fi performance. With the arrival of 802.11ac we now have gigabit speeds on Wi-Fi, and they keep going up, so we need to make sure the Linux kernel can handle those speeds in the future. Some of the wireless drivers are already very, very complex, but they don't implement all the features that the high-end data-center NICs have, so we might look at some of the Wi-Fi drivers and check whether we can now use some of those features for Wi-Fi as well. I think that's enough for this year, hopefully.

You're on time. So if you have any questions for Rashid or myself, I'd be happy to answer them. I have one slide left. That's okay, you have the mic.

So please join us. If you are in this room, you have at least made the first step of trying to figure out what the heck networking is and what we are doing. I guarantee you it's the fastest path to becoming a celebrity rock star, upstream and in RHEL; Tim Burke had a keynote this morning about being a rock star. Absolutely. This is a place where not many souls have ventured, and if packet movement is something that excites you, join us. We would love to have your help; we have openings on our team. It's a shameless plug, yes. And I guarantee you one thing: when I do an interview or a phone screen and someone asks, "what will I be working on?", I say: I can tell you what you'll be working on for the next three to six months, but beyond that, if I told you anything, I would be lying. Because honestly, every three to six months it changes tremendously. Even in the last week, after we came back from the holidays, my world shifted 90 degrees and we all had to focus on different things. Things come out of the woodwork; we have to adjust. It's really interesting stuff that we are working on. So please join us.

There are other networking talks on Friday, Saturday, and Sunday; please listen to those and give us feedback, and give feedback for the other talks as well.

One more thing I want to mention: Florian does amazing work in the nftables world, replacing the old iptables stuff. I don't know if people remember the diploma thesis on nf-HiPAC, where they tried to make iptables algorithmically really, really fast; that is kind of happening now with nftables. So if people want to reduce something like 16,000 iptables rules down to 24, they should definitely look at nftables. And Florian, please make that happen. Absolutely.

Questions, comments? There's a nice scarf here for you, for questions. A bribe, no? It cannot be that I explained everything so well that you don't have questions; that means you were checking your email. Which is fine. Okay, so please feel free to ask us questions on the side, et cetera.
There are other networking talks, as I said. And thank you very much for coming.