The Super Bowl, as it's called. It's next Sunday — not tomorrow, the one after. They usually make it to the whole thing, all the other teams are jealous, and then come all the conspiracy theories, blah, blah, blah. But the thing I really like about this team is that they really function as a team. There are superstars on this team — to take a soccer reference, or the real football reference, it's like having Messi and Ronaldo on the same team. The quarterback is married to a supermodel; really fantastic players, very celebrity kind of people. But the great thing about them is that when they enter the stadium, they're announced as the New England Patriots. They enter as a team, whereas the other teams do "here's Ronaldo, yay, here's Messi, yay." The Patriots all enter the stadium as a team. They work as a team; they're solid.

Why am I telling you all of this? Because this slide deck doesn't have much to do with me. It's a collaborative effort from everybody — this whole slide deck is really a team effort. So if there's something wrong in it, I get to blame my team. OK, team effort, lots of contributions from the team.

So what to expect in this presentation? I'm going to talk about what 99% of the universe needs. By the way, the title is not clickbait — I feel strongly about it, and I'm going to talk about that too. What 90% of the universe needs, what 75% of the universe needs — but doesn't care how it's done. And what we — all of us, Linux, networking, whatever — have to do to make that possible: we don't want to know how it's done, but we still want it. OK, fine. Then we're going to do the boring stuff, the real technical stuff, a little bit, because there are so many developers here and I don't want to make this just a fluff presentation, as they say.
We're going to talk about the differences between some packages — actual technical information. Then I'll let you be the judge: did we really need this new package or new protocol or new something, or is it more of the same? I'm just going to present what the differences are. And then lastly, wrapping up, I'll give you some things to think about. I know some of you will roll your eyes — oh my god, management giving us some of their crap, et cetera. Please feel free. I've seen my teenage children's rolling eyes, so I'm used to it. I won't be offended.

OK, so what does 90% of the universe think it needs? I just want an external address to talk to the rest of the world, the World Wide Web, and an internal address to talk to containers or VMs or other things on my laptop. Easy peasy, just make that happen. All these blue waves in between, I don't care about. I want to talk to the rest of the world, and I want to talk within my computer. That's it. Easy, fine.

What they really mean — 75% of the people — is: I want security. I don't want to be hacked. I want privacy. Again, I don't want to be hacked, et cetera. Multitenancy. Before I go on to these: one of our board members, her name is Charlene Bagley — I'm sorry if I'm butchering it — said something very interesting in a meeting we had with her. I'm paraphrasing, this is not a direct quote. She said that IT — networking, network security, all of that — is now discussed at the board-of-directors level at the big giant companies. It used to be pretty much: does my printer work? Is there a paper jam? Can I connect to Wi-Fi? Is email working? Fine. But now it is discussed at the board-of-directors level.
She's also on the board of directors of NASDAQ, I think, and some other companies. So it is discussed at that level, because it is such a big deal. Nobody wants to be hacked; nobody wants these problems. As you've seen in the headlines, apparently the US elections were influenced by hacking scandals, et cetera. Nobody wants their company data taken. Red Hat is good in that everything is open, but we still have financial data, internal communications, et cetera, that we don't want somebody to take.

So: security, privacy, multi-tenancy, fast — it has to be at least the speed of light. Faster than that is OK, but at least the speed of light. Switching, routing, load balancing, fairness. If I move, things should stay up — I take my laptop somewhere else, no problems should happen. If the environment moves, things should stay up — my IT guy changed the Wi-Fi from g to 802.11n or whatever, I don't care, it should not get in my way at all. It should have zero downtime, of course. Zero packet loss, of course. Zero latency, which means speed of light, as I said. And it should be able to walk on water. Easy peasy, right? So here's a picture of a packet walking on water at the speed of light, as you can see. But other than that, anything? No, that's OK. Let's start with that. That's what they need.

So what do we have to do to make all of this a reality? We have to make config easy — if config is hard, if you have to open a command line these days, it's a no-go. We have to create tunnels, bridges, bonds, namespaces — all the stuff that, again, you're now saying you don't care about. But seriously, we have to take care of all of this, all the time. OK. So before I change topics: that's the perspective on what people perceive they need, what they really need, and what we have to do about it.
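To make that last point concrete — here is a rough sketch of the kind of plumbing hiding behind "I just want an internal address," done by hand with iproute2. All the names and addresses here are made up for illustration, and you would need root to run it:

```shell
# Create a network namespace to stand in for a container
ip netns add demo

# A veth pair: one end stays on the host, the other goes into the namespace
ip link add veth-host type veth peer name veth-demo
ip link set veth-demo netns demo

# A bridge on the host side to tie everything together
ip link add br0 type bridge
ip link set veth-host master br0
ip link set br0 up
ip link set veth-host up

# Give the "container" an internal address and bring it up
ip netns exec demo ip addr add 10.0.0.2/24 dev veth-demo
ip netns exec demo ip link set veth-demo up
ip netns exec demo ip link set lo up
```

That is the sort of setup the tools below either automate for you or make you script yourself.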
And now I'm going to switch gears a little and talk about the different packages, protocols, and setups that we have available, and why they are available. Coming back to the theme: I'm going to go through a set of slides showing different packages and what the differences are — NetworkManager, et cetera. As you're viewing this, a couple of things to note before I forget. At the bottom of each slide there's a reference — what's on the slide is just a snippet of the differences; there is tons more information behind that link, and I'm going to make the slides available, of course. The other thing: as I go through this, I want you to think about whether these packages are mostly the same — another slant on the same thing — or really very different, so that yes, there was a need for them. I'm not going to be the judge of that; I'll let you be the judge. So let's go through them instead of just talking about it.

Configuration and management: we have three choices. One is NetworkManager, of course. It does everything and the kitchen sink, and it does it very well. For those who weren't here in the first two minutes: we have nice NetworkManager stickers, please feel free to take one. Of course, when there's a sticker, the project is real — so after 13 years, NetworkManager is real, because we have our sticker. It takes care of Wi-Fi, Bluetooth, everything. The environment changes, it rescans the Wi-Fi as you leave, et cetera. It's extensible; it has D-Bus interfaces. And the good thing — even I didn't realize this until recently — is how widely it's used: it's in Fedora, it's in RHEL of course, it's in Ubuntu, Debian, and even on small devices like the Raspberry Pi.
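As a taste of what driving NetworkManager from the command line looks like — a minimal sketch with nmcli. The device name, connection name, and addresses here are hypothetical:

```shell
# List the devices NetworkManager is managing
nmcli device status

# Create a static Ethernet profile (names and addresses are made up)
nmcli connection add type ethernet ifname eth0 con-name office \
    ipv4.method manual ipv4.addresses 192.0.2.10/24 \
    ipv4.gateway 192.0.2.1 ipv4.dns 192.0.2.53

# Activate it; unlike raw iproute2, NetworkManager
# remembers the profile across reboots
nmcli connection up office
```

The point is the last comment: NetworkManager keeps state for you, which is exactly what the manual tools below do not.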
So it's used very extensively. Sorry, wrong direction. Then, by comparison, we have systemd-networkd. Again, very important: it's minimal, small footprint, does a lot of good things. It comes with D-Bus API control, it's tiny, as I said, lightning fast, and used a lot in container land. It doesn't do all the Bluetooth and Wi-Fi, but for very good reasons. So, NetworkManager versus networkd — as I said, I'll let you decide. Oh, stickers — here's the sticker again.

And then iproute2, the third config manager. This one is more suited for kernel hackers, et cetera — there's a lot of work you have to do. It's fully manual configuration, and it does not remember state, so you have to take care of all of that yourself. Some drawbacks, some good stuff, but a lot of people love it, especially for kernel hacking and for when you know what you're doing: you just want to set things up, and you have a script you want to run on top of it. OK, that was configuration management. Oh, there's an example for iproute2 — again, as I said, there's more information; I have links on all my slides.

So let's compare Libreswan with OpenVPN. First of all, for anybody who's a Red Hat employee: your VPN will continue to work on April 1st. If you don't know what I'm talking about, that's a good thing — they're changing the corporate VPN, et cetera. I talked to the NetworkManager guys; don't worry about it, everything should work. No worries. Famous last words — team members, please don't make me eat them on April 1st when the switch happens. But OpenVPN versus Libreswan: many differences. I'm not going to go over the whole alphabet soup. Some similarities, some very important differences — you can get more information from the link I've placed at the bottom.

iptables: firewalls are super, super important.
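The iproute2 example mentioned a moment ago might look something like this — a sketch with made-up interface names and addresses. Note that none of it survives a reboot, which is exactly the "does not remember state" point:

```shell
# Bring an interface up and give it an address (root required)
ip link set dev eth0 up
ip addr add 192.0.2.10/24 dev eth0

# Add a default route through a local gateway
ip route add default via 192.0.2.1 dev eth0

# Inspect the result
ip addr show dev eth0
ip route show
```

If you want this back after a reboot, you script it yourself — which is why the kernel-hacking crowd likes it and everybody else uses NetworkManager or networkd.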
We've been familiar with iptables for a very, very long time; it's used extensively. But there were some drawbacks in iptables that got filled by nftables. For example, with iptables there had to be separate config and setup for IPv4 and IPv6, whereas in nftables it's all combined, one and the same thing: the combined inet family can be used for both IPv4 and IPv6, which is great, and good for container space. There isn't a painless migration you can do from iptables to nftables. We're still measuring the performance differences — it seems nftables might be faster. It's about 90% feature-complete compared to iptables, and work on the remaining 10% is in progress.

OK, moving along. Security: MACsec versus IPsec. IPsec is well-known, well-established, been there for a long, long time. The choice really is whether you want L3 security or L2 security — and for L2 we have MACsec. Before I forget: Sabrina had a talk on it last year, and she has another one this year at 1 o'clock, very soon, in this same room. If you're interested in learning more about MACsec, please stay or come back at 1. But IPsec and MACsec are two different approaches. They both provide packet encryption, back and forth — and I'm already boring you with the details. For some use cases in the cloud, layer 2 might make sense; for other use cases, layer 3 might make sense, especially for the control plane. It depends on how you set it up. These are very important things to consider for cloud deployments, and people are asking us for both.

Bonding versus teaming. Jiri Pirko is here — Jiri, I left the slide in there for you. Again, many differences and similarities. The biggest difference between bonding and teaming is that with bonding, everything is in the kernel space.
With the team driver, the control plane — the setup and management — is in user space. Performance is one and the same, it's pretty easy to set up, et cetera, and overall it's good. I'm doing OK on time, which is good.

Time sync: NTP versus PTP. NTP has been there forever; everybody uses it — when you turn your computer or laptop on, it gets the time through NTP. But that wasn't enough for many use cases, especially the high-frequency trading guys — to time-sync, I don't know, the Moscow stock exchange versus the New York exchange versus the Shanghai exchange, you have to make sub-nanosecond decisions. We've all seen the glitches where $2 billion was lost because somebody fat-fingered something. There's a lot of money at stake, and time, of course, is of the essence. So PTP provides much more granular time syncing: it comes with GPS sync, with a much more precise nature. For things like the Large Hadron Collider, military uses, high-frequency trading as I mentioned — PTP is your friend. It needs some special hardware; it doesn't come for free. But it's essential for some use cases. Milliseconds versus definitely sub-nanosecond — it can be as precise as a picosecond.

OK, tunnels. Why do we need tunnels? This might be like, "what, Rashid?" We need tunnels for host-to-host communication. Imagine you have a host with a whole bunch of virtual machines and a whole bunch of containers. That's no longer enough, right? You need to talk to other hosts with their own bunches of virtual machines and containers. All well and good, but many choices, as usual. There's VXLAN, well-established at this point — a lot of silicon vendors have VXLAN offload cards. So you might be wondering, why was there a problem? We've had VLANs forever. So here's a question for anybody.
Why are VLANs not enough? Why did we ever need VXLAN? Yes — somebody said it: the 4K limit. VLANs come with a 4K limit, and it's quite conceivable that in a cluster, 4K is not enough at all — 4K virtual machines, 4K containers, that's chump change, right? I have more virtual machines in my basement than 4K. So you needed more, and VXLAN was pretty much driven by that. It's pretty good, it does the job. It's part of our solutions for OpenShift and OpenStack, used extensively, and as I said, there are hardware offload cards.

Then there's the new protocol that's coming up, used especially with OVN: Geneve. It was developed by some of the original guys from Nicira. It's pretty flexible and more future-looking: it has an extensible header, and it comes with support for future stuff like NSH and service function chaining. The drawback is that there aren't many hardware offload cards, there's customer entrenchment, et cetera. More information at the bottom.

We're also working on QinQ, which solves the VLAN problem in a much more lightweight way than VXLAN — you stack two tags, so instead of 2^12 you get 2^12 times 2^12, and the 4096 problem is gone, or can be. Eric Garver has a talk later on — not on this subject particularly; on CPU utilization, I think, something to that effect. But if you're interested in QinQ and what's been done, please feel free to talk to him; he's going to be around this afternoon.

What else? Oh yeah, NVGRE. NVGRE is also used a lot in data centers, particularly Microsoft-centric data centers. We do support it. Again, many drawbacks and good stuff in there.

OK, now we're getting to the interesting parts, or at least the parts more interesting to me. We've had Open vSwitch for the longest time as part of our products, and it is being used. So what is Open vSwitch? It's switching between hosts.
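To put numbers on the limits just mentioned — the 12-bit VLAN ID, two stacked QinQ tags, and VXLAN's 24-bit VNI — here's a quick back-of-the-envelope you can run in any shell:

```shell
# 802.1Q VLAN ID is a 12-bit field: 4096 values
# (IDs 0 and 4095 are reserved, leaving 4094 usable)
echo $(( 1 << 12 ))

# QinQ (802.1ad) stacks two tags: 4094 * 4094 usable combinations
echo $(( 4094 * 4094 ))

# VXLAN's VNI is 24 bits: roughly 16 million segments
echo $(( 1 << 24 ))
```

So VXLAN buys you about 16 million network segments against VLAN's ~4K, and QinQ gets to a similar order of magnitude just by stacking tags.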
It's like this: if you have many virtual machines, you need to switch between them. A packet goes in, a packet goes out, it learns, and it repeats — learning, plus flow-based SDN connections all the way up to OpenDaylight. All thumbs up, very nicely done. We're working on OVS hardware offload now — it's so mature that people are investing in and putting out silicon and cards that can do OVS offload. A huge success: part of OpenShift, our container solution; part of OpenStack, our VM solution; and also going to be part of RHV, another VM solution. Really nicely done. We have in-house expertise, plus very nice work and lots of collaboration with VMware, et cetera. Another good thing about Open vSwitch: it was originally designed for the kernel data path — we'll talk about what the differences are, hold that thought — but it works very nicely with the DPDK data path too.

And now the new thing that's coming up is VPP. VPP is developed by Cisco. Very different design: all user space, dependent only on DPDK. It sort of can work with the kernel data path, but it's not designed for that; it's designed to work with DPDK. All the packets go to user space and are switched from there. It's gaining some traction; we're evaluating it, we're looking at it. There's competing information — people are doing comparisons. Some of our partners, huge partners, say OVS plus DPDK is enough: we've measured it, it's faster than VPP. And some of our other huge partners say no, no, no — for some applications you really, really need VPP, not even want it. The jury is still out. As is usual with Red Hat, we want to remain partner-neutral and vendor-neutral, so we're evaluating it, thinking about it, investing in it. The thing is, we have to invest a lot in these packages.
It's not enough to just download a package from upstream to make it part of RHEL, and especially of our cloud offerings. We have to do a lot of work to harden it and test it, to get it to the point where, as Denise said in the earlier session, we can take the call at 3 AM. So we're looking at VPP and trying to see how much investment we need to make there. These are two competing switching approaches at this point, and the jury is still out on which one is better — please ask me again in a year, or sooner.

How many people attended my session last year? I could have gotten away with it — but since some people did, I have to fess up: these next slides are from last year's talk. As you can see, the background is different; I pretty much copy-pasted them. This is a good illustration of the differences between the kernel data path and the DPDK data path. There was a talk yesterday that touched a little on this too, but this gives a visual representation of the same thing.

Packets have traditionally gone through the kernel data path forever: from the outside world into the NIC driver, into the kernel networking stack, then across the boundary into user space, to Open vSwitch, through vhost-net and QEMU, and out to the different virtual machines. Easy peasy, no problem. But then it was decided that this isn't fast enough, and that it's difficult to work with the kernel guys — we need something faster, we need more control, et cetera, et cetera, for very good reasons too; I can make arguments both ways. So in the middle diagram, packets go directly to the PMD driver into DPDK and then onwards. That's pretty cool too, no problems with that — it gives you all the control in user space. No issues. But the problem might be that you then need everything in user space. What does that mean? Hey, what about TCP?
Yeah, now you need TCP in user space. What about VXLAN? Now you need VXLAN in user space. It doesn't make sense to get the packet into user space and then shove it back down and back up again, so a stack in user space is being created. Some people might call it redundant; some might not. For NFV kinds of applications, DPDK makes a lot of sense — and DPDK also makes sense inside the virtual machine. But for many use cases, for traditional customers like banks, et cetera, it might just make sense to stay on the kernel data path.

The third, rightmost column is device assignment, also known as SR-IOV. This is direct connect — packet memory sharing, et cetera, directly into the virtual machines: no kernel, no DPDK. Most of the people who need real-time, low-latency applications — say, telcos — are still on SR-IOV and device assignment. What they're asking for is an easy migration path from device assignment into OVS land, into DPDK land. As you can see in the rightmost diagram, there is no switching; it's direct connect into the VMs. That can mean that to go from VM to VM, you have to go out of the box to the top-of-rack switch and come back, so there's latency involved in VM-to-VM traffic. More information about all three diagrams and some of the stuff I blabbered about here, pros and cons, is in the links.

Some items were work in progress last year; I'm happy to tell you they're all done or mostly done — 99% done. When I say done: I don't have any PM requirements open for the major items. They're requesting some stuff, but there were showstoppers, must-haves, when I presented this last year, and most of those have been done; now it's just improve it, make it better, make it more stable, which we're working on. XDP — I requested a bulletproof jacket, but the budget wasn't there.
So I'm going to skip that for now; let's talk more about XDP next year, or in six months. The jury's still out, it's a new concept — I just wanted to plant the seed that yes, there is another alternative being worked on. Please look it up online; we'll talk more in about six months.

OK, I think I'm doing OK on time. That's good. I had to balance this talk, as you can see: some fluff in the beginning, some fluff at the end, and in the middle enough technical detail — and I rushed through it, because I didn't want to bore you with, like, NVGRE and what the third bit of the third byte is, et cetera. But for the technical folks who are actually interested in the differences and in making the choice — should I use GRE, should I use Geneve, should I use VXLAN? — I've provided summary information in my slide deck and a link at the bottom where, if you click it, there's tons and tons more information. I apologize in advance that I could not make them fully public yet, but I promise I will make all those links public for the non-Red-Hat people in the room, and for people who will see this later. Right now they're open to all Red Hat employees, but as soon as I go back, I will blog — some of the write-ups are so nice that we can blog them as-is — and I'll replace the links in my slide deck or send an update. If I can't blog them, I'll try to make a Google Doc or something visible upstream. Point is: for Red Hat employees, all the links should work; for non-Red-Hat employees, I'll make them public within a week or so, hopefully.

So now I'm going to switch gears back. This next part is something to think about, something to evaluate — I just want to throw it out there. I think it's important.
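For reference, here's roughly what standing up the tunnel types compared earlier looks like with iproute2 — a sketch with made-up addresses and IDs, not a recommendation of one over another, and root is required:

```shell
# VXLAN: 24-bit VNI, UDP encapsulation on the IANA port 4789
ip link add vxlan0 type vxlan id 100 dstport 4789 \
    local 192.0.2.1 remote 192.0.2.2 dev eth0

# Geneve: similar idea, but with an extensible option header
ip link add gnv0 type geneve id 100 remote 192.0.2.2

# GRE: gretap carries L2 frames, like the other two
ip link add gretap1 type gretap local 192.0.2.1 remote 192.0.2.2
```

From a configuration point of view they look almost interchangeable; the differences that matter — header extensibility, hardware offload, entrenchment — are the ones discussed above.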
This is the part of the presentation where you can get ready to roll your eyes. OK. Now the bullets are coming from the top — did you notice that? Switching gears, get it? They came from the bottom before, now the top. No? Nobody noticed; that's OK.

So what's happening is that others are now trying to disrupt us. I was talking to Werner Knoblich — he's one of the very high-up sales guys; I think he's in charge of all sales outside North America. Very smart guy. I talked to him on Monday, and he said something very interesting that I had never thought about. He said: Rashid, the open-source game and the Red Hat game — been there, done that, established, et cetera. We open-source guys and we Red Hat guys were trying to disrupt the industry, trying to disrupt the status quo. We have done that, we succeeded — and we did such a nice job that now other companies, which you know better than me, are trying to disrupt us, to disrupt the open-source philosophy, the Red Hat philosophy.

Sorry about that — I don't want to blind somebody with my laser pointer. So the point is that we can't slack off. Now that we're established — and earlier KB was saying this — people are relying on us, depending on us. It's no longer "a new version came out, the drivers broke, sucks to be you, please put up a patch to fix your drivers." No, no, no — that's old news. Now we really need to CI/CD this stuff. We have to make it work. It has to be stable, it has to be guaranteed — really guaranteed to work. Imagine some relative's blood-pressure machine — I'm just taking a blood-pressure example, but it can be more complicated than that. With IoT devices, if the healthcare industry starts using Linux, you don't want your BP machine — or something even more critical-path than that — to stop working because there was an update somewhere, right?
So for IoT devices, for your self-driving cars and other stuff, things are getting very, very critical-path. We have to be careful about this.

The other thing — my request to all the developers and kernel hackers and everybody — is that there is life outside the mailing list. Yes, there is. Absolutely; I've been there on both sides. Getting something new invented and your patches accepted upstream is now just the beginning. Think about that for a second, seriously. What it used to be: I have patches, I'm going to work very hard, talk to people, solicit their opinions, send an RFC, send patches, get people's feedback, work through all that feedback — and yay, my patches got accepted, applied, David Miller, thanks. All that email, awesome. But that is now just the beginning. What do I mean by that? There's an ecosystem we have to worry about. If your new patches, or the new thing you invented, got introduced, then we have to think about how it fits into the giant ecosystem. And I'm not talking about the business side of things — this is not a business conference, not a marketing conference, so please don't get me wrong; I'm really talking about the technical aspect of this.

I don't want to give an example that offends someone, but I might anyway. Let's say you invented an eraser. I have a prop — let's imagine this is a pencil. 99% of the world's pencils look like this. Let's say you invent an eraser which is square. Fine — you needed a square eraser, awesome; you have very good reasons to invent a square eraser. What you have to think about is: how will it fit on the 99% of the world's pencils? So there's an ecosystem we have to worry about. If you invent something new, that's fine.
If 99% of our customers and consumers are using the round pencil, great — your square eraser is awesome, but how does it fit on a round pencil? So we need to think about the ecosystem: how does it get packaged, how does it get released, all of that. And if your invention and your patches are awesome, great — I'm not saying we should stop innovating — but after the patches are accepted, we have to do some advertising. Again, outside the mailing list. There are seven billion people in the world, and most of them are not on mailing lists. That might be a surprise to some people — what? No, really, most of them are not on mailing lists. So it's not enough to say "just look at the patches and how they work." Your patches are awesome, no doubt about it, but we have to advertise them to the people who are going to use them — consumers, and again, not just customers.

OK, let's talk about this a little more — I think another bullet will come. Another reason people have, in the past, put out something brand new instead of enhancing what exists is that there's a glory aspect associated with being a maintainer. Yes, there is, no doubt about it: I invent a square eraser, I'm the maintainer of the square eraser, and from there on there's a glory aspect to it. But in my opinion, there is just as much glory in being a contributor as in being a maintainer. If you take the round pencil and contribute towards making that round pencil better, I think there's just as much glory. By the way, maintainer might look like a very awesome job, but once you are a maintainer, you know how much work it is too — and then there are people like me nagging you on top of that. So you have to think about that as well. Bottom line of this slide: getting patches accepted upstream for something brand new you might have invented is just the beginning.
There's a lot more: we have to think about the ecosystem, about the advertisement, about the integration into the upper layers. That is super important as well. So, something to ponder for future development. And believe me, I'm not preaching just to my team — team members, please don't think I made these last two slides just for you. This is really for the Linux community at large, everybody.

Some things to ponder for future development. Enhance versus redo versus reinvent — I gave the example: should I enhance the round pencil, invent a new pencil, or invent a new eraser? Something to think about. In some cases it makes sense: for some of the packages I showed, it absolutely made sense to invent something new — the old one was not enough. But in some cases, did it really make sense to have something new? Maybe.

We have to think about our customers and consumers. I have 15 minutes left, which is good; I'm almost done. Again, please don't get me wrong — it's not "oh, Rashid, you're telling us to worry about the paying customers only. What about the free guy? What about the chemistry lab in Kathmandu?" Yes, absolutely — I have to worry about the developer, the poor master's-thesis guy at Kathmandu University, as well. He's your consumer. For me, honestly, the big giant IBMs of the world are just as much my customer as the one master's-thesis guy at Kathmandu University or whichever university. Consumers and customers both matter.

Existing entrenchment matters — coming back to the pencil, we have to think about what's already happening. And please participate in RFCs. If you see an RFC around your area, somebody saying "hey, I'm going to invent something new" — awesome, please participate in that.
Please give feedback and ask, "have you thought about this?" If there are corner conditions, or the guy is painting himself into a corner and not staying flexible for the long term, point it out: "this header, you've only put 32 bits — what about 64 bits? What about future considerations?" It shouldn't be "oh, I don't care, it's none of my business, 32 is fine, it works, that's it." No — because later someone will have to invent something new that takes the 64-bit header, and maybe we could have avoided that. Suggest flexibility for the future; I already talked about that.

Be a rock star — credit to Tim. How many people attended Tim's keynote last year? Cool. Tim is sitting right here; I didn't know he was going to be. To give him credit, he had a good presentation last year — you can look it up. "Be a rock star": trademark Tim Burke, from last year's conference. Be a contributor. Be an enforcer, a positive influence. Don't be a troll — don't go "my package: I'm going to invent something new anyway." Please look at his presentation; it gave a lot of good things to think about.

Get feedback before doing patches. All of us — Red Hat, non-Red Hat, doesn't matter — have people who take our code and make it useful: TAMs (technical account managers), solution architects, GSS, all of these people. They take our code and patches and make them useful. They're in the trenches every day; they're the people who get yelled at. They're the ones who are told "you have to encrypt every bit," and then have to go and answer why — or what the drawback of encrypting every bit is. These are the people who make it real, who make our solutions acceptable.
So are we solving the problems that they are seeing, or are we solving the problems that we are seeing? Wouldn't it be great if we solved the problems that both of us are seeing, or at least solved the problems of our customers? So let's solicit information, ideas, and feedback from solution architects and GSS and TAMs. Again, a contributor can share as much glory as the maintainer. That's it.

So I did want to intentionally leave a lot of time for questions. I can go back to the technical slides if you want and answer your questions, or there are enough people who are much, much smarter than me in the room who can answer your questions in detail. But questions, comments, yelling, Advil, coffee, all welcome. Come on. Okay, for the first question there's an extra party ticket. It has to be a good question. Yes. Excellent question, excellent question, excellent question. Why is there an octopus? So the NetworkManager octopus, if you look closely, is taking care of a laptop, a server, a phone, an ethernet cable, and it's wearing a Bluetooth headset in its ear. So NetworkManager is managing, or taking care of, or controlling all these devices. We needed a graphical, cartoonish representation of that, and this is what came out of it. Your party ticket is with me; please feel free, come down.

Anything else? So yes, yes, please. Yes, so a very good question. NetworkManager guys who are sitting here, please correct me if I say something incorrect. Repeat the question, yes, yes. The first question, in the back, was why the NetworkManager sticker has an octopus, and I think I answered that. The next question is: on small devices, are there contentions on resources, footprint, RAM, et cetera, et cetera. The very good thing about DevConf is that the hallway track, the tram interactions, and the party interactions are awesome. That's why I come here every year. The most important track is the hallway track, as they say.
Two years ago or three years ago, I was just in the tram with Matthew Miller. He just happened to be here; I didn't even know the guy. And he says, Rashid, everything is awesome with NetworkManager, but it just stays on forever. I want it to configure and shut down. There's a term for it: config and quit. Config and quit, and make it smaller. So what we did was we worked on it, and we made it modular and a la carte. So if your server, for example, doesn't have Bluetooth, then it won't fire up the Bluetooth part. Earlier on, it was bringing up the Bluetooth and everything all in one. Servers don't have Bluetooth, so you can shut off the Bluetooth. Summarizing the answer: it is much more modular now. It really can be a tiny footprint. You can turn off so many things that you just get the bare minimum that you need. And if you have more questions, there are NetworkManager guys here who will be more than welcome to answer them. But the short answer to the long question is: much more modular. We haven't seen any complaints recently about it being too fat, "I can't use it." As you see, Raspberry Pi is using it.

Is it going to be running in the initramfs? Okay, good point. I will consider that as a future enhancement. And I mean it, seriously, I mean it. This is the kind of stuff that I come here to get feedback on, on all the stuff that we work on, and we incorporate it. And this sticker is proof of that. As I said, this idea came at DevConf last year.

Anything else? Yes, sir. You asked a couple of things. So let me just first say: DPDK is supported by Red Hat, full-time. We have people dedicated to it. We have resources assigned to it. We are hardening it. We are contributing to it upstream. We are fully behind it. We are fully supporting DPDK, for Open vSwitch, for inside the VMs, and on the host. It's part of our solution, a supported solution. Then you asked, is it Linux or not Linux? Yes, it is.
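For reference, the config-and-quit behavior from the NetworkManager answer corresponds to a setting in NetworkManager.conf. A minimal sketch, with the caveat that the option's name and availability depend on your NetworkManager version, so treat this as illustrative rather than definitive:

```ini
# /etc/NetworkManager/NetworkManager.conf
[main]
# Bring up the configured connections once, then exit instead of
# staying resident -- useful on small or fixed-configuration systems.
configure-and-quit=true
```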
Some people, I can argue both ways, but I can argue easily that it's not Linux. But we do support it. It is accepted upstream. There is no fork of the code anymore. We collaborate with the Intels of the world and the 6WINDs of the world, and we supply patches. We provide feedback on patches. We are trying to enhance it, optimize it. The networking team is working on it. The virtualization team is working on it. There are people dedicated upstream to it. So yes.

Yes, sir, in the back, my partner in crime. The summary of the question is: are there plans for long-term support, LTS, excuse me, LTS releases of OVS? Yes, absolutely. Upstream, right from upstream, Nicira, VMware, there are now LTS releases. So we always were waiting for that. We helped them establish that. We helped with the testing. Now they have LTS releases of OVS. Great. We will support them. We will latch on to them. We will have to work with the PMs, et cetera, to make them part of our products, whichever. But the specific example that you're talking about, going from one version to another, it wasn't really about LTS. It was more about our understanding of how OVS starts up; it was a change in startup to fix a bug. We learned a lot from that experience, and we are going to make sure that it doesn't happen again. It was a good learning exercise. I know it caused some problems, but we learned a lot from it, and we'll make sure that it doesn't happen again, or try as much as we can. Hopefully it doesn't happen.

How are we doing on time? Okay, last question. I don't have any prizes to give, but seriously, please take a postcard. There are other sessions about networking almost back to back. Feel free to come back or stay. There's good power, there's good Wi-Fi in this room. There's coffee outside. Okay, thank you very much.