Welcome, everybody. We have about 41 people on the call, which is awesome. Thanks for joining; I hope we can live up to your expectations for this session. Instead of a big joint lecture where we just talk about the problems and what the solutions might be, we would like to hear more from you and share what we know in response to your questions. So the design for this session is: I will do a quick intro, and then the people whose names you see on the slide, who are subject matter experts in their areas, will each talk a little bit about their area, how to debug it, and what they see happening with telco and elsewhere. Then we will leave enough time at the end for questions and answers. That way we target the problems you actually see, and we go from there.

For introductions: my name is Rashid Khan. I help out with the networking team in RHEL and OpenShift and work closely with other groups such as the Neutron team. I have been here at Red Hat for about nine years, and recently I've been spending a lot of time with telco, working on one large telco deployment and making sure things are working out fine over there. And I completely understand and agree that debugging telco deployments is becoming more and more of a problem. There is a slide that Marco from the support team usually shares: we have created a system which is awesome, but it looks more and more like the cockpit of a space shuttle. He started off with the cockpit of a Boeing 747, but by now he has upgraded his slide to the cockpit of a space shuttle. Debugging this requires a lot of skills. We understand that, we appreciate it, and we are trying to solve it.

So what we want to do today is have a panel of experts go over their areas of expertise: TCP, Netfilter, OVS, DPDK, OVN, NetworkManager, NMState, and eBPF/XDP. We'll try to give you a preview of all of this. We also want to tell you that more help is on the way. We have been talking about debuggability and observability for a very, very long time, and I'm happy to report that there is now an open source network observability project, which runs top down or bottom up, depending on how you look at it, from the OpenShift side all the way down to RHEL. We have one team member fully dedicated to it and other people helping part time, and in addition there is a dedicated team of people and architects on the OpenShift side. I think everybody will benefit from that project, and we can tell you more about their plans if you are interested. More important: if you have a wish list, for example "I wish I could debug such and such this way" or "I had a problem and couldn't find a solution for such and such", please feel free to reach out to any one of us. We will put it on the wish list and work with you to try to solve the problem and make it part of this network observability project.

With that said, I will give the mic to Paolo, who can tell us more about TCP, UDP, and other things. Go ahead.
Hello. Let's have a look first at core network protocols. The good thing about UDP and especially TCP is that they are quite stable, and we don't usually see functional issues with them, but we still receive reports of quite a few performance-related problems. MPTCP is a quite different beast, because it's very, very new: it just landed in the new major release of RHEL, and even upstream it is quite young, so it's a completely different story. We had a specific, hands-on presentation about MPTCP, and I suggest all interested people have a look at it; it's linked in the slides.

Regarding TCP, and especially UDP and the UDP forwarding path, what gets reported more often than not are performance issues, performance not up to the user's expectations. It's not always simple to guess the possible root cause, especially when the network deployment is extremely complex, like a containerized one, and the workload running on top of it is unknown and huge. There is a great tool bundled alongside the kernel binaries, called perf, which can be used in a very handy way. It's very useful for detecting bottlenecks even in live systems: even on a system running critical tasks, the tool can be executed with very limited impact on the system itself and on the running workload, the collected data can be analyzed offline, and it can easily identify which part of the kernel is spending the most CPU cycles. In the case of the UDP forwarding path, the bottleneck is usually due to the lack of batching, that is, the lack of GRO/GSO for forwarded packets. That is a situation with no easy solution, no silver bullet, but things are improving in that area: a new set of GRO functionality for forwarded packets has recently been introduced in the upstream kernel and ported to RHEL, and we are on the road to supporting that even for UDP-encapsulated traffic on top of UDP tunnels. And that's it for UDP, TCP, and MPTCP. I will hand the microphone to the next presenter.

Thanks, Paolo. And for the audience: everybody will be available to answer your questions, so Paolo is not going away; he will be around at the end if you have specific questions in this area. Moving along to Netfilter: Eric.

Hello. I have a few things here on Netfilter. The first topic is nftables. The first bullet here is really just a primer about nftables in general; it's on the Red Hat blog. Notably, nftables is where we are spending most of our development effort now with regard to firewalling, and that's why it's highlighted here. Some of the major advantages, and the reasons for that, are that we can do combined IPv4 and IPv6 processing, which I think is one of the important points for telco, and that it has a pretty efficient set and map implementation. One of the more powerful things is the concept of verdict maps: basically a giant set where you can match a certain IP address or something, and then drop or reject based on that. It's a single rule in the rule set, but it's backed by a set that you can add to dynamically, and that set determines what actually happens to the traffic. For debugging, nftables has a pretty good tracing tool. How it works is that you add a rule to the rule set that sets "meta nftrace set 1". The cool thing about this is that I can qualify the rule with packet criteria, so I can match a certain IP address or a certain TCP port and enable tracing only for a subset of traffic.
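For reference, a minimal sketch of the rule Eric describes; the table, chain, and port here are made-up examples, and the table and chain are assumed to already exist:

```
# Enable nftables tracing only for TCP traffic to port 5060,
# by setting the nftrace meta flag in a matching rule.
nft add rule inet filter prerouting tcp dport 5060 meta nftrace set 1

# Then watch the trace events as matching packets traverse the rule set.
nft monitor trace
```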
Once I have that rule in place, I can just run nft monitor and see all the events: packet drops, jumps to different chains, or packets being accepted. So it's a pretty cool tool for debugging a rule set. The second item on the list is iptables, and the only bullet point I have there is basically: migrate to nftables if you can, when you can. The link here is a blog post, or I believe an access article, about how to go about doing that migration. The short summary is that there is a tool called iptables-translate: you feed in iptables rules and it tries to spit out an equivalent nftables rule. The last item is firewalld. The notable thing with regard to telco is that the concept of policy objects was recently implemented upstream; this basically adds full output and forward filtering to firewalld. For debugging firewalld, there is a global option called LogDenied that you can use to catch anywhere a packet gets dropped. You can narrow it down to just multicast or broadcast and so on, but in general you can log every denied packet, and that goes to the dmesg log. Alternatively, since firewalld is backed by nftables, you can use full nftables tracing if you really want to, and that's what we as developers often do. So yeah, that's all I have about Netfilter.

Thanks, Eric, appreciate it. You're up next.

Hello, I'm going to talk a little bit about OVS, and when I say OVS here I mean the DPDK-accelerated Open vSwitch, the data path that runs entirely in user space. Let's go to the first bullet under features. This is a nice one that improves performance, which is important for telco deployments. Basically, network cards today are capable of receiving tens or hundreds of millions of packets per second, and that is too much to process on a single CPU core. So the network card helps to scale by applying a hashing algorithm to each received packet and distributing the packets into queues, and then we can create user-space threads to process those queues. So we get a nice parallel distribution of the workload. However, depending on the receive traffic pattern, we might end up with one thread that is really busy while the others are not doing much. This first bullet is about enabling Open vSwitch to detect that unbalanced situation and redistribute the queues, or ports, to try to rebalance the load between the threads. This is important for two reasons: it avoids the throughput hit when the CPU becomes the bottleneck, and it avoids packet drops, because as a thread approaches its maximum processing capacity the chances of dropping packets increase.

Now for the second bullet, user-space TCP segmentation offload, also known as TSO. This is also about performance. The idea is that instead of the virtual machine sending 20, 30, or 40 TCP packets, which the data path then has to process one by one before finally forwarding them to the network card, we offload the segmentation to the final stage, the network card itself. The virtual machine can send one big packet, up to 64 kilobytes, and just that one packet goes through the data path. It's a lot cheaper to process one packet, and we leave it to the network card to split it into regular wire-size segments. That saves a lot of CPU cycles.
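As a rough sketch, both features Flavio mentions are toggled through Open vSwitch other_config keys; the exact keys and their availability depend on the OVS version in use:

```
# Let OVS rebalance rx queues across PMD threads when load is skewed.
ovs-vsctl set Open_vSwitch . other_config:pmd-auto-lb="true"

# Enable user-space TSO so a guest can hand OVS one large TCP packet.
ovs-vsctl set Open_vSwitch . other_config:userspace-tso-enable="true"
```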
We are talking about three times, five times, or, depending on how you connect things, up to eight times more performance.

Now, moving on to debugging tools, I selected three tools that I think are essential to know if you are working with OVS. The first one is tracing packets inside Open vSwitch. It's a programmable data path, and the flow tables are becoming bigger and more complex to follow. This tool allows you to provide a packet and then trace what would happen to it inside the data path, so it's really useful for understanding what's going on. The next tool is logging. OVS provides a logging facility which is really interesting because you can set different levels for different subsystems of OVS: if you are interested in a specific area you can enable debug logging for that area, while in other areas you can raise the log level to hide messages. Enabling too many messages has a performance impact, but this way you can still see the log messages you need without having to rebuild or change software on the production side. The last bullet is about showing coverage counters. These are basically event counters: OVS keeps a large set of them, and when you use that command you see the rate of change over the last seconds, the last minute, or even the last hour. It's a nice way to see what's going on inside OVS. I think I covered all the slides; that's what I had for Open vSwitch.

Hi everyone, I'm going to talk to you about OVN now. For OVN I'll lean more toward the debugging aids than toward the relevance to telco, because in general OVN is a method for describing logical networks, and the result of OVN's output is OVS. OVS then takes care of the data plane for whatever OVN programs, so everything you use OVS for, you can use in an OVN deployment as well; you just might have multiple servers, or multiple containers, running OVS as a result. What you end up with when you use OVN is that there may be questions linking what you see in OVS to what you originally programmed into OVN. So one of the things you might find yourself saying is: I can't ping between these two VMs, why is that going wrong? Well, we have a few tools that can help you debug connectivity. First things first, a well-documented OVN feature is listing the logical flows that OVN creates, but it can sometimes be hard to link those to the OpenFlow flows that OVS ends up creating. With the options given to the lflow-list command of ovn-sbctl, specifically --vflows and --ovs, it will actually show the resulting OVS OpenFlow flows that get created from OVN's logical flows. So that's a step one you can use. Another thing you can do is use something called ovn-trace. ovn-trace is a really cool program that allows you to describe a simulated packet and inject it into an OVN logical data path, and it will then show which logical flows get hit by that packet and where the packet ends up going in the logical network. So you can see, for instance, that you defined an ACL that drops a packet when you didn't mean to, or something like that.
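A minimal sketch of the kind of invocation described here; the logical switch name, port name, and addresses are invented for illustration:

```
# Trace a simulated IPv4 packet entering logical switch "ls1"
# on logical port "ls1-vm1", destined for another VM's addresses.
ovn-trace ls1 'inport == "ls1-vm1" &&
    eth.src == 00:00:00:00:00:01 && eth.dst == 00:00:00:00:00:02 &&
    ip4.src == 10.0.0.1 && ip4.dst == 10.0.0.2 && ip.ttl == 64'
```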
And then finally, you saw Flavio talk on the previous slide about ovs-appctl ofproto/trace. You can actually link that to a program in OVN called ovn-detrace. What that does is take the packet you simulated going through OVS and translate the trace back into the logical flows that OVN programmed, so you can see which logical switches or logical routers are being traversed by the packet.

In addition to connectivity commands, you may find that status commands are useful to have, because OVN can spread across multiple servers, and if you are using multiple databases connected with Raft, then knowing the current status of the system is pretty useful. So here are just a couple of commands you can use. The first one is for ovn-controller: all it does is tell you whether it's connected to the southbound database or not, but it can be a lifesaver if you're banging your head against the wall trying to figure out why something is going wrong. You run it and find out: oh, it's not connected, that's all it was. And similarly, for an ovsdb-server you can check the Raft cluster status to figure out who the current leader is, what the availability of the various servers is, et cetera. So hopefully that's helpful for you if you're looking to debug OVN. And now I'm going to turn it over to the next speaker.

Hello, I'm Thomas, and I'll talk briefly about NetworkManager. NetworkManager is the network configuration tool on RHEL and other distributions. What NetworkManager does is configure the network, but I think the most important job it has is to provide an API that allows other tools to do that. In that sense, it also allows those tools to integrate with each other, because they use the same underlying configuration primitives. NetworkManager's API is all about profiles. Profiles are a bunch of settings, a descriptive configuration of the network: you create those profiles and you activate them. Consequently, when you want to see what NetworkManager is doing, you look at which profiles you have and which are active. You can do that with nmcli, NetworkManager's command-line tool: "nmcli connection" lists all profiles, and "nmcli device" shows which are active. But in general, when debugging a network issue, I prefer to look at the lower level first, because NetworkManager is only the component that configures the system. If you cannot reach an IP address or you cannot resolve a name, it's not NetworkManager doing that; it's how your system is configured. So I would look at the IP addresses I have, at how name resolution is configured, or, since NetworkManager can also configure OVS, at how OVS is actually configured. And then I might see: oh, there is no IP address. Only as a second step would I ask what NetworkManager did, why it is not what I needed, or what I need to change. So I find it more helpful to first look at what the problem is. But for really debugging NetworkManager itself, you always need to look at the log file, and unfortunately the log by default is not very useful, because otherwise it would be much too verbose. So for debugging, you actually need to increase the logging verbosity, and you do that in NetworkManager's configuration file, and then look at the log.
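A quick sketch of raising NetworkManager's verbosity, either at runtime or persistently via a drop-in file; the drop-in file name is an arbitrary example:

```
# Raise the log verbosity at runtime.
nmcli general logging level TRACE domains ALL

# Or persistently, e.g. in /etc/NetworkManager/conf.d/95-debug.conf:
#   [logging]
#   level=TRACE
#   domains=ALL
```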
Yes, I think that's it. I hand over to the next speaker. Thank you.

Hi, this is Gris. Yeah, I'm talking about nmstate: how to use it and how to debug it. nmstate currently provides a declarative API, so you can describe your network in a YAML or JSON format, then apply that desired state to the host, and nmstate will create a checkpoint to make sure your desired state is actually applied to the kernel and user space. If not, we roll back. We also provide a context manager which automatically rolls back to the old state after a timeout; that means you can try any dangerous network setting without losing your network connection to the server. nmstate is used in many projects, like OpenShift, oVirt, and VDSM. It's here, it's there, and when you have issues across several of them, you will probably need to gather the logs of those projects. For debugging nmstate itself, the debug log is sufficient. In the Python API, you can just set the log level to logging.DEBUG, and that's it: you will get the log on standard error, or in whatever way you configured the logging facility. The command-line tool also prints debug output to standard error by default, so you can see the issues, and the varlink interface of nmstate includes the debug messages as well. nmstate uses NetworkManager as the default backend, so providing trace-level NetworkManager logs also helps to reproduce and debug issues. And nmstate supports plugins: that means you can use whatever backend you want, as long as you provide a plugin for it.
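As a small sketch of the declarative flow Gris describes; the interface name and address are made up, and the rollback flags are one way to exercise the checkpoint behavior:

```
# Declare the desired state in YAML.
cat > desired-state.yml <<'EOF'
interfaces:
- name: eth1
  type: ethernet
  state: up
  ipv4:
    enabled: true
    address:
    - ip: 192.0.2.10
      prefix-length: 24
EOF

# Apply it; nmstate checkpoints the change and rolls back on failure.
nmstatectl apply desired-state.yml

# Or keep the change only if explicitly confirmed within 60 seconds.
nmstatectl apply --no-commit --timeout 60 desired-state.yml
nmstatectl commit
```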
That's it. Thank you.

Yeah, eBPF and XDP. I'm not sure that everyone here is familiar with the eBPF concept, so let me spend a few words on what it is. eBPF is custom bytecode that you can upload to the kernel and attach to certain events in the kernel, and whenever those events happen, your program is executed. Of course, there are some safeguards: the program is verified for safety, so it has to be proven safe before it is accepted by the kernel. But in general, this is a quite powerful concept. One example of an event you can attach your program to is the reception of a networking packet. This is called XDP, and it allows manipulating the packet, passing the packet on, and so on, before even the kernel stack sees it. This obviously opens some nice opportunities for accelerating various common tasks. One typical example would be filtering, some kind of firewalling. This might be especially useful for distributed denial of service protection, where other software can analyze traffic, and when it finds certain patterns that should be blocked, it can configure a BPF program accordingly, which will drop those packets as soon as they are received from the hardware, basically eliminating much of the overhead of the kernel networking stack. Another use case might be load balancing: again, as soon as a packet is received, it can be inspected and redirected elsewhere. So those are some interesting concepts. In the future there are probably even more opportunities than this, but those need some work in the kernel first; we might think about offloading even parts of the packet switching, maybe offloading parts of smart NICs or SDN switches, to XDP.

This is a bit different from what everyone has been used to. Namely, as I said, with XDP the packets are received by the program and can be dropped before even the kernel networking stack sees them, which means the usual traditional tools like tcpdump and Wireshark just don't see the packets; or, if an XDP program modifies a packet, those tools see the packet only after the modifications, because they are part of the kernel networking stack. Luckily, our colleague Eelco developed a nice tool called xdpdump, which is able to observe the packets before XDP programs see them. That actually lets you see what's going on: whether the packets are dropped by the XDP program, modified by the XDP program, or whatever happens with them. This is probably one of the most important things I would highlight for debugging XDP, and a tool that prevents many surprises. Then there's bpftool, something that has been around for quite a while and keeps gaining more functions and features for networking. The most relevant part is probably "bpftool net", which will helpfully dump all BPF programs attached to the networking data path. It only lists the programs with their identifiers, but there are other commands in bpftool that can dump the whole program, so you can actually get the program and inspect it, and that means both the bytecode and the JITed machine code of the program, because BPF does not interpret the bytecode; it is JIT-compiled into native machine code. It's not that easy to read through assembly or bytecode, of course, so there's still some work ahead of us to improve this experience, but I think the basic tools to explore what's going on in XDP and BPF networking are in place already.
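A sketch of the inspection steps just described; the interface name and the program ID are placeholders:

```
# List BPF programs attached to the networking data path.
bpftool net

# Dump a specific program's translated bytecode and JITed machine code
# (42 stands in for an ID reported by the listing above).
bpftool prog dump xlated id 42
bpftool prog dump jited id 42

# Capture packets on eth0 before and after the XDP program runs.
xdpdump --rx-capture entry,exit -i eth0
```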
Awesome, thank you so much. I really have to admire the coordination and clockwork here, because we had planned for half an hour of presentations and the remaining time for questions and answers. So we will open up the forum for Q&A. People can ask questions through the Q&A tab of Hopin, or feel free to ask through the audio channel; I don't know whether the audio channel works for that, but feel free to use the chat or Q&A tabs of the Hopin session. We are all here to answer any questions you might have, or you can share with us the biggest debugging problem you face; maybe we have some clues for how to help you right now, and if not, we can put it on our to-do list and provide a solution in the near future. When I envisioned this session, it was more of a birds-of-a-feather kind of session, where people would come and we would help you out by having our experts available and answering your questions. So please don't be shy; all questions are good questions.

I see a question in the Q&A addressed to me: do you recommend SystemTap or eBPF-based tools? So, one thing I didn't mention is that eBPF is not only networking-related; it can also be used for other purposes, like tracing. Basically, you can attach BPF programs to tracepoints, or even to function entry or exit, and you can have multiple BPF programs attached to multiple points and coordinate the tracing between them, which allows you to do things like following a certain structure, for example, tracking it back through the various points in the kernel. I would definitely recommend BPF tracing because it's safer. SystemTap works in a way where you need to compile a kernel module and load it; there are no safeguards there, and you need the development environment for compiling those modules. This should be much easier with tools like bpftrace. On the other hand, I have to admit that those tools are still under development and you might encounter some bugs sometimes. So I would probably default to bpftrace and fall back to SystemTap if I run into issues.

The next question is a very broad one, which is a good question, Till, thanks: how do we debug, and what kinds of bugs do we find, in telco deployments? This is such a broad question that I would ask Dan Williams to answer it from the OpenShift containerized-platform perspective, and then I'll ask Flavio to answer it from the OpenStack-ish perspective. Dan, could you start, please?

Yes, can you hear me? Yeah. So, from a recent customer, I'll just go through two cases. One of them turned out to be out-of-date network interface card firmware that was dropping packets of a certain size, and we only figured that out by having a lot of Wireshark dumps of specific traffic through the cluster. All the things we've talked about are great and very necessary, but sometimes you also need to go back to basics and just look at packet captures. Those Wireshark dumps, from multiple interfaces, both physical and virtual ones on the machine itself, and also dumps from the top-of-rack switch the machines were connected to, allowed us to narrow the packet drops down to somewhere between the kernel driver and the card. Then, by using ethtool and getting some other info about the card firmware, we figured out that it was out of date; we upgraded the firmware and everything started working. Another situation we had was one where we saw some very odd TCP streams. Again, we used Wireshark to identify those, but then we used other tools like nstat and netstat, those kinds of core, much lower-level tools that inspect the kernel TCP and UDP stacks, to figure out that it was actually a problem in a partner or vendor program that was misusing some of the options on TCP sockets. So those are two examples of specific telco problems that we were able to successfully debug, but we used the normal tools a lot of people are probably familiar with: Wireshark and some of the lower-level things as well. And note that for these escalations we started at the higher levels, with things like OVS and OVN, but pretty quickly determined those were actually not the problem; it was these lower-level things, the NIC hardware and the vendor programs.
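As a sketch of the kind of low-level checks Dan mentions; the interface name is a placeholder:

```
# Check the NIC's driver and firmware version.
ethtool -i eth0

# Per-device/queue counters, useful for spotting drops near the driver.
ethtool -S eth0

# Kernel TCP/UDP stack counters (deltas since the last run).
nstat
```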
Flavio, would you like to tell us about your experience? Yeah. Well, Open vSwitch can be used with the kernel data path and also with the user-space data path, and some configurations that apply to the user-space data path don't apply to the kernel one, so it's a little different from what people are used to: you have to reserve CPUs and memory, dedicating them for that specific purpose. One of the things we found is that some tweaking here and there can sometimes cause problems, and the ovs-vswitchd logs help, as does the sos report: there is a tool which generates a report and captures a lot of data on the whole host, so you can review what's going on offline. That's very useful not only for OVS but for the whole system. The OVS logs are in there, there is a dump of the database in there, and we could check it and find configuration problems.

The other situation was more like learning on the fly. We experienced packet drops that were not predictable; it took us some time to troubleshoot, and we found that it was due to locking contention. We had to use instrumented code back then, but afterwards we were able to add an event counter for this particular situation, so if it happens again we can easily look at the statistics and figure out what the problem is. So we have a large set of tools, but sometimes we need to be a little bit creative to find the cause, and once we find it, we try to make it visible from the outside so we don't need to go through that again. And of course, it's a programmable data path, and it's becoming harder to follow what's going on, so ofproto/trace also helps us understand. At one deployment we couldn't see packets going to a port: we were expecting a packet there, but it wasn't there. If you start doing tcpdump, you have to troubleshoot many software devices, sometimes going inside network namespaces and whatnot, and it becomes a tedious process, especially on a remote deployment. Knowing the packet, we could use ofproto/trace and find out what was going on with it: it was matching on different fields than expected, and we were able to figure it out. It saves a lot of time, so I used the tracing tool instead of going down the usual path of attaching tcpdump to all the interesting interfaces.

Thanks, Flavio; thanks, Dan. One question that came in from the side was: do we have any tools that show a complete network diagram, or at least a static view of the network diagram? And yes, we have a tool for that purpose; Jiri built it out of necessity, and it has been pretty popular and pretty helpful. Would you like to tell us more about plotnetcfg, please?

So, it's a tool that is mostly targeted at developers and system integrators; not just kernel developers, but anyone deploying some complicated low-level networking setup. What it does is scan the machine and discover all the network namespaces and all the interfaces; it talks to OVS and finds out what interfaces OVS knows about; it has some DPDK support, and so on. So it basically examines the machine and draws a diagram, or plot, of all interfaces, namespaces, and their interconnections.

Awesome. Maybe we can put a link to plotnetcfg in the chat, Jiri.
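A sketch of typical usage; plotnetcfg emits a graph description that graphviz can render, and the output file name here is arbitrary:

```
# Scan namespaces and interfaces, then render the topology as a PDF.
plotnetcfg | dot -Tpdf > topology.pdf
```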
The next question is from Thomas Haller: is OVS common in telco, and more generally, what's special about networking in telco environments? Again a broad question; I'll attempt to answer it and other people can fill in. OVS is not only required for telco; it's a component of all of our layered products right now. OpenShift, which is the container platform, and OpenStack, which is the virtualization platform, both use OVS, and RHEL to a certain extent also uses OVS. OVS is the switching platform for within-the-server switching, and it ties into the rest of the cluster as well through OpenFlow, SDN, et cetera, so it leverages quite nicely, and it is our default solution for packet movement within the server and a little bit beyond. So OVS is used across our Red Hat products, and when one of the telcos, or one of the big banks, or any of our big customers wants our solution, especially our layered products, then by default they use OVS and OVN for packet movement. There are other things, like SR-IOV and direct connections, and a little bit of Linux bridge is still being used, but predominantly we see packets flowing through OVS.

The next part of the question is more general: what is special about networking in telco environments? Yeah, that's a good question. What's special is that most of the time they require low latency, and telco environments are also very varied. The requirements for the core are more about packet aggregation and packet transport between their large data centers, so they might be looking at 25 gig links, or bonded 25 gig links, or even 100 gig links, and they want fast packet throughput and low-latency movement. As you go toward the edge, which is maybe the base station of a large antenna, a cellular tower, something like that, the requirements change a little, because then they require things like Precision Time Protocol, PTP, ultra-low latency, maybe a real-time kernel to make things predictable. And then you go to the really far edge, and the requirements are definitely about real time, with precision timing and connections to GPS satellites for synchronization and time stamping of packets. So in a general sense, the requirements from telcos are a little varied: we see SR-IOV still being deployed, we see DPDK on the software path, and slowly we are getting more requirements for encryption. Believe it or not, a lot of packets are still flowing in the clear, and that is a security threat because of hackers and other things, so at least the control packets should be encrypted, hence IPsec and encryption; plus packet drops, packet forwarding, switching, all of these are required there.

Some good questions have come in, so let me try to address them. Till Maas is asking: will OVS bonding replace kernel bonding or teamd? Again, we don't see that much traction with the team driver. Some people are interested in it, and we are evaluating how many people still are, and whether we should deprecate it or at least put it in maintenance mode, because my usual philosophy is: when we have two solutions, we have no solution. I learned that from Bill Nottingham, who used to be part of the kernel team and is now part of the Ansible team; minimizing confusion and minimizing the number of solutions is one of the things we strive for. So the team driver has a question mark over its future, but customers definitely love kernel bonding: they know it, they love it, they know how it works. In some cases they are leaning toward OVS bonding as well. So my point of view, my prediction, is that we will continue to see kernel bonding and OVS bonding for the foreseeable future, but I don't know what the future of the team driver is going to be. One interesting thing to add here is that with the user-space data path there is only OVS bonding. Yes, good point about user space; we are adding that.
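For reference, a hedged sketch of what configuring an OVS bond looks like; the bridge, bond, and interface names are examples:

```
# Bond two physical ports into br0 with LACP and TCP-hash balancing.
ovs-vsctl add-bond br0 bond0 eth0 eth1 bond_mode=balance-tcp lacp=active
```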
The next question is from Heidi: "We have been supporting projects to improve eBPF to support better debugging and research; please contact me if you are interested." That's more of a statement. Yes, we are working on multiple fronts on eBPF and XDP, and especially on eBPF debugging, so feel free to contact us or Heidi. Work is happening in the research group, in the CTO's office, in the hardware enablement team, and in the networking team, and I hear even in IBM's CTO office and research team, so there is a lot of activity around eBPF, and particularly around debugging and tracing it. It is part of RHEL releases now; it is GA with RHEL, but there are some caveats: because it's so powerful, we did not want to ship it without root privileges, so you will have to have root privileges to be able to use some of this, which should be okay. Go ahead. It should be noted that we already have working relationships with universities in the eBPF and XDP area. Yes, Jesper Brouer and Toke Høiland-Jørgensen, both of whom are XDP developers, are working extensively with some universities, so this is already taking place, which is great, I think.

Okay, awesome. Wen Liang's question about command line versus GUI has been answered already, and I think the general answer is: it depends. Some of the tools, in fact most of them, are command-line based; some, like plotnetcfg, are visual aids. The tool being developed through OpenShift does have a user interface component, a GUI, and it's going to have things like nodes and network diagrams, double-click to find out more information, and color coding if a node is down; that's what I hear, I don't know the particulars. So there are different tools for different purposes, but predominantly, right now, as with everything else, they are mostly command-line based. The question was specifically about ovn-trace, it seems to me. Okay.

There are still 40-plus people on the call, and I'm sure you have other questions, so please don't be shy; we have 10 more minutes. We are trying to post the slides; they should be posted shortly, and there are links in there that you can use for further exploration. You have all of our names on the first slide, so please feel free to reach out directly or indirectly, also if you have questions afterwards, and we will try our best to help resolve them. Does anybody else have any questions? Is there any activity in the chat? I haven't been looking at it. I did not know I was echoing badly; sorry about that. Is there a question? I am hearing fine; sorry if I was causing the echo. Sure: Gris is requesting that we add a slide with the contact information. I'll do that; if the slides haven't been uploaded already, we should add that slide. Good idea, Gris. Some people are getting an echo and others are not; I don't have a double connection, I am not connected twice, so I'm not sure where the echo is coming from.

I see Jesper has joined as well. Jesper, would you like to say something about eBPF and XDP with regard to debugging? It's okay if not; I didn't mean to put you on the spot, but you work on eBPF and XDP a lot, and I just wanted to give you a chance. Yeah, can you hear me now? Yes, we can. Yeah, I just had some chats with people who are asking how they can tell which features they get: we ship a RHEL kernel, but we backport a lot of features, and people don't understand which BPF features are available in the different kernel releases. I can answer the question myself, because I've had this chat with people: bpftool actually has a subcommand called "feature" that you can use to dump the different features. But then people came back to me and said: but then I have to install RHEL first to dump the features; can't you provide these feature dumps per release? So I'm thinking maybe we should do that. Was this asked inside the event, or was it external? It was on some Slack channel, I think the Cilium Slack channel, but it wasn't related to that; I had two different conversations, one yesterday and one today, with people asking, and with misunderstandings like: oh, we cannot do this on RHEL. And yes, we can actually do it on RHEL.
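A sketch of the subcommand Jesper refers to:

```
# Probe the running kernel for supported eBPF program types,
# map types, and helper functions.
bpftool feature probe kernel
```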
We have a list of the types of programs that we support per release, but I guess they are asking for more, like which helpers are available, or even more general features, right? Yeah, and they wanted some web page they could access. Do we have that? I don't think so, and honestly, with the speed of development of BPF and the speed at which we backport the features, it would be very hard to maintain. Yeah, exactly, that was also my problem with it, so I asked them directly whether they would be fine if we just ran the kernel, dumped the features, and provided that as an output, and they were happy with that. So what we can probably provide is a mapping of RHEL releases to upstream kernel versions, saying something like: in the RHEL 8.3 kernel we support features from upstream up to 5.7, stuff like that. I think that might still be useful. Okay. I was thinking about, as I put in the chat just now, you can just run this command and you basically get all of what's supported; you do have to boot the kernel, but we can provide that output for download somewhere. That would be close to zero effort on our side, and people can check all the details they want. I'll draft up an email with this suggestion. Awesome.

So we have about four minutes left; if there are other questions or comments, we are still here. Just out of curiosity, how many people in this session were at DevConf in person last year? Many of us, at least the presenters, were there; I was there. I tried to create a poll, but I cannot create one. Last year I attended a session at DevConf which was a candy exchange: people from all over the world brought candy and exchanged it, and some of it was just amazing and wonderful. There's a whole culture, I found out; in Sweden, I think, they have these spicy candies and they love them, and some of them I couldn't even handle, at least I couldn't, though other people were enjoying them. My point is that DevConf is a lot of fun; there are these side tracks and side sessions, like the candy exchange, which are quite enjoyable. Hopefully we don't have COVID next year, we have an in-person DevConf, and I'm going to try my best to attend the side activities too. There are two polls now. Yes, thank you. Okay, two polls: I was there, yes. Okay, cool.

Okay, folks, thank you very much. We will upload the slides with an added slide with the contact information. Nikola and Jiri, thank you very much for being such nice hosts; the interface was awesome. I'm going to send more feedback to the organizers; it was very seamless and we loved it. Thank you very much.