All right, so we have Fernando with us, and I'm guessing we are going to go with the pre-recorded session here. Okay, perfect. I'm going to share that here in Hopin, and since Fernando is here with us, if anybody has any questions, please feel free to put them in the chat. I'll also post a link to the pre-recorded video on YouTube, so if you're facing any technical glitches viewing it here in Hopin, feel free to watch it directly on YouTube. But remember to keep your Hopin window open for the chat, and please post any questions.

Hello, my name is Fernando. I work as a software engineer at Red Hat on the networking services team, and my main work is on Nmstate. Today I'm going to talk to you about our journey gathering the runtime network configuration. These are the main steps of our journey, and hopefully the final stop will be NISPOR.

Okay, so Nmstate simplifies the network configuration of Linux hosts. Nmstate contains two components, libnmstate, a Python library, and nmstatectl, a command line tool. It provides a declarative API to manage network interfaces, routes, and DNS configuration. Nmstate has three main features. First, it reports the current state, so the configuration of one host can easily be transferred to another one. Second, it verifies the host configuration against the intended configuration when applying a desired state. And third, if something went wrong or the verification fails, Nmstate allows restoring the previous configuration.

Here we have two YAML states used by Nmstate. The first one defines an Ethernet interface with the state up and IPv4 and IPv6 addresses configured. The right one configures a VLAN interface with the state up, the base interface, and the VLAN ID. The main idea here is that the user does not need to know the host; the user only needs to know what they want.
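The two desired states described above could look roughly like this when expressed as Python dictionaries in the shape libnmstate accepts. The interface names, addresses, and VLAN ID here are made-up examples, not values from the talk:

```python
# Hypothetical desired states mirroring the two YAML examples from the talk.
# All names, addresses, and IDs are illustrative assumptions.

ethernet_state = {
    "interfaces": [
        {
            "name": "eth1",            # assumed interface name
            "type": "ethernet",
            "state": "up",
            "ipv4": {"enabled": True, "address": [
                {"ip": "192.0.2.10", "prefix-length": 24}]},
            "ipv6": {"enabled": True, "address": [
                {"ip": "2001:db8::10", "prefix-length": 64}]},
        }
    ]
}

vlan_state = {
    "interfaces": [
        {
            "name": "eth1.100",        # assumed VLAN interface name
            "type": "vlan",
            "state": "up",
            "vlan": {"base-iface": "eth1", "id": 100},
        }
    ]
}
```

Note that neither state says anything about how the host is currently configured; it only declares the outcome the user wants.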
This is an example of the usage of the libnmstate Python library. Here we are gathering the current state, then getting the interfaces, and for each interface we are setting the state to up. Finally, we are applying the state. And that's all: with nine lines of code we can bring up all the interfaces of the system. So libnmstate allows the user to automate everything in a simple way using Python.

Why is the runtime status so important for us? Well, Nmstate must always show the runtime network status. If Nmstate shows an outdated or wrong network status, it will generate a lot of issues, because a user will transfer that network state to another machine, and the new machine will then contain an outdated or wrong network configuration. That could be fatal for the user. In addition, when applying a desired state, a desired network configuration, we verify that the current configuration matches the state that the user requested. So if we are gathering the current state, the current configuration, in a wrong way, that will generate verification failures. And that would be too bad.

So our first option was to use NetworkManager, because NetworkManager is currently the main provider for Nmstate, and we were already using NetworkManager through the PyGObject library to manage the on-disk and in-memory profiles. So we decided it could be a great idea to use it for getting the runtime status too. That way we would not need to add more dependencies or modify the code base much, so everything would be simpler. But NetworkManager provides two different main objects to manage the network configuration, NMSetting and NMDevice. NMSetting manages the profiles, and NMDevice manages the runtime information from the kernel.
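The nine-line example described above could be sketched as follows. The state transformation is factored into a small function so it can be demonstrated on a sample state; the actual `libnmstate.show()` and `libnmstate.apply()` calls, which require libnmstate and root privileges, are only indicated in comments:

```python
# Sketch of the libnmstate usage from the talk: fetch the current state,
# mark every interface as up, and apply the result back to the host.

def bring_all_up(state):
    """Return a desired state that sets every interface to 'up'."""
    return {
        "interfaces": [
            {"name": iface["name"], "state": "up"}
            for iface in state.get("interfaces", [])
        ]
    }

# On a real host this would be:
#   import libnmstate
#   current = libnmstate.show()
#   libnmstate.apply(bring_all_up(current))

# Demonstrated here on a made-up sample state instead:
sample = {"interfaces": [{"name": "lo", "state": "down"},
                         {"name": "eth0", "state": "down"}]}
desired = bring_all_up(sample)
```

The point is how little glue this takes: the library returns plain data, so automation is ordinary Python over dictionaries.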
So we tried to build the whole information from NMDevice objects, but it was not possible, because there were missing objects for some interfaces, or there were objects that were not exposing all the information we needed. And obviously, we could not use the NMSetting objects, because NMSetting represents the profile, and the profile is a state that a user has defined. So if there are any changes on the kernel side, or any changes made by another tool, the profile will not reflect that information, and if we read the information from the setting, it could be outdated.

Okay, so we cannot use NetworkManager to get the runtime network configuration correctly. What could we do? We thought of sysfs. Sysfs is a pseudo filesystem that exports information from multiple Linux kernel subsystems, including the network subsystem. That's nice. Sysfs exposes information for all kinds of interfaces, and that is very good for us. In addition, from Nmstate it's really easy to read the sysfs information, as we can read it like any other file on the system.

What is the problem? Well, we found several issues when working with sysfs. The first one was that some of the exposed information did not match the standards, and that is quite bad, because we would need to create translators between sysfs and the standard representation. In addition, in some cases the information exposed in sysfs depends on the driver. For example, for SR-IOV, the Intel ixgbe driver and the Mellanox mlx4 and mlx5 drivers were exposing the information in different ways. We were not considering implementing different readers depending on the driver, because that would generate a lot of changes in the code base, and a lot of code that maybe would never be used. So we thought about sending a patch to the kernel in case we needed to change, for example, the format of the sysfs information.
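Reading interface attributes from sysfs really is just file I/O, which is what made it attractive. A minimal sketch of that idea, exercised here against a throwaway fake directory tree rather than a live `/sys/class/net` (the interface name and attribute values are invented for the demonstration):

```python
import os
import tempfile

def read_net_attrs(sysfs_root, iface, attrs):
    """Read per-interface attributes the way one would from
    /sys/class/net/<iface>/<attr>; each attribute is a plain text file."""
    out = {}
    for attr in attrs:
        with open(os.path.join(sysfs_root, iface, attr)) as f:
            out[attr] = f.read().strip()
    return out

# Build a fake tree so the sketch runs anywhere; on a real Linux host the
# call would be read_net_attrs("/sys/class/net", "eth0", ["mtu", "operstate"]).
fake_sys = tempfile.mkdtemp()
os.makedirs(os.path.join(fake_sys, "eth0"))
for name, value in [("mtu", "1500\n"), ("operstate", "up\n")]:
    with open(os.path.join(fake_sys, "eth0", name), "w") as f:
        f.write(value)

attrs = read_net_attrs(fake_sys, "eth0", ["mtu", "operstate"])
```

The simplicity is the upside; the downsides Fernando lists next, non-standard and driver-dependent content with no API stability, are why it was abandoned.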
But that is a slow process, and we would need to backport it to older versions, and it could generate problems for other users. So we discarded doing that. We also found another big issue: there can be race conditions or inconsistencies while a transaction is ongoing. And it is important to note that sysfs is not an API, so exposed parameters could be removed or changed with no backward compatibility guarantee. That would be very, very bad for us, because we are an API and we need to provide backward compatibility.

Here we have an example with SR-IOV. The first image is from an ixgbe device, and the second one is from the mlx4 driver. As we can see, the first one is not exposing the information about the number of VFs, the offsets, the strides, the total VFs, the VF devices. A lot of information was missing, and in other cases the information was presented in different ways.

We had run out of ideas. Most of the issues we were having in Nmstate were related to outdated information about the current configuration, and we thought the best option was to use netlink directly. But that implementation is too much code for Nmstate, because accessing the link data with Python is complicated and we would need a few helpers. We did not want to implement, yet again, our own way of communicating with the kernel. But since other projects were each implementing their own way of communicating with the kernel, why was no one doing it in a common way? We thought that could be a great idea. But first of all, let's talk about netlink.

So what is netlink? Netlink is a socket family used as an interface for communication between kernel space and user space processes. All the information about interfaces is exposed through it; indeed, almost all the information about devices in the kernel is exposed through it.
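To give a feel for why every project ends up writing a lot of code for this, here is a minimal rtnetlink dump using only the Python standard library: an `RTM_GETLINK` request that lists the kernel's interface indices. This is an editor's illustration of the protocol, not Nmstate or NISPOR code, and it already needs hand-packed C structs, message framing, and alignment handling:

```python
import socket
import struct

# Constants from <linux/netlink.h> and <linux/rtnetlink.h>
NLM_F_REQUEST, NLM_F_DUMP = 0x01, 0x300   # dump = NLM_F_ROOT | NLM_F_MATCH
NLMSG_ERROR, NLMSG_DONE = 2, 3
RTM_GETLINK, RTM_NEWLINK = 18, 16

def list_link_indices():
    """Dump all network interfaces via rtnetlink and return their indices."""
    s = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, socket.NETLINK_ROUTE)
    try:
        # nlmsghdr (len, type, flags, seq, pid) + ifinfomsg (family, pad,
        # type, index, flags, change): 16 bytes each, 32 bytes total.
        req = struct.pack("=LHHLL", 32, RTM_GETLINK,
                          NLM_F_REQUEST | NLM_F_DUMP, 1, 0)
        req += struct.pack("=BBHiII", socket.AF_UNSPEC, 0, 0, 0, 0, 0)
        s.sendto(req, (0, 0))  # destination pid 0 is the kernel

        indices = []
        while True:
            data = s.recv(65535)
            offset = 0
            while offset < len(data):
                msg_len, msg_type = struct.unpack_from("=LH", data, offset)
                if msg_type == NLMSG_DONE:
                    return indices
                if msg_type == NLMSG_ERROR:
                    raise OSError("netlink error reply")
                if msg_type == RTM_NEWLINK:
                    # ifinfomsg starts right after the 16-byte nlmsghdr
                    _, _, _, index, _, _ = struct.unpack_from(
                        "=BBHiII", data, offset + 16)
                    indices.append(index)
                offset += (msg_len + 3) & ~3  # messages are 4-byte aligned
    finally:
        s.close()

indices = list_link_indices()
```

Extracting the actual attributes (MTU, addresses, VXLAN parameters, and so on) means parsing nested rtattr blobs on top of this, which is exactly the effort NISPOR aims to centralize.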
In addition, netlink uses the Linux kernel locking mechanisms, like RCU, to avoid race conditions and inconsistencies due to ongoing transactions. And it is a stable API, so it maintains backward compatibility: only additions are possible. This is very important for us, because that is what we are doing ourselves. So netlink is our solution; we are sure of this.

And then we got to NISPOR. NISPOR is a native Rust library that provides a unified interface for querying the Linux network state. It provides native Rust, Python, and C APIs, allowing its use from other projects written in different languages. In addition, it provides a command line tool and a varlink interface, so it is possible to use it from Golang or C++ by accessing it through the varlink interface. We noticed that projects like the Ansible Linux System Roles or NetworkManager would benefit from it, as they need to gather the runtime network configuration. This way they could avoid implementing their own netlink communication logic, and they could contribute to NISPOR and reduce the effort of communicating with the kernel.

So this is the demo time. I have prepared several examples I can show you. The first thing I want to show you about NISPOR is the routes. We can get all the route information with this simple command. You can see here, for example, different routes, and for each route we can see the family, the table, the protocol, the scope, the route type, like broadcast or unicast, the destination, the interface, and the preferred source. Let me search for an IPv6 route; I think I have one here. Yeah, this one, for example, this is an IPv6 route.
And as you can see, we are showing the table, the protocol, the scope, the route type and the flags, the destination, the interface, all the cache parameters, the metric, and the preference. Everything is fetched directly from the kernel. That means that if something modifies the route, since we are using netlink we will not hit any issue, and the configuration will always be up to date.

Let me show you, for example, this one. This is the state of one device. As you can see here, this device is down, and we can see the different interface attributes: the iface state, the MTU, the flags, and the MAC address. And I have created a more complex one, this VXLAN. For the VXLAN we have the state down, the MTU, the flags, et cetera, but in addition we have all the VXLAN parameters. So we can see the remote, the VXLAN ID, the base interface, the TTL, the TOS, the ageing, the UDP destination port, whether UDP checksum is enabled or not — all the parameters that are related to the VXLAN interface.

So that's all. The main idea here is that Nmstate, finally, was able to get the runtime network configuration, and other projects are interested in NISPOR too. So we are reducing the effort and working together, and in my opinion, that's the main point of free software. So that's all. Thank you for your attention, and now it's time for questions.

A couple of questions here in chat; I'll quickly read them out for you, and I'm hoping you can answer them live here. So a question which Till Maas has asked is: will there be an Ansible module for NISPOR, maybe as a fact module or a plugin? Thank you for that question, that's a very good question. We have been thinking about this. Our first idea is to use Nmstate in the network role of the Linux System Roles, so this way the network role will be using NISPOR. But I have not thought about creating a module for NISPOR directly, or a fact module or plugin.
So I think the best approach, or the approach we are working on, is using Nmstate from the network role. But we are working on the design of this right now, so it's a mid-term or long-term feature; I would not expect it in the short term. I hope that answers your question.

Perfect, thank you. We have another question, which Edward Berger is asking: what's the status of InfiniBand support? Yeah, this is another good question. We are currently working on this. We have found some issues with what netlink exposes about InfiniBand interfaces, but we are planning to fix that problem we found, and we are working on it right now. So I hope that in the short or mid term we are going to have InfiniBand support in NISPOR, and the same for Nmstate. We are working on it, and we have a draft PR, so that is quite good. Probably not for the next release of Nmstate, but for the one after, we are going to have Nmstate with InfiniBand support. For NISPOR, I think it's going to take a little bit more, but I hope it will be ready soon.

Thank you for that, Fernando. We have another question, from Marcelo Leitner. He asks: how can I list all IP addresses of the system? On a similar note, does it make sense to make a FUSE back end for NISPOR, as a replacement for sysfs, to avoid excessive forking and shell scripts? Well, I don't know if it makes sense to make a FUSE back end for NISPOR. But what we are doing now is providing a native Rust, Python, and C API. So if you are a Python user, you can just use NISPOR to gather all the interfaces, and then in a for loop you can gather all the IP configurations and list them. I think that's the best approach for now. Maybe we can think about something like the feature you are proposing here, where NISPOR directly shows all the IP addresses of the system.
But we would need to relate it somehow to the iface, so as not to show IPs without context. Thank you for the suggestion; I will propose that and talk about it with the team. For now, I think the best option is to use one of the language bindings to filter the IP addresses of each interface and show them as a list, or in your preferred way. I hope that answers your question.

Thank you for that, Fernando, and thank you everybody for asking your questions. Please feel free to carry on the conversation in the breakout room; I just posted a link to the breakout room, where you can ask any more questions that you have. Thank you again, Fernando. Thank you very much. Thank you.
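The per-interface loop Fernando describes for listing all IP addresses could be sketched like this. The extraction runs here on a made-up sample state in the nmstate layout; with NISPOR installed, the interface data would come from its Python API instead:

```python
# Sketch of the suggested approach: gather all interfaces, then collect the
# IP addresses of each one in a loop, keeping the interface as context.
# The sample state is invented for illustration.

def addresses_by_iface(state):
    """Map each interface name to all of its IPv4 and IPv6 addresses."""
    result = {}
    for iface in state.get("interfaces", []):
        addrs = []
        for family in ("ipv4", "ipv6"):
            for addr in iface.get(family, {}).get("address", []):
                addrs.append(addr["ip"])
        result[iface["name"]] = addrs
    return result

sample_state = {"interfaces": [
    {"name": "lo",
     "ipv4": {"address": [{"ip": "127.0.0.1", "prefix-length": 8}]},
     "ipv6": {"address": [{"ip": "::1", "prefix-length": 128}]}},
    {"name": "eth0",
     "ipv4": {"address": [{"ip": "192.0.2.10", "prefix-length": 24}]}},
]}

listing = addresses_by_iface(sample_state)
```

Grouping by interface name is what answers the concern raised above about not showing IPs without context.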