Okay, hi. I'm Tom, and this is David. We work for Red Hat, and today we're going to talk about a new project called nettools, a collection of libraries that implement network configuration discovery. Our focus with this project is to create a suite of low-level libraries that implement the various standards for network configuration, such as DHCP and related protocols. We are not trying to build a full network configuration solution; we're not going to configure IP addresses and so on. Instead, we want to discover the network configuration that should be applied to your machine. These are low-level libraries meant to be integrated into a bigger network configuration solution, such as NetworkManager, systemd-networkd, and so on. We don't really invent anything new here: the standards we implement are old, some from the 80s, and we just want to provide the lowest possible level of abstraction above them in C so that you can work with them easily. It's not about adding heuristics or making any policy decisions at all. Now, the protocols we're implementing are old, and the existing implementations are also old: they were typically written a long time ago, when the world was a very different place. You have tools like dhcpcd and dhclient, and these are typically black-box solutions. You tell dhcpcd to do DHCP on a certain link, and it does everything for you: you start it, it runs, it configures your link, sets up networking, and that's it. You just have an on/off button, and that's all; you cannot interact with it in any other way. That made sense back in the day, when you had a static machine whose network card existed at boot and stayed there the whole time, and you were just doing the standard thing: you wanted one IP address, and that was it.
But these days, DHCP and the other protocols are used, firstly, in more dynamic situations, where things come and go, you have more than one device, and you have all sorts of different setups; and secondly, by newer technologies that want to use the same protocols, such as Wi-Fi Direct, IP over Bluetooth, and so on. They use the same protocols, but in slightly different ways. So it doesn't really work anymore to take the existing black-box solutions, with the assumptions built in on top of the protocols, and apply them in new settings. You really need to work with the protocol itself, so you can tweak it to whatever use case you have. What ended up happening is that lots of network configuration tools, such as ConnMan, BlueZ, networkd, and so on, would bundle their own implementations of the different protocols as internal libraries, because they needed more access to what was going on than a black box would allow them. We worked on this as well in systemd, as part of networkd: we did DHCP and other protocols, and we got requests from people saying, wouldn't it be nice if we could pull this stuff out of systemd, or at least expose the APIs from systemd, so that other people could also use the same implementations. Which is basically what this project is all about: making a publicly reusable API for network configuration. Though it's not as simple as just taking the library that already exists in systemd and exposing it to the outside. That library suffers from the same problem as all the black-box versions did: it comes with all the assumptions we could make because we knew the setting it was used in. It was used in networkd, so whatever assumptions we could make because we knew how networkd works ended up inside the library. So we wanted to make libraries that don't have any assumptions on top of the basic protocols.
So we are not just extracting the libraries that exist; we are reworking the APIs quite a lot, and reworking some of the code as well. Okay, so, as Tom already mentioned, I want to show this with one example. We have a library called n-acd, part of the nettools project, which implements IPv4 address conflict detection (ACD), as defined in RFC 5227. You might not have heard of it: it's basically a technique to detect whether anybody else on the local network is using the same IP address as you. This is useful for debugging network interfaces: if something is going wrong on your network, maybe the problem is that other machines use the same IP address. It was used that way for a long time, but there are several more modern use cases where ACD is also used, and when we developed n-acd, we had to keep all of them in mind. It's not sufficient anymore to have one black box that works in one of these situations; we wanted to open that black box up and give you access to more details of the protocol. For n-acd, there are three common example scenarios. You might be on a network where you have a static IP address: you selected it, you want to use it, you use it on the network. Of course, you want to make sure that nobody else uses the same IP address, because if they do, you get all kinds of routing issues and packet loss on your network, and you want to at least detect that situation. But you cannot react to it automatically; you cannot say, okay, if that IP address is taken, pick another one, because the user explicitly asked for a static address. n-acd needs to be able to deal with that use case. But there's also a different scenario.
You might run DHCP, which is probably the most common option, and get a lease from the server, so you get an assigned IP address. If you detect a conflict there, the DHCP RFC itself mandates that you treat it as a hard conflict: you need to decline the lease and request a new one. That's a completely different reaction to the same conflict detection. And the third use case comes from a fairly recent RFC: IPv4 link-local address configuration (RFC 3927), an idea copied over from IPv6. It allows you to get an IP address on a local link, that is, on a local network, without configuring anything, and it uses ACD as its core technology: it picks a random address based on a heuristic, uses conflict detection to see whether anybody else on the network is using that address, and if so, just picks the next one. So ACD is a crucial part of that protocol, and conflicts there are expected, at least on bigger networks; in the other cases, it's really just used for better diagnostics, for instance when you have a static IP address. We need to keep all of these situations in mind when we develop these libraries, and this is why we build libraries that can be deployed in all of these use cases. One of our crucial rules is to open up the black boxes: not to say, here is a black box with one button, use it, but to explain how ACD works and give you access through the API. You can still use it as a black box (run on this link, tell me when there's a conflict), but you can also interact with it and react to the different events you get. Moreover, we don't only want our APIs to work in all the different use cases David spoke about; we also want to make sure that you can integrate the libraries we're writing into any sort of other software.
We don't want to commit you to some specific library that you need to use, some event loop or whatever else. Whether you're using NetworkManager with GLib, or networkd with sd-event, or anything else, the library should still work and integrate nicely; basically, there should be no reason to use anything else. That's one of our aims. The way we achieve that is by not depending on any external event loop, using the kernel's epoll file-descriptor API directly, and keeping things as low-level as possible so you can integrate them nicely wherever you want. It's worth mentioning that the n-acd library David spoke about has been integrated into NetworkManager and will be part of the next release, as far as I know, if it isn't already. And we are of course keeping networkd in mind, which is what we worked on before, so we want to integrate it there as well in the future. We have also integrated n-acd into another of our own libraries, which uses no external event-loop library at all; it is just a small wrapper on top of n-acd. IPv4 link-local, as David mentioned, is a way of picking an IP address for yourself without any external configuration at all. It relies crucially on ACD; in fact, this is where ACD first originated. The idea is basically that the IPv4 link-local library grabs an address at random, runs ACD on it, and if the address turns out to be already in use, tries another one until it finds one that works, and gives that one to you. This shows that we can integrate our own libraries easily into each other. All of that is sort of expected and straightforward, but we also had one last use case in mind, and it concerns the one nice thing about the old tools.
With the black boxes, you would just fork off a binary to do whatever setup you wanted, and you got some isolation for free, which is typically a good thing when you're doing networking: you want your network-facing processes not to be in the same address space as everything else. If you're just using a library, you lose that, because the library now runs in the context of, say, NetworkManager, and a bug in the DHCP library could be exploited to compromise all of NetworkManager, or things like that. So, in order to get back the isolation that the black box allowed, we design all our APIs so that they could easily and naturally be exposed over IPC. At least in principle, you should be able to make a binary out of the API we have and expose exactly the same API over Varlink, or D-Bus, or any other IPC protocol, and on the remote side the API should work basically the same, whether you use it remotely or in-process. That basically means our APIs are designed to be asynchronous, as message passing, even when everything runs in the same process. So, if we're trying to have universal APIs, if we're opening up the black boxes and giving direct access to the underlying protocols, the question becomes: what do we actually provide? Does that mean you have to understand all the underlying RFCs to make use of this? This is where the last part of this talk comes in. What we do provide is an integration of all these RFCs with Linux as it works today. A lot of these protocols were developed in the 80s and 90s and implemented back then, under completely different constraints and with different assumptions. Those assumptions were right at the time, but they may no longer apply today.
So, when we rewrote or extracted most of these things, we took the opportunity to look at how they apply today and whether there are new technologies we can use, and in all of our libraries we made sure to use the most modern Linux kernel features in the best possible way. I want to explain this with one example, the DHCP4 library, which is also part of the nettools project and implements what is probably the most common protocol we're talking about today, the Dynamic Host Configuration Protocol. Back when we implemented this in systemd, after it was deployed, we got a report from somebody who ran it on their production system, and he told us he got a 30% increase in overall network performance. We were surprised, because why would the way you configure your network give you an overall performance increase on all packets sent over it? As it turned out, the DHCP client of course runs the entire time the network is up, because it needs to react when leases expire or get revoked. And it turns out that if you keep a kernel packet socket open, even with a filter attached, that filter has to run against the traffic, and this is actually quite slow compared to not having the packet socket at all. What we made sure of in our library, without even knowing this might turn out to be an issue, is that we always use the most appropriate, most high-level feature the kernel gives us; for instance, a raw packet socket only while it is strictly needed, before an address is configured, and an ordinary UDP socket otherwise. This is one of the examples of how we try to adapt these old protocols to how Linux works today, and to make use of features like BPF filters and the protocol-specific socket types the kernel provides.
And we spent quite a lot of time just reading how the kernel actually implements specific things, to make sure we don't have race conditions and so on. This makes us specific to Linux; we can no longer run on other systems, but at the same time it gives us a lot of benefits and makes these things a lot easier to use. To summarize, and maybe give a future outlook: the nettools project doesn't invent any new protocols. DHCP is not a revolution; it existed before. What we're trying to do comes from our own experience: we repeatedly had to implement DHCP for different use cases as they popped up, and we were always annoyed by how hard it is to adapt these implementations to new use cases. So with nettools we try to stay as close as possible to the RFCs and not place any assumptions of our own, so we don't restrict the user. That, of course, means it is a bit more difficult to use, because it's not just a button you press; but in exchange you can use it in many use cases, and there are many new situations where, for instance, DHCP pops up these days. There's IP over Bluetooth, where at any point in time a new interface might appear on your system and you need to run DHCP on it. There's the Wi-Fi peer-to-peer specification, which allows you to create one-to-one connections between devices but requires DHCP to configure them, so you need to dynamically create a DHCP server and a DHCP client just for one interaction. We had all these use cases in mind when we created these libraries.
And as a future outlook, there is a new specification effort by the IETF called Homenet. It tries to pull together all the old protocols, like DHCP and ACD, along with the related protocols for IPv6, and to define how a home network (by which they basically mean the network you have in a private home) should look: how the protocols should interact, what to do, and which things may no longer be relevant today. They also define some new configuration protocols, and we'd like the nettools project to eventually move in that direction and become a full, complete implementation of this specification. The last time I looked, the specification was still a draft, but if anybody is interested, I really recommend looking at it. Yes, I think that's it as an overview of nettools; if you have any questions, feel free to ask. Audience question: and the core implementation is in C, or something else? Yes, we started this project in C and have continued that way so far, because most of our users are C users. In particular, we work with the NetworkManager people to make sure they can use it, and we never place any assumption that wouldn't work for them. We have no plans so far to change this. One consideration is that a lot of what we do is really low-level API work where we need to interact directly with the kernel, and the easiest way to do that, so far, is C. That might not stay true forever, but so far we make all of this available as C libraries. Audience question: do you see it spreading with other bindings, or is the goal to keep it low-level?
What we have experimented with ourselves is providing, as Tom described earlier, IPC APIs for the same libraries. We have, for instance, experimental D-Bus wrappers that expose these APIs over D-Bus, so you can fork off your own process, use a private D-Bus connection to talk to it, and get the same APIs; other bindings are then possible as well. We have no stability guarantee so far for any of these experimental features, only for the C API. Audience question: maybe a question from me: can you show where this project lives, on GitHub or somewhere, so that interested people can have a look? Sorry, I thought that was the last slide. Yes, right now it's on GitHub, and the different projects are repositories there. There's also a mailing list, nettools-devel on Google Groups, where we post announcements whenever a new release is out, where we discuss things, and where everybody is welcome to ask questions or ask us how it could be integrated with different projects. It's all in the READMEs of the different projects as well. So thanks, everyone; a round of applause for the two speakers. Thanks a lot for the introduction.