Hello, my name is Edward, and we come from different teams in Red Hat. I'm from oVirt, the team that created oVirt; I'm in the network services team. We want to tell you about a challenge we faced while working on oVirt. We needed to configure Linux networking, and we already do it today, but we do it using ifcfg files. Since ifcfg files are slowly dying and are no longer actively maintained, we wanted to move to something more current, which today mainly means NetworkManager, and to add all kinds of new features we were missing. When we started working on that, we found out that many projects, some of them represented here, actually need the same thing: they also need to configure the networking part of Linux, and everyone is doing it very differently. oVirt uses ifcfg files, the `ip` tool, netlink, and all kinds of other options. OpenStack also uses ifcfg files in many places, if I'm not mistaken. As you can see, everyone is doing something else. It became obvious that if we start working with NetworkManager and invest in the logic of how to do it, it makes sense to build something that can serve everyone. Our point is that this is very complex; we want to simplify it for ourselves and for everyone else, and hopefully others will also contribute to our work. I'll try to show you what we mean by this NMState solution. This is how you configure a bond with slaves and an IP address using NetworkManager and nmcli. This is how you configure it using the `ip` tool. And this is how you configure it with ifcfg files: one file per interface, so one file for the bond and two files for the slaves. And this is how you configure it using NMState. So this is the project itself.
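To illustrate that last comparison, here is roughly what such a declarative bond state could look like, expressed as a plain Python dictionary (the same structure serializes directly to YAML or JSON). The key names here only approximate the nmstate schema; they are an illustration, not a verbatim copy of it.

```python
# Illustrative declarative state for a bond with two slaves and a static
# IPv4 address. Key names approximate the nmstate schema for the sake of
# the example.
desired_state = {
    "interfaces": [
        {
            "name": "bond0",
            "type": "bond",
            "state": "up",
            "link-aggregation": {
                "mode": "balance-rr",
                "slaves": ["eth1", "eth2"],
            },
            "ipv4": {
                "enabled": True,
                "address": [{"ip": "192.0.2.10", "prefix-length": 24}],
            },
        }
    ]
}

# A single structure like this replaces an nmcli invocation, several
# `ip` commands, or three separate ifcfg files.
bond = desired_state["interfaces"][0]
print(bond["name"])  # → bond0
```

This is the point of the comparison on the slide: one declarative document instead of three tool-specific configuration fragments.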
So our intention, and what we have created so far, is to have one layer, and hopefully every one of these projects and many others (at least these are the ones we currently focus on) can use this declarative way to configure Linux networking. That layer will use libnm, NetworkManager, and whatever else is needed to configure Linux. Now let's get to the design of NMState. Earlier we saw that, for example, oVirt was using a lot of different tools, and this was because it needs the complete Linux network state, not only part of it. One idea of NMState is to capture everything that's important about networking on a Linux host in one central place. And this state should not be used only to configure the networking; you can use the same format for reporting. For example, if you have a system configured the way you want and you want to replicate it, you can just get a report of the current state, change maybe just the network card names or MAC addresses, and then apply the state to a different machine and get the same result. We want this state definition to be declarative, meaning it can be represented as JSON, YAML, or a Python dictionary with really simple values. This allows other tools that also need some kind of declarative interface to build on top of it: all the logic can live in NMState, and you only need to map the schema we use in NMState to the schema you use elsewhere. Maybe we will even adjust the NMState schema, because we are not really interested in defining a schema that everyone has to use; we want to make it easy to have some kind of declarative schema, and we just had to choose something to start with.
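The report-then-replicate workflow described above can be sketched in a few lines: take the reported state, rewrite the machine-specific parts (here just the interface names, via a hypothetical helper), and the result is ready to apply on another machine. The `{"interfaces": [...]}` layout is again an illustrative approximation of the schema.

```python
import copy

def retarget_state(reported_state, name_map):
    """Rewrite interface names in a reported state so it can be applied
    to a different machine whose NICs are named differently. This helper
    is a sketch for the talk's replication example, not part of nmstate."""
    state = copy.deepcopy(reported_state)
    for iface in state["interfaces"]:
        iface["name"] = name_map.get(iface["name"], iface["name"])
    return state

# State as it might be reported from the source machine.
reported = {"interfaces": [{"name": "eth0", "type": "ethernet",
                            "ipv4": {"enabled": True}}]}

# Retarget it for a machine where the card shows up as ens3.
cloned = retarget_state(reported, {"eth0": "ens3"})
print(cloned["interfaces"][0]["name"])  # → ens3
```

Everything except the name mapping stays untouched, which is exactly why a declarative report doubles as a configuration input.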
The design itself is inspired by an industry standard for network devices, NETCONF and YANG, where there is a unified way to define the network configuration of these devices, and there are a lot of RFCs about it. It's all XML, though, so we didn't want to start with XML; we preferred JSON or YAML, which would still be easy to map to XML from our declarative format if you wanted to add that. Another key feature that was important is atomic changes. You know this from network devices, where you can check the configuration and then apply it, or collect the changes, check them, apply them, and roll back if needed. With Linux networking you don't have this possibility today. For example, if you need to move the configuration from a single interface into a bridge or a bond device and it fails in between, you might end up in a state where you cannot access the machine anymore, because you need the IP address either on the single interface or on the bridge interface. We want to make sure that when you define a state, we verify whether it was applied correctly, and if not, we roll back to the previous state, so you can rely on having either the one state or the other. The design is currently based on NetworkManager, because it's the best solution today for managing all the different networking aspects. But of course it cannot handle everything; there may be edge cases, for example vendors that provide proprietary solutions that customers would like to use. Therefore we are also open to adding other back ends for features that cannot be added to NetworkManager. It might also be easier to add them to NMState now, because it's still a young project, and eventually they can end up in NetworkManager if it makes sense to implement them for a wider audience.
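The atomic-change guarantee described above boils down to a simple pattern: apply, verify, and roll back on failure. The sketch below uses stubbed-out backend functions to show the shape of that pattern; it is not NMState's actual implementation, which builds on NetworkManager's rollback support mentioned later in the talk.

```python
def apply_atomically(current_state, desired_state, apply_fn, verify_fn):
    """Apply a desired state, verify the outcome, and roll back to the
    previous state on failure, so the machine ends up in either the old
    state or the new one, never somewhere in between. `apply_fn` and
    `verify_fn` are placeholders for real backend operations."""
    apply_fn(desired_state)
    if verify_fn(desired_state):
        return desired_state      # commit: the new state verified fine
    apply_fn(current_state)       # rollback to the known-good state
    return current_state

# Toy backend: the "system" is just a dictionary.
system = {}

def fake_apply(state):
    system.clear()
    system.update(state)

def fake_verify(state):
    return state.get("valid", False)  # stand-in for real verification

old = {"iface": "eth0", "valid": True}
fake_apply(old)

# Try to apply a broken state: verification fails, so we roll back.
result = apply_atomically(old, {"iface": "br0", "valid": False},
                          fake_apply, fake_verify)
print(result["iface"])  # → eth0 (rolled back)
```

The important property is the return value: the caller always knows which of the two well-defined states the machine is in.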
Currently we already support the basic settings, which makes it easy to figure out whether this concept makes sense for someone: Ethernet devices, of course, with both static and dynamic IPv4 and IPv6 configuration, and the basic aggregation types like bonding, Linux bridges, and even initial OVS bridge support. We are mostly using what NetworkManager already provides, and we focus on the other aspects, like configuration verification, to ensure that everything works well together. There is of course a command-line interface that makes it easy to show what we are working on, to test it and demonstrate it. It's not the final interface that everyone else needs to use, but we can use it here to show what a state looks like. Here you have the reporting: you get the current state of eth0, and then you can apply it to a different machine, changing only, for example, the interface name. If you want to use the API, it's also very simple, basically the same methods: you have a method to get the current state and an apply method to apply a state. For example, if you just want to change the MTU, you take the current state, change the MTU value, and apply it; afterwards the MTU is changed. As a human you can easily read the JSON configuration of the current host system; in my opinion it's very self-explanatory, you just see what you need to do, and you don't have to dig through different tools. On the Ansible networking side there is also a standard where they agreed on a common interface across several network modules, and a lot of vendors provide support for their network devices, like Cisco, Juniper and I think dozens of others, but there is nothing yet to configure Linux host networking the same way. And because with NMState we already have the possibility to map the declarative state to the Linux host networking state, it's also very easy to implement
these modules, so we also started implementing them. They just provide the mapping from the configuration you can see here on the left-hand side: this is how you would configure link aggregation, and it works across all kinds of different network devices. With NMState we have a module that simply translates this to the NMState state representation, and then you can apply it as well. From the user's perspective it just looks like this, and it's very easy for a programmer to implement, even for other schemas if you like. We also talked about verification and rollback, and that's something I would like to demonstrate now to show the current state of NMState, with a simple example. Here we have one machine, and on the right-hand side you see the current network configuration as it's stored on disk. I have an example state file that configures a bond interface called WebBond with two slaves, eth1 and eth2, and also a VLAN on top of it. I can apply this with `nmstatectl set`; there's a lot of debug output, and then on the right-hand side you see the configuration files on disk. You can also run nmcli to list the current connections, and you see that the bond interface was created successfully, and also the VLAN on top of it. There's also the edit command, so I don't have to edit everything: I can edit just the WebBond interfaces. Now I want to reset the system so I can afterwards demonstrate the rollback, so I just set `state: absent` for both the WebBond interface and the VLAN interface, run it, and you see the configuration is gone, and the interfaces are gone from the nmcli output as well. Now I will use this state file again, copy it and, oh, it's already there. The idea is now to create a state that will fail. To artificially create a failure and show you the rollback, I will just add something like fail2, an invalid value, something that NMState cannot really find in
the resulting output, because there is no transition that would make it report fail2. This means that when I now apply this, for a brief moment the interface is created, but then NMState notices that the outcome state doesn't have this property, because nothing maps to it, and it rolls back to the previous state. Now I will hand over again to Eddie, who will show us another feature we implemented with NMState. Is anyone here familiar with Kubernetes? While working on this, we found out that it is also useful for allowing Kubernetes, or an extension of Kubernetes that we hope will be accepted, to configure the networking on the nodes. Today Kubernetes is not interested in managing node networking: it doesn't care whether you have a bond there, or a bridge, or whatever; it just wants connectivity to exist, and it relies on someone else to provide it. So this is a suggestion that was presented to allow defining the node networking through Kubernetes itself, through its API, as an extension of it. If you are interested, you are welcome to follow that link after our talk, but what we are doing there is defining two schemas: one is called the node network configuration policy, and the second one is the node network state. The policy has a match, for example on the nodes and on the current state of the nodes, and then applies a snippet; the policy generates states for the nodes, and that state for each node is then applied, as in our earlier examples. I wrote a small example here: in the policy we can say that on every node in the cluster, on every SR-IOV interface, we want to define eight virtual functions. It will go to each of the nodes, read the state of that node, understand which interfaces are SR-IOV, and update the desired state there to eight. In this case, for the first node, eth0 and eth1 are SR-IOV, so it defines eight there, and so on. This is the main idea; we
hope that it will be accepted. It becomes mainly interesting once containers start running on bare metal; today they mostly run on virtual machines, but then it will be very important to have something like this. This is an example of what the CRD looks like: the beginning is some Kubernetes boilerplate, and the rest is very similar to what we saw earlier; it maps almost identically to the NMState schema. We found several challenges in working with declarative states that we did not expect at the beginning. One is that when you change something, it sometimes has implications for other parameters. For example, when you enable DHCP for IPv4 and it obtains an IP address, the actual current state will contain that address, so how do you check that the desired state matches the current state? There is also link auto-negotiation: you may say you want auto-negotiation with a speed of 10 Gbit, and maybe it negotiated 1 Gbit, something like that. This needs to be resolved specifically for each case we work on, and there are many debates on how to handle it; it's a continuous challenge we see every time we add something new. The other challenge is removal: you only have the state of the actual interface configuration, but no way to explicitly say, in a declarative way, "just remove this IP address", except to get all the current IP addresses, remove the one you don't want, and then declare that you want all the others. So there are some limitations in how you can express things declaratively, or you have to make some compromises. I guess I can see you are very excited about this and just want us to shut up and take your money, but it's free software, so you can just get it on nmstate.io. It's currently packaged in Fedora and it's
available in EPEL testing, and we also have a Copr repository where you get automatic rebuilds after every commit to master. We also have CI that gates all the pull requests. It's available on PyPI as well, so you can pip install it, or you can run it from the GitHub repository, and we're also working on supplying container images, so you can just run a container, play inside it with NetworkManager and nmstatectl, and see if it suits your needs. If you would like to participate in the development, everything happens in the open: development is on GitHub, where we have the pull requests and the source code. We also do some planning on JIRA; the Atlassian.net instance is the upstream free JIRA, where you still need an account but no extra permissions to access it. We currently discuss topics on the NetworkManager mailing list, because it mainly affects things in NetworkManager, and there's an IRC channel, #nmstate on Freenode. So what are we planning to develop in the future?
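Before the questions, here is a sketch of the verification challenge described earlier: DHCP puts an address into the current state that the desired state never mentioned, so a naive equality check would fail. One way to think about it is a subset match that treats volatile keys specially. This illustrates the problem only; it is not nmstate's actual verification algorithm, and the key names are illustrative.

```python
def state_matches(desired, current, volatile=("address",)):
    """Check that everything the user asked for is present in the
    current state. Keys listed in `volatile` (e.g. addresses obtained
    via DHCP) are only checked for presence, not for an exact value;
    keys the user never mentioned are ignored entirely."""
    for key, want in desired.items():
        if key in volatile:
            if key not in current:
                return False
        elif isinstance(want, dict):
            if not state_matches(want, current.get(key, {}), volatile):
                return False
        elif current.get(key) != want:
            return False
    return True

desired = {"ipv4": {"enabled": True, "dhcp": True}}
# The live system gained an address the user never specified.
current = {"ipv4": {"enabled": True, "dhcp": True,
                    "address": [{"ip": "192.0.2.7", "prefix-length": 24}]}}
print(state_matches(desired, current))  # → True
```

Auto-negotiated link speed is the same shape of problem: the requested value ("auto, up to 10 Gbit") and the observed value ("1 Gbit") differ, and the comparison rule has to know which differences are acceptable.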
So currently under review is routing support, which is also a bit of a challenge, and a few other things. But we also want your feedback: if there's something you would like to have that you think is important, just speak up now, or ask your other questions, and you'll get some candy for participating. [Audience question about support for Debian-based distributions.] So the question was about Debian support. Basically we are building on top of NetworkManager, so the problem with Debian might be that NetworkManager is too old if you are on stable Debian, but everything that works with NetworkManager will work with NMState as well, because NetworkManager is our abstraction layer. [Audience question about running without NetworkManager on embedded systems.] Sorry, let me repeat the question. He is asking whether the backend, what we call a provider, that implements the declarative way could be implemented not with NetworkManager but with something else, maybe something low-level like netlink, for an embedded system for example, where NetworkManager may be too costly. So yes: if someone comes and says this is very important and it makes sense, they can join the effort and create a provider implementing it through something else. We are going to make the provider part pluggable. Currently NetworkManager is the main focus, but we will reverse the dependency so that a provider just plugs in and implements the schema. We are also working closely with the NetworkManager team, so if there are specific concerns about resource consumption, maybe they can be addressed in NetworkManager directly, and then NetworkManager will also suit the use case of low-power devices. [Audience comment, partly inaudible.] Yeah, that's true. But we can try a few; if we are interested we can do a small
demo to do something specific. We are using a lot of features from NetworkManager, like the rollback portion, which they provide and we use; the persistency is also done through NetworkManager, but that doesn't mean we cannot do it without it. [Audience question: it's written in Python; do you have to run Python at all, and how do you run it?] That is also real: we were talking with the CoreOS guys, and they were asking about using it at bootstrap; for them it's a problem to use Python, so one of the suggestions was to rewrite it in something more compact like Go. There is not enough workforce for that at the moment, but it's not something we think is impossible. [Audience comment about network equipment.] There are already tens of network configuration schemas for network equipment from different vendors, and in the current state it's impossible to use just one declarative way to configure, for example, a network switch so that it works for Arista, for Cisco, and for some others. But that actually exists currently. Yes: for example, link aggregation is possible, and defining an IP address is possible; you just need to say what is on the other side, so it's not purely declarative, you have to explicitly tell it that it's Cisco. Maybe we can talk just after the presentation, because we are out of time now. Okay, thank you very much, and we hope everyone enjoyed the presentation. Thank you.