Good morning. I'm Pavel Cahyna, the lead developer of the Linux System Roles project, and my colleague Till Maas will join me later in the presentation to show some examples. So: make system administration boring again. Why boring? Well, there is supposed to be a Chinese curse, "may you live in interesting times", which tells us that interesting is not always a good thing. And what can make a system administrator's life interesting in the not-so-positive sense? There are obviously many things, but I will focus on one in this talk: changes in system interfaces. Administrative interfaces are in general less stable than, for instance, APIs. If you look at various Unix systems, or different Linux distributions, or even different versions of the same Linux distribution, you will find that you have quite a similar user experience: you have mostly the same applications, which behave mostly the same way. The administrator experience, on the other hand, will be very different. This of course has a negative impact on the administrator's life, because even when we upgrade from one major version to another major version of the same distro, our automation, like shell scripts, will break, and our habits will no longer be relevant. In general it breaks our workflows, sometimes in not so obvious ways, as in this example. So, in general, we believe that Linux needs simple management, because administrators perceive Linux as too hard to manage, in part because of the interface instability I highlighted on the previous slide. We also need automated management, and notice that we already have configuration management tools like Ansible.
If you are more interested in an interactive experience and less in automation, there's a tool called Cockpit, and there was a presentation just before mine about the Cockpit project. Anyway, how can these automated management systems help with the interface evolution problem? I believe they are a great way to address it, because in Ansible you declare the desired system state: instead of writing shell scripts, you declare it using playbooks. But this is just the beginning, of course, because playbooks can break if the interfaces change, just as shell scripts can. So how to address this problem? Ansible already provides some mechanisms. For instance, Ansible has modules and action plugins which can abstract away the differences: the package module can abstract the differences among the actual package management implementations like yum or apt, and there's a service module which provides an abstraction over various init systems like systemd or SysV init scripts. But those are very targeted modules, by design, because modules in Ansible are targeted at one specific case; for more general cases, for more extensive systems, in Ansible we have roles. So we decided to address the problem using Ansible roles, and we created a project called Linux System Roles; in RHEL we call it System Roles. It is a collection of Ansible roles which provide a stable and consistent configuration interface to supported versions of operating systems, which now include RHEL, CentOS and Fedora. They are designed to manage various subsystems of those operating systems, with a consistent experience: you write the playbook once and it will run against multiple versions. The subsystems that we manage now are time synchronization, kdump, network, postfix and SELinux.
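As a rough sketch of the kind of abstraction these modules give (the particular package and service names here are just illustrative):

```yaml
# One play that works across distributions: the package module picks the
# right back-end (yum, dnf, apt, ...) and the service module the right
# init system (systemd, SysV init scripts, ...).
- hosts: all
  tasks:
    - name: Install chrony regardless of the package manager
      package:
        name: chrony
        state: present
    - name: Start chronyd regardless of the init system
      service:
        name: chronyd
        state: started
        enabled: yes
```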
We have a beta version of the storage role, and we have a logging role in progress; in the future we may add roles like firewall or hardware management. So how do we actually do it? In some cases the interfaces don't evolve that much, as in the case of SELinux, where there may be just changes in the names of packages. This is quite easy to handle. But in some cases we have more drastic changes, and for those we have the concept of providers. Providers are different implementations of the same functionality: for instance, for time synchronization with the NTP protocol we have two implementations, ntpd and chrony, and in RHEL we replaced ntpd with chrony. Of course this means that the configuration file format and the names of the utilities have changed, and this is a perfect example of how we can handle such a change: we have a timesync role which has two providers, ntpd and chrony. Another similar case is network configuration: in old versions of RHEL we had the initscripts system, where configuration files were located in /etc and edited directly; in new versions we have NetworkManager. Again, two providers for the network role. In the case of logging we have just the rsyslog provider, but there are multiple logging daemons, so we may add support for fluentd, for instance, if there is interest. The point of those providers is that different providers of the same role have a common interface, or at least a common subset of functionality. It means that you write a playbook using this common interface, and you don't have to worry which provider is actually being used; the role chooses the appropriate provider for the given operating system version. So now some examples. Some simple examples first: SELinux. Let's say you need to configure some SELinux booleans and some SELinux ports.
So those are the variables for the SELinux role to do this, and then you execute the SELinux role. There are two configuration variables, one for booleans and one for ports: selinux_booleans_purge and selinux_ports_purge. If they are no, the modifications are applied on top of existing modifications; if they are yes, previous modifications are dropped and only these modifications will then exist on the system, so you return to a clean state and apply those. And this is actually quite an important point: with purge no we declare changes to the system, and with purge yes we declare a complete state in our playbook. Why do the purge variables default to no? What would happen if they defaulted to yes? Let's imagine, for the sake of example, that we want to configure a file server serving files using Samba, so we write a playbook, and in the playbook we need to switch some SELinux boolean on, so we put it into the playbook and execute the SELinux role. Then we want to serve some files over NFS, so again we write a playbook for the NFS configuration and switch some SELinux boolean on in that playbook. If we had purge yes in those playbooks and we applied both playbooks against the same host — which is not an unlikely scenario, that we would use NFS and Samba on the same file server — the second playbook would just clobber the modifications from the first playbook, which is what we don't want. That's why the purge variables default to no. Now, this was a very simple example, and we support much more sophisticated configuration, so I will invite my colleague Till Maas, the maintainer of the network role, which is actually our most complex role, and he will show you some examples and a demo. — Thank you very much, Pavel.
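A playbook along the lines just described might look like the following sketch (the variable names follow the linux-system-roles.selinux interface; the particular boolean and port are only examples):

```yaml
- hosts: all
  vars:
    # Applied on top of existing modifications, because the purge
    # variables default to "no"
    selinux_booleans:
      - { name: samba_enable_home_dirs, state: 'on', persistent: 'yes' }
    selinux_ports:
      - { ports: 3030, proto: tcp, setype: http_port_t, state: present }
    # Set these to "yes" to drop previous modifications first
    selinux_booleans_purge: no
    selinux_ports_purge: no
  roles:
    - linux-system-roles.selinux
```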
So this is just an example of the configuration for the network role, and it all happens in one variable called network_connections. The important thing here is that instead of configuring interfaces, it's about configuring connection profiles. This could mean, for example: at the beginning there's the connection eth0, with just a name, and the interface name then defaults to being the same as the connection name. But in this other case the profile is called web-bond-link-a, and then you have to specify what the interface name is. This makes it more powerful and allows several configuration possibilities. One other important thing is that it abstracts the runtime state versus the persistent state. This means you can specify whether or not the connection profile is active independently of whether or not it's on disk. For example, if you would like to remove a profile from your system, it might be that you still need it up until you reboot, and then you reboot into a clean new configuration; therefore you might not want the profile to go down when you just remove it from disk. This is something that is only possible if you have more than one state, because otherwise you cannot express all these combinations. The role itself supports all the important configuration options: Ethernet interfaces with IP configuration — DHCP, static, and DNS settings — and of course virtual interface types like bonds, VLANs and bridges; there is also support for InfiniBand and macvlan. You're probably wondering what this actually configures, so as the next step I will show you what happens when I apply this configuration. Above all, the important thing is that it works on all RHEL releases at the same time.
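For illustration, a network_connections variable along these lines might look like this sketch (the profile names are taken from the talk; the option names follow the network role's documented interface):

```yaml
network_connections:
  # Name only: interface_name defaults to the profile name, "eth0"
  - name: eth0
    type: ethernet
    ip:
      dhcp4: yes

  # Profile name differs from the device, so interface_name is explicit;
  # state (runtime) and persistent_state (on disk) are independent
  - name: web-bond-link-a
    type: ethernet
    interface_name: eth1
    state: up
    persistent_state: present
```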
That would currently be RHEL 6, 7, and 8 Beta. The idea here is to configure DHCP on eth0, for example as a management interface, and at the same time have a bond interface called web-bond which is used for the application. For the bond we also need to specify which members are part of it: that's eth1 and eth2, and their configuration is basically the same except for the interface name. Another possibility that the network role provides is to ensure that only the configuration you specify in your play is the configuration that's on the system, so you can really rely on having the whole network configuration in your configuration management system, and the system still works after you apply this role. That's the last item in the list: persistent_state absent. It doesn't state a name for the connection profile, so it implies that everything else that's not specified will be removed. And now I will show you a short live demo. So, this is a Fedora host with the system roles installed.
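The demo configuration just described could be sketched like this (the names come from the talk; the master/slave_type spelling follows the role's interface of that era):

```yaml
network_connections:
  # Management interface via DHCP
  - name: eth0
    type: ethernet
    ip:
      dhcp4: yes

  # Bond used for the application
  - name: web-bond
    type: bond

  # The two bond members differ only in the interface name
  - name: eth1
    type: ethernet
    master: web-bond
    slave_type: bond
  - name: eth2
    type: ethernet
    master: web-bond
    slave_type: bond

  # Catch-all last item: any profile not listed above is removed
  - persistent_state: absent
```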
I will just run the playbook, and it will run on all three machines, RHEL 6, 7, and 8. It will figure out what it needs to do, because, for example, the RHEL 6 system uses initscripts as a back-end and the other systems use NetworkManager, and it applies the different state. At the end we see that all three systems were configured successfully, and you can also see that it used initscripts for RHEL 6 and NetworkManager for 7 and 8. You can take a quick look, for example, at the RHEL 7 machine and see that the web-bond was created successfully with both slave interfaces. Now I will hand it back over to Pavel, who will show us the challenges that we experienced and that might make our life interesting. — So your life can be more boring! Thank you, Till. Now, some challenges that we encountered. You may ask whether we are not just creating yet another standard to add to the already long list of standards — the infamous fifteenth standard. To some extent this is of course true, but when we write Ansible automation we are always doing this, right? We always specify some variables for our roles or playbooks, which are a kind of new standard. So it's a great opportunity to actually create a more abstract standard which will cover more versions of operating systems and shield us from the changes. There's a more serious challenge, though. For instance, postfix: you may have noticed that we have a postfix role, and you may have asked why postfix and not email, because it would make great sense to have an email role with multiple providers like postfix and sendmail, maybe Exim or qmail. We were thinking about this, but we found that the concepts are just too different, and it would be hard to do an email role with abstract concepts that would then translate to the postfix or sendmail concepts.
So in some cases it is just not practical to do an abstraction, because the underlying systems are too divergent; of course, there is always a limitation to this approach. Another challenge was the granularity of changes. I explained in the SELinux example that we have the ability to just add to the previous modifications, or to purge them. But this is not always the case; the SELinux example was kind of a perfect example of this. For instance, in the network role, one always must specify a complete connection: one cannot add or remove IP addresses, or change link settings while keeping the IP configuration intact, except for taking the connection down or up. This may be a limitation in some cases, so we are thinking of possibly extending this model, in sync with Ansible Networking, which is network management for network devices like switches. Actually, today at 1 p.m. there will be a presentation about NetworkManager and the Nmstate project, which provides an interface to NetworkManager that is compatible with Ansible Networking and which allows doing those incremental updates to the network configuration. In the case of timesync and kdump, we always replace the complete configuration, except for preserving the previous provider, because timesync detects the currently running provider and keeps it; we just thought it would not be practical to, for instance, add new NTP servers to the existing list of servers, and so on. On the other hand, the storage role will have finer granularity, of course, because one doesn't usually want to remove and recreate a volume with a file system just to change one attribute. So this one will have finer granularity of changes.
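As a sketch of that whole-configuration model in the timesync role, a play supplies a complete server list (the variable names follow the linux-system-roles.timesync interface; the pool hostnames are placeholders):

```yaml
# Running this replaces the full NTP server configuration rather than
# appending to an existing server list; the detected provider (ntpd or
# chrony) is kept.
- hosts: all
  vars:
    timesync_ntp_servers:
      - hostname: 0.pool.ntp.org
        iburst: yes
      - hostname: 1.pool.ntp.org
        iburst: yes
  roles:
    - linux-system-roles.timesync
```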
So we took a pragmatic approach, and the granularity of changes corresponds to the use case that we are addressing. In conclusion, I would say that administrators' lives will of course never be completely boring; there will always be interesting things, both in the positive and the negative sense. Our work, at least I think, can help with addressing one source of negative surprises: those changes to the system interfaces. Especially if you are using Ansible anyway, give our roles a try if they address your scenarios, because they provide a simple interface to the system as well as abstracting away those differences. They are fully supported on RHEL 7.6 and later, except for the postfix role, which we still consider tech preview, and they are available as an RPM. Users of other systems can use Ansible Galaxy to install the roles, and they will soon also be available as an RPM in Fedora. This is our home page, with links to the GitHub project page and the Ansible Galaxy page. Thank you for your attention, and now it's time for questions.