So my name is Rich Megginson. I'm the tech lead for the Linux System Roles project, and I'm here today to talk to you about how you can use Linux System Roles to manage standard operating environments. The problem I want to talk about is: how do you manage a lab which has a diverse mix of machines and frequently changing requirements? I need to manage and enforce a standard operating environment (SOE) across all of my machines. I need to keep up to date with security requirements and industry best practices, and I need to automate as much of this as possible.

In case you're not familiar with Ansible, it's a tool for automating software management using an SSH-based protocol and simple YAML configuration files. It has a large library of support for many different software packages and devices, and it supports a wide variety of platforms.

Linux System Roles are a set of Ansible roles used to manage operating system components. They provide a consistent interface to those components on different platforms and OS versions, and they abstract the configuration interface from the implementation. For example, the network role can be used to manage network settings with both the initscripts method and the NetworkManager method. Similarly, the timesync role can be used to manage both NTP and chrony. In most cases the maintainers of the packages in Fedora and RHEL, and in some cases the upstream maintainers of those components, also contribute to the roles. That way we can keep each role in sync with the component it manages.

Some of our roles are supported on RHEL 6 and RHEL 7, and some are not. For example, the session recording (tlog) role is new for RHEL 8, so we don't have support for that on RHEL 6 or RHEL 7. Also, all roles work with a RHEL 7 controller, which means an older version of Ansible like Ansible 2.8 and Jinja2 2.7. This is important if you have a dedicated Ansible controller node that you can't easily upgrade.
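To make the "consistent interface" idea concrete, here is a minimal sketch of applying the timesync role. This is an illustration based on the role's documented interface, not a play from the talk; the server hostname is a placeholder.

```yaml
# Hedged sketch: the same play works whether the managed node ends up
# using chrony or ntpd, because the timesync role abstracts the backend.
- name: Configure time synchronization on all hosts
  hosts: all
  vars:
    timesync_ntp_servers:
      - hostname: 0.pool.ntp.org   # placeholder; substitute your own servers
        iburst: yes
  roles:
    - fedora.linux_system_roles.timesync
```

The same play can target RHEL 6 through RHEL 8 hosts without per-platform branching, which is the point of the abstraction.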
You can use that node to deploy all of our roles. We also have a collection in Ansible Galaxy, under the fedora namespace, called linux_system_roles.

By now we have quite a few roles that we have developed. Some have just come out, such as ssh, vpn, sshd, and crypto_policies. In addition to these new roles, some of the existing roles have several new features: the storage role now supports LUKS encryption, the network role supports wireless and DNS options, and there are many, many more. I have also listed some of the new roles we have planned in the pipeline.

So these are the components used for managing my standard operating environment. Ansible provides the automation framework. System Roles provide a consistent software management interface for a wide variety of system components across a diverse mix of platforms. The use of relatively static playbooks provides consistency in how the settings are applied: once I get my playbooks debugged and working the way I want, I don't have to change them, and if I don't change them, I don't have to worry about testing them or breaking something. The use of a dynamic inventory allows me to add and remove hosts, update host groups, and keep my settings up to date. And if I keep my inventory and playbooks in Git, I can use a GitHub-style workflow to redeploy when something has changed.

Here we have an example of an Ansible play that applies several roles to all machines. This is my baseline common SOE configuration: common settings that I want to apply to all my machines, such as kernel settings, security settings, time sync, kernel dump, and many more. In addition, I'm using the tlog role to record all login sessions on my lab machines.
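A baseline play of the kind described above might look like the following sketch. The role names come from the fedora.linux_system_roles collection; the selection and ordering here are illustrative, not the exact play shown in the talk.

```yaml
# Hedged sketch of a baseline SOE play applied to every machine.
# Lower-level roles (kernel, crypto) come first; see the ordering note below.
- name: Apply baseline SOE to all machines
  hosts: all
  roles:
    - fedora.linux_system_roles.kernel_settings   # sysctl and other kernel tunables
    - fedora.linux_system_roles.crypto_policies   # system-wide crypto policy
    - fedora.linux_system_roles.timesync          # NTP/chrony
    - fedora.linux_system_roles.kdump             # kernel crash dumps
    - fedora.linux_system_roles.tlog              # record all login sessions
```

Because the play is static, the role variables that actually change over time live in the inventory's group_vars, not in the playbook itself.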
And I'm using the vpn role to ensure that all traffic between my lab machines is encrypted, in case some services cannot use TLS. And then I'm using the network role to create a bonded link for a management plane or backplane. One thing to notice here is that order matters: we must apply the lower-level settings, such as kernel settings and crypto policies, first. In some cases they may actually require rebooting the machines before I can move on to what I'll call the higher-level settings.

Here's an example of my main inventory file. It defines the hosts, the host groups, and some host- and group-specific settings, for example settings that need a host name or a list of host names. All of my other config is in group_vars files. I might keep this inventory file in Git, but I can also generate it dynamically.

This is my all.yaml, where I keep the common baseline configuration that I apply to all nodes: my kernel settings, crypto settings, time sync, sshd, and lots more. I can then add to or override these with group-specific settings in group_vars files. Typically I would keep this in Git, and I would probably have to update it frequently as new policies and new requirements come along.

Some of my machines are clients and some are servers. These are plays that I would use for dedicated clients. The host pattern means, for example, that all hosts that are not in the logging_servers group are my logging clients. I don't have separate group_vars files for these; their settings are also in the all.yaml file.

Here are some plays that do the additional management required for nodes in the dedicated server groups. Note that some system roles can manage both client and server hosts, such as the logging and metrics roles, but some roles are dedicated to one or the other: we have a separate nbde_client role and a separate nbde_server role.
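The inventory and host-pattern ideas above can be sketched as follows. The host names and group layout are invented for illustration; the `all:!logging_servers` pattern is standard Ansible host-pattern syntax for "every host except those in the logging_servers group."

```yaml
# Hedged sketch of a main inventory file: hosts, host groups, and
# host-specific settings. Everything else lives in group_vars files.
all:
  hosts:
    client1.example.com:
    client2.example.com:
    server1.example.com:
  children:
    logging_servers:
      hosts:
        server1.example.com:
---
# A dedicated-client play using the exclusion pattern described above.
- name: Configure logging clients
  hosts: all:!logging_servers   # every host NOT in the logging_servers group
  roles:
    - fedora.linux_system_roles.logging
```

Keeping both files in Git means a change to group membership or group_vars is a normal reviewed commit, which is what makes the GitHub-style redeploy workflow possible.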
And here are the settings which are applied to the nodes in the logging_servers group. This is the logging_servers.yaml group_vars file. It configures the logging servers to ingest logs from the test lab machines and store them in local directories, one directory per host. I could also use this to forward to some external log aggregator if I wanted. The same applies to the metrics servers and NFS servers; I have a group_vars file for each one of those.

All right, it's time for the demo. I have a prerecorded demo that uses our Linux System Roles collection, along with the Oasis Roles collection from Ansible Galaxy. I'm going to deploy three hosts, one of which is a server and two of which are clients. I'm going to apply my SOE to all of them, and then I'm going to log in and check that my settings are working.

Setting up a standard operating environment using Ansible. This is the playbook that I'm using: baseline setup, logging servers, metrics servers. I have a main inventory file where I define my hosts and the variables that depend on host settings. Now let's take a look at the group vars: my baseline, kernel settings, crypto policy, firewall, time sync. This is also where I have my client settings, so you can see I have my NFS client settings and my logging client settings in here. As you can see, I'm using the RELP client. Now look at the logging servers: this is where I have the settings for my logging server, ingesting logs coming in from all the clients and writing them out to local log directories. For the NFS server, I have the storage volumes that I'm using and the server settings.

Now we can see if our settings were applied. Let's take one of the servers; the logging, metrics, and NFS server are all the same host here. So let's find the IP address of that host and log into that server.
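A group_vars file for the logging servers, as described above, could be sketched like this. The variable names follow the logging role's documented inputs/outputs/flows model, but the specific names and port are made up for illustration.

```yaml
# Hedged sketch of group_vars/logging_servers.yaml: ingest remote logs
# and store them in local directories, one per sending host.
logging_inputs:
  - name: remote_input
    type: remote          # accept logs from remote clients
    tcp_ports: [514]      # illustrative port choice
logging_outputs:
  - name: local_files
    type: files           # write to local log files/directories
logging_flows:
  - name: ingest_to_files
    inputs: [remote_input]
    outputs: [local_files]
```

Swapping the output for a forwarding output is how you would instead send everything to an external log aggregator, as mentioned above.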
The first thing you see is that session recording is working: the session is being recorded. There are various things to check here. Let's see if our NFS exports are working: exportfs reports the data volume that's being shared. Let's check our exports file; that looks correct, there's the data volume. Let's check our VPN IPsec traffic. You can see we have a log directory here, with subdirectories for each client host. Let's see what the logs look like from one of our clients: you can see there are a lot of logs here, broken out by service. The server settings were applied, and we have a subdirectory here for each one of the clients.

Now let's create a file in the NFS shared directory. The next thing we'll do is log into a client and see if we can see that file from the client. So there it is. We'll log out here and look at our inventory again. The session here is also being recorded. There's the NFS share, and there's the file that we created on the server. In the df output, we can see that the /data directory is being NFS-mounted from our server machine. Let's check the IPsec VPN traffic here: we can see there is a lot of traffic in and out to the server machine, even though we're not doing anything directly. We can create a file here and then see if we can see it on the NFS server, so let's log back into the server machine.

Let's see if our network settings were applied. We were going to create a bridge: there are our two Ethernet devices, and there's our bridge and our bond. Okay, the demo is done.

So here are some links to documentation and references: there's our landing page, and here's our Ansible Galaxy page for our collection. There's some other information, and I'll have the link to this demo posted pretty soon with the links to the other demos. Please provide us feedback; we'd love to hear what you have to say. There's a link to our IRC channel and our email. You can file issues at that link on our GitHub landing page, or you can file issues and pull requests.
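The bond that the demo verifies could be declared with the network role roughly as follows. This is a sketch based on the role's `network_connections` interface; the interface names, bond mode, and address are placeholders, and older versions of the role may use different keywords for attaching ports.

```yaml
# Hedged sketch of a bonded management link using the network role.
network_connections:
  - name: bond0
    type: bond
    interface_name: bond0
    bond:
      mode: active-backup     # illustrative; pick the mode your switches support
    ip:
      address:
        - 192.0.2.10/24       # placeholder management-plane address
  - name: eth1
    type: ethernet
    interface_name: eth1
    controller: bond0         # attach this NIC as a bond port
  - name: eth2
    type: ethernet
    interface_name: eth2
    controller: bond0
```

Because the role abstracts the backend, the same declaration applies whether the host is configured through NetworkManager or initscripts.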
Each role has an individual repo under our GitHub organization. You can file issues there as well and let us know. All right, thank you very much.

Thank you for your talk. We have some questions here, so if you could please go to them. Okay, sure. That's under Q and A. Oh, okay.

Do you remember how much time it took for the playbook in the demo to execute? I think it takes, I want to say, about 10 minutes, give or take.

Let's see. Is the demo session code available on GitHub? Yes, it's available under my richm user account on GitHub. It's called devconf 2021. I need to clean it up a little bit, and then I'll publish it under the Linux System Roles landing page somewhere, where we have our other demos.