Hi guys, this is me, because I can't talk into this. So this is Ricky; he came to work in the Ansible group not too long ago, to work on all of our new, fancy, cool networking modules. One of our other coworkers is sort of the master of those things. Anyway, Ricky's going to talk about automating the entire IT stack, including some stuff in there about networking. If you were dying to learn about networking, we're happy to chat with you about the state of that stuff. The fun thing about Ansible is that you can automate everything on Earth, and networking is in there too.

So yeah, the title of this session is Using Ansible to Automate the Entire IT Stack. Who am I? I'm Ricardo Carrillo Cruz, although you can just call me Ricky. I joined Ansible by Red Hat recently, which I'm very excited about. I work primarily on Ansible networking, so I personally own the Cisco modules for routers; I also own the OpenSwitch modules and a bunch of other stuff. I'm also a maintainer of the Ansible OpenStack modules, along with Monty, Jesse, Julia, and some other people — David too, I think. I previously worked as an upstream developer in OpenStack, and I'm part of the OpenStack Infra team, as are Paul, Monty, and Jim.

So what is Ansible? At this point I'm sure everybody knows it — there have been a bunch of sessions before me. It's an automation platform: you can automate all the things in your IT organization with it. It's simple because it uses YAML; even if you don't know Ansible, by reading a YAML file you can sort of infer what's going on. It's agentless: you don't have to install anything on your target servers to perform changes. Ansible is just going to connect to them, do its thing, move to the next task, then move to the next server, and so on and so forth. And it's extensible: it has a plugin architecture, so pretty much all the major functions and behaviors of Ansible are plugin-based.
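To give a flavor of that YAML readability, a minimal playbook might look like the following sketch — the host group and package name here are just illustrative examples, not from the talk:

```yaml
---
- hosts: webservers
  become: true
  tasks:
    - name: Ensure ntp is installed
      apt:
        name: ntp
        state: present

    - name: Ensure ntp is running
      service:
        name: ntp
        state: started
```

Even without knowing Ansible, you can read that top to bottom and infer what it does.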
For example, you're mostly familiar with Ansible's SSH connection method, where you just connect to a machine with SSH and do a thing. But because the connection type is plugin-based, we have plugins for WinRM for Windows; we have plugins for Docker containers that do a docker exec; there's chroot, as Paul said. There are also callback plugins: Ansible lets you put hooks into a play execution and do things when a play starts, when a play ends, when there's a failure, when a task starts, and so on, so you can write a plugin to do things during each stage of that play. As a matter of fact, we have a colleague, David Moreau Simard, who wrote a really cool callback plugin named ARA, which helps to visualize Ansible runs, and we use that in OpenStack Infra. Ansible also includes lots of modules for doing all sorts of tasks: config management, provisioning, everything.

So, use cases. You can do provisioning, whether you want to provision physical servers — there are Ansible projects for that — or cloud resources. Obviously, you can use it for config management. This is one of the greatest misconceptions about Ansible: a lot of people just think about Ansible for config management, but Ansible can do a lot of other things. You can do application deployment, continuous delivery, security and compliance. Overall, in my opinion, Ansible is a great orchestration tool, because with orchestration you just tell a machine what you want done.

So this is a typical IT stack. You've probably seen this in some slide at your company, from your management or your directors — probably variations of it. Typically, what you find in an organization is: you have the hardware — your networking devices, storage, physical servers — then you install an IaaS like OpenStack to better manage that.
Lately it's very popular to have COEs on top of it — put Kubernetes or a PaaS on top of OpenStack. Then you have your virtual resources, which are the things you create on your OpenStack or in your COEs. And then, finally, your applications: web apps, whatever. So in this talk I'm going to go from the bottom up, showing how Ansible, as a project and as an ecosystem, gives you tools to manage every layer.

We start with the hardware layer, and networking. Ansible has over 250 network modules — it's an amazing list of network modules. We have pretty much every major vendor there: Cisco, Arista, Juniper, Huawei, F5, OpenSwitch, Dell, you name it. We have modules for managing routers, switches, firewalls, load balancers, and the likes of OpenSwitch. The networking modules have a consistent interface. That's very important, because most organizations are not just a Cisco shop or a Juniper shop; they have a variety of devices. So we strive to have a well-defined interface in our modules, where you can find the same kind of logic across every vendor. For example, we have a config module for Juniper routers, for Cisco routers, for Arista routers, and so on and so forth, which configures those devices. We also have a command module that allows you to run one-off commands on those devices. That's what I mean by having that consistency.

We have multiple transport support. Obviously we support SSH, which is the fallback or default mechanism for managing devices, but we also support vendor APIs. For example, NX-OS is a product line of switches that Cisco builds, and they have something called NX-API — we support it. Arista also has eAPI, which is something similar. In 2.3 we put a persistent connection framework into our modules. Until 2.3, network plays with Ansible weren't very fast. The reason is that we have to use Paramiko, and Paramiko doesn't have a concept like OpenSSH's ControlPersist.
The reason we have to use Paramiko is that with the default SSH plugin, Ansible expects to have a shell on the target device, and that's not something you have on a network device — you're talking to a proprietary shell. So we're bound to Paramiko. We built this framework that gives us that same ControlPersist-style functionality, which is basically opening an SSH connection and keeping it open for the entire play, so you don't have to open and close the SSH connection on every task. For that persistent connection framework, we wrote new connection plugins: network_cli, which is the plugin for accessing the CLI on devices via SSH, and we also have NETCONF on Juniper — NETCONF is a standard, an XML-RPC protocol defined by the IETF for managing network devices. We plan to roll NETCONF out across all our modules, not just Juniper, but Cisco, Arista, and so on.

This is a test playbook using the network modules. In this case we're managing an IOS XR device, which is a carrier-grade router in the Cisco portfolio. We use the iosxr_system module, which allows us to configure system settings on the device — in this case the domain name and the domain search. Then we have the iosxr_config module, which allows us to run a list of commands against a particular config subsection; in this case, we run those lines in the context of the interface GigabitEthernet0/0/0/0.

Storage — we also have modules for managing storage. We have modules for managing NetApp devices and for managing Infinidat. We also have a module for block device partitioning, the parted module, which allows us to create partitions on hard disks. We have the filesystem module for creating filesystems: ext4, ext3, ReiserFS. We have modules for managing GlusterFS and ZFS, and modules for logical volume management, LVM. And finally, for storage transports like NFS and iSCSI, we have modules for iSCSI targets.
For NFS, we can just use the mount module to define the fstab entry — that's it. This is an example playbook. In this case, it could very well be used for preparing a storage node for an OpenStack cloud: we partition the /dev/sdb disk, we create partition number one with an LVM type, and then we create a volume group called cinder-volumes, which sits on the /dev/sdb1 physical volume.

So now we move on to servers, which is the final part of the hardware layer. As operators, we need to provision servers, which is the process of putting an OS onto a server, and then config management, which is managing the OS of that server. For provisioning, we have inventory plugins and some integrations with very popular provisioning systems like Cobbler or Foreman. But I'm going to talk about Bifrost, because it's an OpenStack project and it's also Ansible-native. I'm not going to talk much about it, because Julia gave a talk this morning and she knows a lot more than I do about Bifrost. It's just that we've used it in Infra with great success. It's a collection of playbooks and roles to provision servers — it just does that. It doesn't intend to be a CMDB like Foreman does; it's just for putting a base image on a server, and that's it, and it works really well. It uses Ironic in standalone mode. If you attended the keynote this morning, there's a big effort to let OpenStack services be used not just within OpenStack but standalone, and Ironic is a service that allows you to do that. It's very simple to use — it's just three phases. You install the Bifrost dependencies — dnsmasq, Ironic, diskimage-builder — and create a base image. You create an inventory where you define your servers: the MAC addresses of their interfaces, what hostname you want on each server, your IPMI usernames and passwords so you can boot them up.
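An inventory entry along these lines captures the MAC, hostname, and IPMI credentials just mentioned. This is only a hedged sketch — the exact schema depends on your Bifrost release, and all addresses and credentials here are made up:

```yaml
node1:
  name: node1
  driver: agent_ipmitool
  nics:
    - mac: "aa:bb:cc:dd:ee:ff"   # MAC of the interface to PXE-boot
  driver_info:
    power:
      ipmi_address: "10.0.0.10"  # BMC address (example value)
      ipmi_username: "admin"
      ipmi_password: "secret"
  properties:
    cpus: 8
    ram: 16384
    disk_size: 200
```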
Then you enroll those servers into the Ironic database, and finally there's the deploy phase, which does the magic: powering on the machines with IPMI, PXE-booting a base image with the Ironic agent, which talks to the Ironic server, pulls down whichever image is supposed to be deployed on that server, and deploys it — done. It leverages the Ansible OpenStack Ironic modules: we have modules in the Ansible project to manage Ironic, and Bifrost just leverages those. As a matter of fact, I think the Bifrost developers wrote them, for the most part.

Then we need to do config management, and obviously Ansible can do config management — it's known to be a great tool for it. You can do user management, package installation, service and daemon control; whether you need to configure files or services, you're covered. This is an example playbook. With apt, we're installing the apache2 package and doing an apt-get update — that's what update_cache means. Then we're templating the virtualhost.conf Jinja2 file with a variable that we feed to the ansible-playbook binary — that's what the domain in double mustaches means. Then we make sure the apache2 service is started.

We can also use Ansible in ad hoc mode — that's something some people don't know. You can actually use Ansible to reach the servers you have in your inventory, run things on them, and get immediate results. For example, ansible webservers -a uptime will run the uptime command against all the web servers defined in your inventory, and you get back the results. The same goes for reboot. And the last one is a very neat one: if you want to gather facts about your web servers, or whatever servers you have in inventory, just pass -m setup.
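The ad hoc invocations just described look roughly like this — the webservers group name is whatever your inventory defines:

```shell
# Run a one-off command on every host in the webservers group
ansible webservers -a "uptime"

# Reboot them (with privilege escalation via -b/--become)
ansible webservers -b -a "reboot"

# Gather facts from all of them and print them to stdout
ansible webservers -m setup
```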
That gathers facts from all of them and prints them to standard output, so it's very good for doing reports or checking for config drift or whatever. But ultimately, your Ansible playbooks should be the source of truth. Only run Ansible ad hoc as an emergency thing or for very little things; use your Ansible playbooks for deploying change to your servers, put them into version control, with code review, in a CI/CD pipeline.

So now we've covered the hardware layer. We move up into IaaS, where obviously we're going to talk about OpenStack — how, as operators, we can use Ansible to deploy OpenStack. There are three main projects: TripleO Quickstart, OpenStack-Ansible, and Kolla-Ansible.

TripleO Quickstart is an Ansible wrapper around TripleO itself. TripleO is an end-to-end solution to install, configure, manage, and monitor everything in an OpenStack cloud. The name stands for OpenStack On OpenStack — that's what TripleO (OOO) means. It has a concept of two clouds: when you install OpenStack with TripleO, you install an all-in-one cloud called the undercloud, and then you leverage the components of that undercloud OpenStack to deploy the overcloud, which is going to be your tenant- or end-user-facing cloud. That's what I mean by dogfooding OpenStack. It supports virtual, bare-metal, and containerized overclouds — I'm aware that TripleO is now moving toward dockerized services. This is a diagram that shows what I just described: operator, undercloud, end users, overcloud. TripleO Quickstart just allows you to deploy TripleO in a very easy manner with Ansible, for virtualized environments. It uses libvirt to create networks and VMs for both the undercloud and the overcloud. TripleO Quickstart is great for dev environments with TripleO and also for CI purposes — the TripleO folks use TripleO Quickstart in the gate now. It's easily extensible by using tripleo-quickstart-extras.
There are roles in that project for installing TripleO Quickstart on bare metal, for example, and they have roles for CI usage — so feel free to have a look and maybe contribute.

Then we have the OpenStack-Ansible project, which installs OpenStack on bare metal and in LXC containers. You can also install the services directly on bare metal, but by default it installs them in LXC. Containers allow better isolation and maintenance, because every service process lives within its own container, so it can keep its very own dependencies, and it's better for upgrading, for example. It deploys the OpenStack services from source, so it doesn't leverage any kind of RPM or deb packages: you just do a git checkout of openstack-ansible at whatever tag — Newton or whatever — and that's what it uses to deploy your cloud. It has ceph-ansible integration — a great example of how communities can collaborate: the Ceph folks thought Ansible was great for installing and managing Ceph, and the OSA folks just integrated with them instead of reinventing the wheel. One of its cool features is a security hardening STIG role that you can run post-installation. STIG is a security compliance standard, so you can run that role after you install your cloud, and it will address any security issues it finds to match that criteria.

This is the workflow for OpenStack-Ansible; it's very similar to other mechanisms. You prepare your deployment host and install Ansible. You prepare your target hosts, configuring networking and storage. You configure the deployment — how many and which services you want in your cloud, what passwords, and so on. Then you run the playbooks with Ansible, and it connects and creates the LXC containers per your configuration.

Finally, we have the Kolla-Ansible project. The Kolla project provides Docker containers for the OpenStack services. This is a very cool project.
They create a containerized service for every OpenStack project — so for Nova, it creates a nova-conductor container, a nova-api container — and they upload them to Docker Hub so anyone can just reuse them. As a matter of fact, the TripleO folks, as they start on the containerized story, are just leveraging those containers from the Kolla project, which is very cool. Kolla-Ansible is the Kolla deployer for OpenStack: it's an Ansible project that leverages the Kolla containers to install an OpenStack cloud. The configuration is really easy — you just configure globals.yml and passwords.yml, just five parameters. It has a lot of very sane defaults, so it's great for getting going. It uses a custom configuration section instead of templating: instead of exposing every possible parameter for every possible service in the Ansible roles, you can just feed your particular customized nova.conf to Kolla, and it puts that into the image, and that's what gets deployed when you install. It's very fast to deploy — very fast — and highly scalable. I recommend you have a look at a talk that Steve Dake and Sam Yaple gave at another summit; they did a demo, and I think it took like 18 minutes to deploy a highly available OpenStack cloud. Really cool stuff.

So now we go one layer up. I'm not going to talk about every possible PaaS or every possible COE, obviously. I'm going to talk about Kubernetes, because it's the most popular these days, and OpenShift, which is a PaaS based on Kubernetes.

So what can we do with Ansible to deploy Kubernetes? We have two options here. We could use Magnum, which is the COE-as-a-service project in OpenStack. This is a very cool project: it's an API you can poke that allows you to deploy COEs within OpenStack.
It allows you to provision Kubernetes, Docker Swarm, and Mesos, and it does that by abstracting things into a cluster template and a cluster object, which are constructs for abstracting the various differences between Kubernetes, Docker Swarm, and Mesos clusters. It leverages OpenStack capabilities for authentication, volumes, image management, and networking: it uses Keystone for authentication when deploying your clusters, Cinder to expose volumes to your containers, and Neutron to create networks for your containers. One of the cool things here, for example, is a cool project — I wouldn't say new — called Kuryr, which allows you to connect, or bridge, your container networking with your OpenStack Neutron networking. So you can potentially have connectivity between your containers in Magnum and your VMs in Nova, which is actually super cool. Unfortunately, there are no Ansible modules to manage Magnum resources yet — I opened a ticket for that and assigned it to myself on the Ansible project. As time permits, I'd like to create modules to manage Magnum the same way we can manage Nova servers and Neutron networks.

The other option is to use the official Kubernetes Ansible playbooks. If you go to the Kubernetes GitHub organization, they have a contrib repo that contains a bunch of stuff; within it there's an ansible folder containing playbooks and roles to install Kubernetes. You can use this project to install it on bare metal that maybe you deployed with Ironic, or on a VM that you deployed with Nova. It's very easy to use: it's literally defining an Ansible inventory for your masters, your etcd cluster, and your minions; you configure your Kubernetes options in group_vars/all.yml, run the deploy-cluster playbook, and you're done — it installs Kubernetes.

In the case of OpenShift: OpenShift is a PaaS based on Kubernetes.
It provides an easy way to manage the entire life cycle of applications. It uses the base building blocks of Kubernetes — pods, services, and all that — but for an application developer it provides a layer on top that really hides that kind of thing, so app developers can just focus on their app development and forget about the infrastructure below. I wanted to point out that the open source project for OpenShift is OpenShift Origin, because there's been a bit of confusion lately with a few announcements around OpenShift: we now have OpenShift Origin, OpenShift.io, OpenShift Enterprise. If you're looking for the OpenShift open source project, that's OpenShift Origin.

So how can we install OpenShift? Unsurprisingly, the OpenShift folks also thought Ansible was a great thing to use to install and manage OpenShift. It's crazy how, these days, it's becoming the de facto standard for installing complex software. The openshift-ansible project contains playbooks to deploy on almost anything: if you go to the project, it contains READMEs for installing on AWS, GCE, OpenStack — it will provision the machines to host your OpenShift. For generic installs, you just define your inventory — your masters, your etcd cluster, your minions, the same way as the Kubernetes Ansible installer — you run the bring-your-own configuration playbook from openshift-ansible, and you're done.

So now we go one layer up, and we have the virtual resources. We have the IaaS, we have the PaaS, and now we need a way to manage servers and resources from our OpenStack cloud, or resources from our Kubernetes cluster — resource management for those things. For OpenStack, I'm not going to spend much time here; Monty has spent quite a bit of time talking about this, and he's a far better speaker than I am.
Let's just say that with OpenStack, you're covered: we have modules for managing pretty much everything — Nova servers, Glance images, Cinder volumes, Neutron routers and networks, Swift object containers, everything. As an example, here we create a Neutron network called mynetwork on mycloud, then we create a server called myserver on mycloud, attach it to mynetwork, with a flavor with ID 4 — whatever that flavor happens to be — and then we inject the keypair ansible-key so we can later access it with Ansible and do runs on it.

For Kubernetes, there's the kubernetes module in Ansible. We don't have a specialized module for every type of Kubernetes resource, but the kubernetes module allows you to feed in inline YAML defining the resource you want to manage, or you can just point it at a file containing the definition of the resource. In this case, we're creating a namespace that is defined in that create-namespace YAML, against the Kubernetes API on that IP, with that username and password.

And finally, we go up to the applications layer. Here we need a way, as operators, to manage and deploy applications, whether in the classic way — on VMs or bare metal — or as container-based applications. For deploying an application on VMs or bare metal, we can use the base building blocks we saw in the server hardware layer — installing a package, starting a service, changing a file — or we can use roles.

What is a role? Ansible roles allow playbook authors to decouple code from data. In a typical playbook, you see at the top the hosts, then maybe a vars section defining some variables, and then tasks. That means those tasks run against the hosts you defined at the top, with the variables you defined in the playbook.
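A self-contained playbook like this sketch — package and domain values invented for illustration — shows how the data (hosts and vars) is mixed in with the tasks:

```yaml
---
- hosts: webservers          # environment-specific data
  vars:
    domain: example.com      # environment-specific data
  tasks:                     # the reusable "code"
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Template the vhost for our domain
      template:
        src: vhost.conf.j2
        dest: /etc/nginx/sites-enabled/{{ domain }}.conf
```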
That works for an individual, but if you want to share it with someone else, it's a bit inconvenient, because you're baking environment-specific things into that playbook that are not relevant to the user you're sharing it with. A role just contains the tasks. It may have dependencies on other roles — a role for, I don't know, Apache 2 may have a dependency on a common role with some baseline configuration, like installing NTP or whatever. It may contain variables as well, but those are meant to be just defaults — variables that won't change much — although you can pass variables into the role invocation, as I'll show in a bit. And it can contain handlers.

Okay, so we can write a role and share it, but what if I want to install nginx, or Apache 2, or something more complex — MySQL, a cluster of, I don't know, ZooKeeper — and I don't want to write that on my own? How can I search for that? There's Ansible Galaxy, at that URL, which is a hub of roles. You can just go with a browser to that URL, search for whatever you need, and you're pretty much assured you're going to find something. In this case, for example, I wanted to install Django and didn't want to write it on my own, so I went to the Galaxy hub, typed Django in the text box, and it gave me a bunch of roles with Django in the description. I opted to use that one and installed it with the ansible-galaxy CLI. That means I'm installing the Django role that belongs to the futurice user — roles are namespaced, to avoid having 3,000 nginx roles and not knowing which belongs to whom. Then the way you invoke that role in a playbook is with the roles directive. In this case I'm saying: hey, I want to apply the futurice Django role against all my web servers, and by the way, pass this variable, release_name, with this value.
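Assuming the Galaxy role mentioned is futurice.django (and treating the release_name variable as illustrative), the install and invocation look roughly like this:

```yaml
# Installed beforehand with:  ansible-galaxy install futurice.django
---
- hosts: webservers
  roles:
    - role: futurice.django
      release_name: myapp-1.0
```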
It's common for roles in Ansible Galaxy to contain a README stating what variables the role exposes and which ones you're supposed to define in your playbook. That's what I meant by decoupling code and data.

So now we can use roles to install stuff on our VMs or bare metal — what can we do for containers? It's pretty common these days: when you want to create a container, you create a Dockerfile, right? Although the Dockerfile syntax is a bit meh, let's say. So what can we do as Ansible users and operators? I want to use Ansible to create my containerized applications. Thankfully, my colleagues at Ansible have been working on this new-ish Ansible Container project, which manages the entire workflow of containerized applications using Ansible. With it, you can build the image, run containers locally, and ship them to OpenShift Online, whatever. There's a deploy command that generates an Ansible role, so I can transport that and install that container anywhere I want. It leverages the Ansible roles already available to create container apps — that's the cool thing: because we have so many roles, why should I be writing Dockerfiles? If I want an nginx container, I just want to use an nginx Ansible role. It has Ansible Galaxy integration, so you can also find Ansible Container roles on Ansible Galaxy.

And this is a bit of an example of the workflow. Let's say I want to create my web app as a container. I run ansible-container init, and in my web app directory it creates a skeleton. That skeleton contains a container.yml file, which is sort of the same thing you would put in a docker-compose YAML file — that's why the syntax is so similar to Docker's. In this case, we're saying: I want to define a web container based on the Ubuntu Trusty image.
And I want to use my webapp Ansible role to build that container image, I want to expose these ports, and this is the command line it should run in the container. Once you have that, you do ansible-container build; it takes that container.yml, pulls the roles referenced in it, and creates the container image. You can push that to Docker Hub or some other registry, and you can even ship it to GCE, to Kubernetes, to an OpenShift you may have somewhere, or to OpenShift Online. So you can really use Ansible for containerized applications as well.

What I was trying to convey in this talk is that as an individual or as an organization, if you invest in Ansible, you can rest assured that you can manage your entire stack. It's truly one tool to rule them all. You're not going to have to resort to some other tool; you can be pretty sure you're going to have everything you need, which pretty much comes down to having an Ansible playbook or role — mystack.yml — and that's it.

You can find me on Freenode as rcarrillocruz, in #ansible, #ansible-devel, and #openstack-shade — Shade is the library we use for the OpenStack Ansible modules, as Paul and Monty mentioned. I also hang out in #openstack-infra, and that's my Twitter. I'll be happy to talk to you now or whenever. Thank you.