Well, hello, everybody. Happy Tuesday. I hope the vendor and sponsor events last night didn't keep anybody out too late. I know tonight's going to be a lot of fun over at Fenway Park. Was anybody here for Red Hat Summit last week? Yeah, did you go out and see the baseball game? This one will be a little bit different, from what I understand. The baseball game is an away game, so it's on the Jumbotron. But it should be super exciting.

So my name is Andrew Sullivan. I'm a technical marketing engineer with NetApp. Effectively what that means is I get no respect from marketing or engineering, because both are in my title. But it also means that I get to spend a lot of time going out and talking with customers, talking with people who are using, implementing, deploying, taking advantage of not just OpenStack here at OpenStack Summit, but really open source software in general. I'm part of what we call the Open Ecosystems Team, so we're responsible for all of the open source integrations inside of my company's portfolio.

So really what I want to talk about here today is a high-level overview of DevOps itself, but also the tools that developers and application teams are using on top of things like OpenStack, where we can take advantage of the infrastructure as a service that's being provided through OpenStack.

A couple of things to note before we get started. Remember, this is a level one session, so we're not going to be going deep into code or anything like that. I sat through the last session, which was fantastic, about the security model inside of Trove. We're going to look at a high level at a number of different tools, so prepare yourself now for that general-overview type of session.

Before I get started, how many of us are infrastructure operators, deployers of OpenStack? How many of us would consider ourselves developers, or application folks who are taking advantage of OpenStack? So it seems like we're mostly operators, probably 60 or 70%, right, which is kind of the general mix of OpenStack Summit as a whole.

So when we look at something like DevOps, when we look at something like infrastructure as a service, really we're trying to answer one core question: how long does it take to deploy one line of code? Think about how application development has been done for roughly the last 30 years. We think of things like waterfall development. My product managers go out and collect user requirements, and they might spend three or five or seven months doing that. Then they create a product requirements document that gets handed over to engineering, and engineering spends a month or two reviewing it and determining, well, how much effort is this going to take, how much of this can we actually do, before they ever begin writing any code. Some number of months later, they go through a test and QA process. And then finally, at the end, we toss an application update or a new application over the proverbial wall.

But in modern application development, in modern competitive environments, really the goal is to shorten that cycle. My company NetApp, we do storage. It's simply not feasible, not competitive anymore, to have a release cycle that's a year or longer. We have to shorten that cycle. We need to release more frequently. And when we do release, we want to do so with quality code. So it's not just how long does it take.
When I make a simple modification to the application, and the colloquial example is a web application, I need to fix a typo, I need to add a new feature. How long does it take me to deploy that to production? But more importantly, how long does it take me to safely deploy that to production? What are the tools, the processes, the procedures that need to be in place across my organization in order to improve the quality and the safety of those deployments? So that's what we want to look at today: the set of tools that are built on, or able to consume, infrastructure as a service like OpenStack.

So quite simply, I always like to start off these sessions with a level set: what is DevOps? There are roughly 100 people in this room, and I can almost with certainty guarantee that every one of us has a different definition of DevOps. Those of us who have been in the industry for more than about 10 minutes remember cloud. Remember cloud back in the late 2000s, early teens? Everything had to be associated with the cloud. Even now, we have OpenStack, private cloud, AWS, Azure, all of these things. If you weren't cloud, you weren't cool. Well, the new cloud washing is DevOps washing. We want to associate DevOps with everything, even when we don't necessarily know what we're talking about or quite understand what the problem is. So we want to level set. We want to go over what DevOps is, how we perceive it, and more importantly, what it looks like when the rubber meets the road.

We'll talk about, first, desired state configuration: how do we leverage that to achieve something like DevOps? And remember, I want to take the approach inside of this session of how an application team, how a developer, leverages these desired state configuration tools to take advantage of our infrastructure. I'll talk about this a little bit more when we get to that section, but remember, there are kind of two different ways that we can take advantage of those tools.

We'll talk about containers and container orchestrators. I'm sure that you have been beaten to death with containers over the last few days, but do we necessarily understand why containers? I think most people at this point, particularly those of us on the infrastructure side, have a pretty good understanding of what a container is. But my experience as I go out and talk with people is that it's not necessarily understood why containers have become such a thing, why they're being so widely adopted.

And then finally, we want to talk a little bit about the CI and CD processes: continuous integration and continuous delivery, or sometimes continuous deployment. We'll wrap up with just a couple of tools there, some of the most common ones that we encounter, and how that really affects the process from end to end. Because ultimately, that's what we want to do: improve the process of going from writing code to deploying code, and all of the steps that exist in between.

So without further ado, what is DevOps? Actually, I will stop for just one moment. Please, at any point in time, feel free to stop me to ask questions. I don't mind at all; I'm happy to answer questions. And honestly, I know this is hard with a big crowd here at a conference, but generally speaking, I hate PowerPoint. I'd much rather answer questions and make this a valuable session for you than drone on through PowerPoint. So, what is DevOps?
If we take the easy route, if we go out and plug "what is DevOps" into Google, we end up with a really interesting set of responses. These are quite literally the top three responses when you type "what is DevOps" into Google. You'll notice that while they are drastically different, there are some commonalities here that I've highlighted in bold and blue. DevOps is a culture. It's a set of processes. It's a set of practices or philosophies. Only in one of those, the very bottom one, does it mention tools, and even then only generically; we're not calling out anything specific.

You see, DevOps is, at its very core, nothing more than facilitating communication between operations teams and development teams. It's not a fancy set of tools. If you're using, I'll use my company's products, right, if you're using ONTAP, that doesn't automatically mean that you're doing DevOps. If you're using Jenkins, that doesn't automatically mean that you're doing DevOps. DevOps is collaborating. It is communicating. It is development and operations working together to facilitate a better outcome. And I very specifically say a better outcome. Unfortunately, no matter how hard we try, I cannot make a developer write better code. I just can't. I wish I could. But what I can do is ensure that what comes out on the other end is better quality. We want to make sure that we're creating as few bugs as possible, and we want to make sure that we're getting as much done in as short a period as possible in order to meet the requirements of the business. So ultimately, DevOps is the practice of enablement. We want to increase the quality and, importantly, the pace of delivery for applications.

You'll notice in this Venn diagram, which I like a lot, created by one of my coworkers, Josh Atwell, there are three types of people that we tend to think of as developers. Application developers: these are the people actually writing applications, typically closely aligned with a business unit or business operations. We have infrastructure developers: interestingly, these are the people who are, for example, automating against storage systems or the network, or automating against our servers. And then we have operations development. How many of us have heard of, or maybe are on, a DevOps team at our organization? Oftentimes operations development and a DevOps team are the same thing. They are responsible for automating the interaction between applications and the underlying infrastructure. They're abstracting that infrastructure, creating that infrastructure as a service, whatever that happens to be, storage, network, compute, and offering it up as resources.

So even though we say DevOps is not a thing, not a tangible, my storage operating system cannot do 700,000 DevOps, it can only do a number of IOPS, right? Even though we say that it is an ephemeral thing, we do see people with DevOps in their title. And that's okay. It means that they're facilitating these processes. Now, when the rubber meets the road, when we actually implement this, you're probably thinking: great, Andrew, fantastic job, you just told us that DevOps is nothing. It's the great ether, the great beyond. What was the movie with Russell Crowe? Gladiator, right? "There was once a dream that was Rome."
When we talk about DevOps where the rubber meets the road, when we're actually implementing, what we're concerned about is automation, particularly for those of us who are on the infrastructure side. Where these two things meet is through automation. We want to work with our development peers, or the development team wants to work with their infrastructure peers, in order to determine: how can I best get the resources that I need? I like to say that shadow IT was born out of a culture of no. As an operations team, when the applications guys come to you and say, well, I need 100 terabytes and 500,000 IOPS, or I need 600 CPU cores, or 5,000 virtual machines, and we say no, that doesn't make their requirement less valid. It means that they're going to find some other way to deliver it. So ultimately we want to figure out how to get away from that culture of no through the use of, and the philosophy behind, DevOps.

So I said we were going to look at tools. I'm not going to terrify you; I'm not going to step through every one of these tools or every one of the categories here. This is a fantastic chart from the XebiaLabs guys that looks at the DevOps tools ecosystem at a high level. If you're interested in any of these, I highly recommend that you check out the XebiaLabs website. It's a great reference point for looking at all of these different categories of tools, getting an idea of what they're each capable of, et cetera. That being said, this is just scratching the surface. I started to put a slide in this deck that has about 400 different logos on it, but it's so crowded and obnoxious that everybody forgets what I'm talking about and just tries to figure out where their company or their favorite tool is. So what we want to do is look at a couple of the categories in here, in particular the ones that are relevant to taking advantage of infrastructure as a service.

So the first one I want to talk about is DevOps through desired state. How many of us are using desired state configuration tools? Puppet, Chef, Ansible, Fuel, et cetera. So we're mostly familiar with these, and I would be willing to bet that a good chunk of us who are using them are probably operations people. We think of desired state configuration as a way to take our infrastructure, physical resources, sometimes virtual resources, and configure them a specific way. I'm a storage guy: I want to deploy X number of LUNs, I want to deploy Y number of NFS exports that have these configuration options set. But if I'm an application person, I'm more concerned with things like: I need to deploy six virtual machines; three of them need to be CentOS, two of them Red Hat Enterprise Linux, and one of them Ubuntu. I need to have these packages installed, these packages need to have these configuration settings, these services need to be started, and they need to connect to each other through these firewall rules. What I'm defining is my application. I'm defining how to deploy my application inside of the infrastructure. I don't really care how the infrastructure is configured, what it looks like, tastes like, smells like. What I care about is how my application is deployed.

So desired state configuration gives us that ability, at both the infrastructure level and the application level, to create consistent, scalable, reliable, automated deployments. I know that if I deploy my application into production, it's going to deploy the same as when I deployed it into test, and the same as when I deployed it onto my laptop for development. Desired state configuration gives me that reliability.
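To make that concrete, here is a minimal sketch of what defining an application's desired state might look like as an Ansible playbook; the host group, package, template, and service names are illustrative, not anything from the talk.

```yaml
# Illustrative playbook: describe the application, not the infrastructure.
# Host group, package, template, and service names are hypothetical.
- name: Deploy the web tier of my application
  hosts: webservers
  become: true
  tasks:
    - name: Ensure the web server package is installed
      yum:
        name: httpd
        state: present

    - name: Lay down the application's configuration file
      template:
        src: app.conf.j2
        dest: /etc/httpd/conf.d/app.conf

    - name: Ensure the service is started and enabled at boot
      service:
        name: httpd
        state: started
        enabled: true

    - name: Allow HTTP traffic through the firewall
      firewalld:
        service: http
        permanent: true
        immediate: true
        state: enabled
```

Run it once and the host converges to that state; run it again and nothing changes, because the state is already as desired. That idempotency is what "desired state" buys you.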
So when we look across the tool chain here, there are really three that come up as very common across both operations and development teams: Puppet, Chef, and Ansible.

Before I go any further, and I apologize to anybody from, for example, Mirantis in the room, there's a very distinct reason why I did not include Mirantis Fuel in this particular list, even though, if you look at the user survey, and I have a slide next, Mirantis is the third most popular one. Fuel, in my experience, is used almost entirely by operations people, infrastructure people. We use it to deploy OpenStack itself. I've yet to encounter an applications person using it to deploy resources or to configure applications being deployed. I'm not saying they're not out there; that's just Andrew's anecdotal evidence. Yes, sir? Got you. So the comment was that Fuel actually uses Puppet underneath the covers, so that makes a lot of sense. Thank you very much. The other possibly notable exclusion from this list is PowerShell. There is a surprising number of organizations leveraging PowerShell pretty heavily. However, desired state configuration inside of the PowerShell ecosystem is still relatively new and not widely adopted. That being said, it's Microsoft, they're kind of an 800-pound gorilla, so I would expect that to increase over time.

So as we look at these three configuration management tools, they all accomplish roughly the same goal in slightly different manners. Puppet and Chef are the two oldest of these; Puppet technically came first, and Chef was not long after. As a result, they have massive community support. Basically anything and everything you could possibly want to do has probably already been done, and a module exists for it, for these two. Ansible is a relative newcomer. Ansible was an independent company that was recently bought by Red Hat, and as a consequence, Red Hat is now using Ansible for all of their automation and orchestration type tasks. Interestingly, both Puppet and Chef are what are thought of as pull tools: I have an agent on the host that reaches out to a central management server and says, how should I configure myself? Ansible can act in either manner. If you're using Ansible Tower, it can pull; if you're using plain Ansible, you can push out through something as simple as SSH commands to a number of different hosts.

So what we tend to see is applications being defined using, say, a Puppet manifest. I am storage-centric, so I apologize that most of my examples are going to be storage-centric, but I want to do things like: for my production application, I have a volume that's 100 gigabytes in size, I don't care if it's NFS or iSCSI, a 100 gigabyte volume that's capable of having snapshots. In my test environment, I want a clone of that production volume underneath it, and on top of that I'm going to stand up the same set of virtual machines, the same set of infrastructure, and deploy the same set of resources. Maybe in development I only deploy one instance of each. So these tools make it super easy for me as an applications guy to consume infrastructure. That's all we're doing: we're consuming infrastructure, and we're defining it programmatically, because that's the other side of these. It's defined in code, usually YAML or JSON files, which means it can also be checked into my source control, so I can see over time how my application has changed, and I can revert back at any point in time if something goes wrong.
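What might that storage-as-code idea look like? Here is a hedged sketch in the same Ansible style; the `vendor_volume` and `vendor_volume_clone` module names are hypothetical stand-ins, though most storage vendors ship modules along these lines.

```yaml
# Hypothetical storage modules, for illustration only.
- name: Provision application storage as code
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Production gets a 100 GB volume with snapshots enabled
      vendor_volume:                 # hypothetical module name
        name: app_prod
        size_gb: 100
        snapshot_policy: daily
        state: present
      when: env == "production"

    - name: Test gets a space-efficient clone of the production volume
      vendor_volume_clone:           # hypothetical module name
        name: app_test
        parent: app_prod
        state: present
      when: env == "test"
```

The point is not the particular module; it's that the volume, its size, and its relationship to production live in a file under version control, right next to the rest of the application definition.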
So this particular chart, as I alluded to just a moment ago, comes from the April 2017 OpenStack user survey. If you haven't seen it yet, it's a bit of an interesting chart, notably because Ansible and Puppet last year were neck and neck: they were both at 43%. This year you can see that Puppet has dropped somewhat behind Ansible. Ansible is becoming a massively popular tool. That's not to say that Puppet or Chef or any of the others are any less capable, simply that Ansible is becoming more popular as time goes on.

So this brings me to containers. Containers are where I focus most of my time; containers are a passion of mine. And containers are an interesting technology. Most of us think of containers as Docker. Docker came about in roughly 2013, and before that most of us never gave containers much thought. But the reality is that the concept, the technology behind containers, has existed in the Linux kernel since roughly the 2006-2007 timeframe. Google introduced namespaces and cgroups to the kernel way back a decade ago. And the technology itself, if you're familiar with containers at all, is very reminiscent of things like Solaris Zones. Anybody here a Solaris admin? Do you remember using Zones? I'm sorry, you work for Oracle. So Zones, BSD jails, all of these other technologies mimic that sort of isolation at the application layer.

So the first thing that's important to remember about containers is that they're not a virtualization technology. A lot of times we confuse them with virtualization, but they're not. We are simply isolating a process. Whether that process is Bash or Java or Python or a compiled binary, whatever it happens to be, we are isolating that process at the kernel level, and through namespaces we are deliberately assigning resources to that container, to that process, and controlling what it has access to. So when I'm running Red Hat or CentOS or CoreOS or whatever flavor of Linux that has Docker installed, and I say docker run ubuntu and get dropped into a Bash command prompt that looks like, tastes like, smells like, feels like Ubuntu, the reality is that all the system has done is instantiate a Bash process and then take the Ubuntu file system layout and attach it at the root of that process's file system namespace. So I have access to all of the tools, the libraries, all of the binaries, all of the things that I need for my application in there, but I don't have a full operating system. There is no systemd or init inside of that container. There is no cron. There's none of the things that you would associate with a full operating system. And importantly, this means that, well, I don't have to love it. I don't have to care for it like I do a full operating system. Nobody has to go in and patch it and then bring it back up. We simply update the container image and redeploy.

So containers are a lightweight way of instantiating processes and then isolating them from the rest of the system. Do note, however, that it is still a shared-kernel model. So if the kernel becomes compromised through one particular container, that means the entire host is compromised. This is different than a hypervisor, right?
Hypervisors provide kernel-level isolation to those virtual machines: I would have to escalate out of the virtual machine and into the hypervisor to compromise the host.

So I briefly mentioned Docker a minute ago. Docker is what I like to call the Kleenex of containers. Most of us say Kleenex when we're actually thinking of facial tissue, the generic name, right? Docker is the brand name of containers. So Docker came along a few years ago, and their biggest thing is that they make containers accessible to mortals like me. There was a really interesting session at DockerCon this year where a very nice lady walked through the process, in Go, of instantiating a container as a series of namespaces. It's somewhere in the range of 100 to 120 lines of Go code to create those namespaces. Docker reduces all of that down to one command: docker run. Anybody and everybody can now use containers.

And we use containers, importantly, to decouple the application from the operating system. You see, virtualization decoupled the operating system from the hardware. VMware arguably made their name off of the concept of vMotion: I can move a virtual machine from host A to host B; I'm now decoupled. Containers have the same principle. My application is encapsulated, my application is isolated, and I don't have any external dependencies outside of the container, which means that the only thing I need is a compatible kernel. For Linux containers, that's roughly kernel 3.10 and above. So now I can take an application and know without a doubt that whether it's executing on my laptop, in the on-premises data center, the off-premises data center, the hyperscale cloud, or on my buddy's laptop, it's going to execute exactly the same. So containers facilitate that concept of moving code through the process; they make it easy to get through that process.

For us as infrastructure people, that doesn't mean that our applications are actually changing. It's still the same application. If it's a Java application that needs four CPUs and 10 gigs of RAM in a virtual machine, it's probably going to need the exact same set of resources in a container, minus any operating-system overhead. So we need to be aware of, and plan for, what all of this means from a capacity standpoint. But it also means that in large part we eliminate things like the proverbial "it ran on my laptop, I don't know what's wrong with you guys in operations" problem, because it's the exact same set of libraries; everything moves with the container as we go along.

So Docker is a fantastic tool. Docker gives us the ability, very simply through the command line, to create container images that have all of my application bits and pieces inside of them. It simplifies things like those Puppet manifests and Chef recipes that we talked about earlier. Instead of having to define that I need a Red Hat Enterprise Linux operating system with these packages and these configuration files and all of these other things, I create a container that has all of that. So my Chef recipe or my Puppet manifest now gets reduced to: find a host, start this container on it. All of those other configuration details are taken care of. So we reduce friction; we make it simpler to deploy our applications across that ecosystem.

But if I look at a single container, it's really not that entertaining. Most of the time our applications are more complex, they're larger, and they scale far beyond a single container. Even simple web applications usually have at least two different services: something like Apache to serve out web pages, and probably something like a database to serve as a persistence layer.
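That two-service shape is easy to see in a Docker Compose file. Here is a minimal sketch; the `myorg/web-frontend` image name is a hypothetical placeholder, while `postgres` is the stock library image.

```yaml
# docker-compose.yml: a minimal two-service web application.
version: "3"
services:
  web:
    image: myorg/web-frontend:1.2   # hypothetical application image
    ports:
      - "80:80"                     # expose the web tier to the world
    depends_on:
      - db                          # start the database first
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example    # use a secrets store in real life
    volumes:
      - dbdata:/var/lib/postgresql/data   # persistence lives outside the container
volumes:
  dbdata:
```

Note that the web tier can reach the database simply by the hostname db; Compose wires up that name resolution for you, which is a small taste of the service discovery that the orchestrators generalize.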
So when I have multiple containers, multiple different things happening in my application, I want to rely on an orchestrator. And orchestrators do many, many different things to make an application administrator's or application developer's life easier.

Not the least of these is the top bullet there: application deployment. For those of us familiar with the VMware ecosystem, you can think of this as being like a DRS cluster, a pool of resources where I just blindly throw virtual machines at it. With a container orchestrator, it's: I need to deploy these 500 containers, just make it happen. Make sure that they can all talk to each other how I've defined; I don't care where they land; just make it happen for me. So they make it easier to deploy our application.

But not only that, they do things like service discovery. You can think of that as being an internal DNS service. If I have a microservice that represents the database, I don't want to have to know its internal IP address, I don't want to have to know the details; I just want to say, give me the database connection, and it abstracts all of that away for us. Maybe I need a load balancer service, maybe I need a Redis caching tier, whatever that happens to be; service discovery makes it easy to find those.

I'm not going to step through all of these, but a couple of other important ones. Authentication and authorization: of course it's important, we want to make sure that people only get access to the tools and the container instances that they have, or should have, access to. Secrets management is an interesting one. We want to make sure that we're not hard-coding things like passwords inside of our containers; that's kind of a bad security practice, and I promise I won't tell your security guy. So the secrets management tools allow us to store those externally and safely access them so that they're not being stored inside of our container images. And logging and monitoring are, of course, critical. We want to know what's happening, where. It does get more complex: if I've deployed 500 instances of Apache, how do I accumulate all of those logs together? Again, we have tool sets inside of the container orchestrators that make all of this much, much easier.
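In Kubernetes terms, for instance, the service discovery and secrets pieces might look like the sketch below; the names (database, app: mysql, db-credentials) are illustrative.

```yaml
# A Service gives the database pods one stable name. Clients just
# connect to "database"; they never learn individual pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  selector:
    app: mysql        # route to whichever pods carry this label
  ports:
    - port: 3306
---
# A Secret keeps the password out of the container image; pods
# reference it at run time instead of hard-coding credentials.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: change-me   # example value only
```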
So again, if we look at the OpenStack user survey from this year, we can see that there are a number of different orchestrators being adopted. There's also a clear, I won't say winner, but a clear leader here. Kubernetes is, again from my anecdotal experience, by far the most common orchestrator that we encounter. Kubernetes is in roughly 80% of the organizations that I go and visit, in various stages of pre- and post-production utilization. Interestingly, the build-your-own entry and the ordering of the others is pretty much in line with exactly what I see as well. OpenShift I take a little bit tongue-in-cheek here, because OpenShift is actually a PaaS built on top of Kubernetes; Kubernetes underpins OpenShift. Then Docker Swarm, Cloud Foundry, Mesos, all of these others.

Mesos is kind of an interesting one, in that, numbers-wise, the number of distinct organizations deploying Mesos is not as high as the others, but when we do see a Mesos deployment, it is typically extremely large. This is because Mesos has been used for a number of years for big data analytics. I can run Spark, Hadoop, Cassandra, the ELK stack, all of these other tools directly on top of there. And while I've got 100 nodes or 1,000 nodes running Mesos, I may as well toss some containers in there with Marathon. So we tend to see very large clusters with Mesos, but not nearly as many of them.

So I want to talk about Kubernetes in particular, and arguably the hardest part of a container deployment: the persistence piece. Now, you may be thinking, Andrew, you're biased, you work for a storage company, and that may be true; I won't deny it. But most of the time the compute resources are fairly well understood: I need to deploy a container. The networking, interestingly, has been abstracted away, particularly in the orchestrators, to the point where I don't have to think about it. In the hypervisor ecosystem, we spent years working on how to get overlay networking to work, and work well, with virtual machines. The developers who work on Kubernetes and the other container orchestrators made it nearly transparent; the hardest part is, generally speaking, making sure that, if you're using Flannel, for example, the subnets line up. It's really gotten very simple. But storage persistence is still a complex problem. We still have to figure out how to connect persistent storage up to my application, and importantly, we also want to empower those application users to consume the storage resources that they need, when they need them.

So how does persistence work in Kubernetes? It works in two different ways in modern versions of Kubernetes. The first one, the one that has existed all along, works off the principle of manual provisioning. I have an application that I want to deploy to my Kubernetes cluster, and it needs some persistent storage. My Kubernetes administrator goes to the storage team and says, hey guys, can I have some volumes? The storage administrator provisions some number of volumes. Interestingly, we generally see these being very small volumes: a lot of 1, 5, 10, and 20 gigabyte volumes, not a bunch of 100 or 500 terabyte volumes. So they deploy a number of volumes that then sit there and wait to be utilized. The application comes along and says, I need three gigabytes of storage. Kubernetes looks at its pool of persistent volumes, PVs, and says, okay, I've got a five gigabyte volume that meets your requirements, so I'm going to assign it to you. Kubernetes then manages the connectivity: it connects that storage to wherever the application happens to be running. If it's host 12, it connects it in using iSCSI, NFS, whatever protocol you happen to be using.

But this is cumbersome for a number of different reasons. One, remember, the storage guy had to provision that, and knowing most storage guys, it probably wasn't done in an automated fashion; it was probably done at a command line or using an Excel spreadsheet. I've seen that. It's scary. Two, those volumes sit in an idle pool, introduced by the Kubernetes administrator, waiting to be consumed. That's capacity that I can't use for other things. And finally, from an efficiency standpoint, my application asked for three gigabytes but was given five. It was over-provisioned.
So starting in Kubernetes 1.4, or OpenShift 3.4 if OpenShift is your thing, they introduced the concept of storage classes, which greatly simplifies this whole process. Now the Kubernetes cluster administrator and the storage administrator work together to define classes of storage. I went with the incredibly creative and unique gold, silver, and bronze, but it could be monkey, giraffe, elephant; it could be blue, yellow, green; whatever those happen to be. My gold storage is maybe something all-flash; my bronze storage is maybe something that's all SATA. When the application needs storage, it now says, I need three gigabytes of bronze storage. Kubernetes reaches out to that storage provisioner and says, please give me three gigabytes of bronze. It's provisioned in real time, handed back, and then given to the application. So we can see that now we don't have the issue of over-provisioning, and we don't have the issue of resources sitting idle; we can consume them on demand.

Going forward, we're hoping to introduce things like: wouldn't it be great if we could snapshot those volumes? Wouldn't it be great if we could clone them, for the application guys to take advantage of that data? Kubernetes is still a little ways away from that, but those are things that that community, much like the OpenStack community, is looking at. And if you're thinking to yourself, this looks an awful lot like Cinder, you're probably right. It does look an awful lot like Cinder. Cinder is the same basic premise of enabling the tenant, the application, to consume and use the storage resources they need, when and where and how they need them. Same premise, just applied at a different level.
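Concretely, the two halves of that hand-off might look like the YAML below. The provisioner string is backend-specific, so the one here is purely illustrative.

```yaml
# Defined once, by the cluster administrator and storage administrator together.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bronze
provisioner: example.com/provisioner   # backend-specific; illustrative value
parameters:
  media: sata                          # "bronze" maps to the slow tier
---
# What the application asks for: three gigabytes of bronze,
# provisioned on demand rather than pulled from an idle pool.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: bronze
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```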
So the last section that I'll talk about is the continuous integration and continuous deployment tools. Now, there is a massive number of tools. Some are on-premises, some are off-premises; a lot of them are delivered in the cloud. In fact, if my application source is stored out in the open, for example on GitHub, I can point these services at my GitHub repository and simply ask them to pull it, build it, and test it, and they'll do all of that without ever using on-premises resources. With that being said, when we do look at the on-prem data center, by far the most frequent tool that we encounter is Jenkins.

So I was an infrastructure guy, a virtualization guy, a storage guy for 15 years before I got involved in the DevOps ecosystem a few years ago, and Jenkins was a big black box to me. It was this confusing mess of, what do you mean you're compiling things, and what is compiling anyway, right? But at its core, Jenkins is simply an automation tool. If you look at Jenkins at a high level, and I like to use the Jenkins Pipeline example, all we're doing is defining a series of steps that the developer would normally do manually. Instead of having them work from a checklist that says: check out the most recent revision of the code; execute this command to build it; okay, now that it's built, try to do these things with it, run this test, what was the result, what happened; all of that was successful, great, let's push it into the next stage of deployment. Maybe I want to push all of the successful build output into an artifact repository, which I'll talk about next. Maybe I want to take it and push it up into production. So when we look at Jenkins Pipeline, it defines each of those steps.

And I like to talk about all of these things in this particular section at the very end, because the reality is, yes, Jenkins has been around for a long time, and yes, there are a lot of plugins and a lot of ways to consume various tools and infrastructure bits, but we can also take advantage of things like Puppet, Chef, Ansible, Kubernetes, all of these other tools, directly from within Jenkins. Jenkins is supremely flexible: I can take advantage of whatever I need to as a part of my build process. So I would encourage those of us who are operations people, infrastructure operators, to work with the development teams to figure out what they're trying to do inside of Jenkins, if you're using Jenkins, or whatever that CI pipeline happens to be. So we can say: hey, did you know that you can automate the provisioning of new virtual machines? Did you know that you can clone this volume using Cinder? Did you know that you can use Puppet to deploy into this test environment the same way you're deploying the production application? And make sure that they take advantage of that.

So the last thing that I'll touch on is JFrog. JFrog is the one that I am most familiar with; that being said, they are not the only product capable of delivering what is, in effect, artifact repository management. So what is an artifact? In the build process, if I am compiling things, I end up with a compiled binary, some object. I want to store that somewhere, because sometimes those applications take a long time to compile, and even though I can revert my code base in SVN or Git or whatever I happen to be using, I don't want to go through a rebuild process. So instead I take that binary and shove it into an artifact repository, and now I can access each of those revisions; I can pick and pull from different ones as needed.

So why am I bringing it up as a part of this session? Because it turns out that when I create a developer workspace, our infrastructure can help with a lot of what that artifact repository is capable of. If I can clone the underlying storage that's providing those artifacts, take my Cinder volume, clone it, and mount it into a virtual machine that has all of the developer tools, as provided by containers, I can give developers access to the whole revision tree so that they can do their testing. Am I having issues with the next version of whatever library? Am I having issues with a previous version? Now we can attach that; we can give them all of those tools very, very rapidly. It's much easier, much more efficient to clone things at our level, the infrastructure level, than it is at the application level.
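Tying the pipeline idea together, here is a minimal sketch of a declarative Jenkinsfile covering the checkout, build, test, and publish steps just described; the shell commands are hypothetical placeholders for whatever your build actually runs.

```groovy
// Minimal declarative pipeline: each stage is a step the developer
// used to run by hand. The shell scripts are hypothetical placeholders.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }            // pull the latest revision
        }
        stage('Build') {
            steps { sh './build.sh' }         // compile the application
        }
        stage('Test') {
            steps { sh './run-tests.sh' }     // fail here and nothing ships
        }
        stage('Publish') {
            steps { sh './push-artifact.sh' } // e.g., into an artifact repository
        }
    }
}
```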
So in conclusion, I'll summarize by saying: be aware of, be conscious of, be thoughtful of, and be communicative with your development teams and the capabilities that they need. I will warn you against, and I saw a couple of head nods earlier when I mentioned this, shadow IT and the culture of no. We want to enable our applications, we want to enable our business, and taking advantage of tools like the ones we've talked about here, on top of our infrastructure, on top of OpenStack, is critically important to that success. I also want to remind you that none of us is in danger of losing our jobs because we're helping the applications guys do better.

Automation does not eliminate IT jobs. It makes our lives easier. It makes it so that we can focus on the things that are important to the business. One of the examples I like to use: as a virtualization administrator, who likes to provision virtual machines? It sucks. The first four or five, it's like, oh yeah, this is kind of cool, I can make a whole new server and it just takes a couple of seconds. The next ten, it's, okay, that was neat. Every one after that, you just kind of want to stab yourself in the eye. It's horrible. Leverage automation and get out of that business. I know I'm preaching to the choir, this is OpenStack Summit, right? So we offload much of that through Nova, through Horizon, and through the OpenStack APIs.

For those of you who are interested, and I am slightly biased towards my company, we have a great infrastructure resource in what we call NetApp.io. This is the "how" behind leveraging things like storage with Puppet, Chef, Ansible, containers, container orchestrators, all of these things that we've talked about here. It's a fantastic way to interact with our team and find out what's happening, as well as how you can take advantage of those resources.

So finally, thank you very much for your time today. I greatly appreciate you being here. I am almost exactly on time, and I know I'm the only thing standing between you all and lunch. So have a great rest of the conference, enjoy the StackCity event tonight, and thank you again.