Thanks for holding out till the bitter end, last session before the last keynote on Friday. My name is Jason Clark. I'm here to talk to you about real world Docker for the Rubyist. And this talk's genesis really comes out of the fact that the company that I work for, New Relic, deploys a lot of our services using Docker. I hear a lot of hype about Docker. I hear a lot of people saying you should use it, and then wildly diverging opinions about how to use this tool. Docker turns out to be a toolkit that gives you a lot of options that you can pick from. And so what I wanted to do was give you a presentation that tells you a way that you can approach Docker. This is tried and true, tested stuff that we've been doing at New Relic for actually the last couple of years. We got into Docker pretty early, so we've experienced a lot of bleeding edges, and we've experienced a lot of things that have made our lives easier. So this talk is going to take the shape of a story. And this story is gonna be about two developers. Jane, who is a new New Relic employee and has a great idea for a product, a service that she wants to run. We encourage experimentation, so it's a lines of code service. It'll do metrics for how many lines of code you have. Super useful, and we wanna let people experiment and see how that goes. And Jill, who is someone who's been at New Relic a little longer, has some experience, and can help answer some questions. As we are a public company, this is our safe harbor slide, which says: I'm not promising anything about any features, please don't sue us, and please don't assume anything based on me making up stories about services that we might develop, okay? So we're all clear, this is a bit of fiction, but it will help us frame how we use Docker and give you a picture of ways that you might be able to apply it. So one of the first questions that Jane has as she comes in is, why does New Relic use Docker at all?
What is the purpose, and what are the sort of features that drove us to this technology? One of the big components of it is the packaging that you get out of Docker. So Docker provides what are called images. And an image is basically a big binary package of files. It's essentially a file system: a snapshot of a piece of code and the dependencies it has, that you can then distribute around and run. At this point, Jane's like, okay, I've heard about this. There are images, and you can build off of them. So for instance, there's an official Ruby image that's maintained by Docker. You can use that image and then put your Ruby code into an image that you build off of it. And Jane pauses Jill at this point and is like, okay, so this is slightly confusing. I've heard about images and I've heard about containers. What's the relationship here? And the relationship is that an image is kind of the base of what you're going to run; it's sort of the deployable unit. Whereas a container, a term which does not really resonate for me, is the running instance of something like that. Now, the way that you can think of this is to draw an analogy to Ruby: the image would be like a class in Ruby. It defines what's there, what's possible, and what's installed. And then the container is like an object instance that you create with new. It's an individual running piece of that. Okay, so we've got Docker images, and those we can deploy as running containers that run our code in our staging and production environments. But there are lots of ways to package up your stuff. I mean, you could just shuttle files out there. You could make a tar out of it. That's not enough to tell us why we would want to use Docker. And that brings us to the other major thing that Docker brings, and that's isolation.
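Jill's class-and-instance analogy can be sketched in Ruby itself; this is just an illustration of the idea, not Docker code, and all of the names here are made up:

```ruby
# An image is like a class: built once, it defines what's inside.
# A container is like an instance: many can run from one image, and
# each gets its own isolated state.
class Image
  def initialize
    @files = { "app" => "my code" } # each "container" starts from the same snapshot
  end
  attr_reader :files
end

container_a = Image.new # like `docker run image`
container_b = Image.new
container_a.files["tmp"] = "scratch data"
container_b.files.key?("tmp") # => false: one container's writes don't leak into another
```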
So for most of us, we don't have our apps set up in such a way that one host will be completely maxed out by the app that's on it. We may want to be able to share those resources and run multiple things across multiple machines to increase our resiliency and use our resources well. And traditionally, you might have done it in some fashion like this: you have your server, you've got different directories where you keep the different apps, and you deploy and run those things on that given host. Well, the problems here are pretty obvious when you look at it and see these things sitting next to each other. They're all sharing the same neighborhood. They could interfere with each other's files. They could interfere with processes that are running. They're sharing the same memory on the machine, and there are lots of ways that these two applications might interfere with each other. Docker gives us a way to contain that, to keep those pieces separate. Now, they still use the same kernel. This is not like a VM where there's some separate operating system. But Docker provides isolation so that each of those running containers appears to itself as though it is the only thing in the universe. It only sees its own subset of the file system. You can put constraints on how much CPU and memory it uses. And so it minimizes the possibility of those two applications interfering with one another despite the fact that they're running on the same host. So this is a pretty attractive thing for us: to have shared hosts that we can deploy a lot of things to very easily without having to worry about who else is in the neighborhood. All right, so clearly, you know, Jane's a new developer who has shown up; how do we get started? Well, Docker is a very Linux based technology, and it has to be running on a system with a Linux kernel. And, you know, a lot of us here don't run Linux systems directly.
We run Macs or Windows. And fortunately, the Docker Toolbox is available. This comes from Docker; it's the sanctioned way to set up a development environment and get the Docker tools installed on a non-Linux system. So once we have that, then we can get down to actually writing our own images, to construct an image for the app that we want to deploy. So Jane, you know, sits down with Jill, and they're pairing. And Jill has her write this in a file called Dockerfile at the root of her application. And, you know, Jane recognizes a little of this. She had done some reading about Docker; that FROM line says what image should I start from as I'm building the image for my app. But that's all that Jill tells her to write. And she's like, well, shouldn't there be some other things? Like, a Dockerfile that I've seen, you know, has working directories and copies and runs and a bunch of shell commands and things that are setting things up. So Jane's really confused about what's going on. This is an image from New Relic that we've got, but where's the rest of the Dockerfile? And Jill says, okay, this is a fair question. But, you know, running code's awesome. Let's get your thing deployed to staging, and then we'll dig into this later and look at how that very simplified Dockerfile actually provides us a lot of value and shared resources. So, having written this basic little Dockerfile, Jane goes to the terminal and writes this command line. It says docker build; the -t provides a tag for the image that we're going to construct, and the dot tells it to work in the current directory. And then once we've done that, there will be a whole bunch of output that appears at the command line as Docker goes through, takes that base image, and then runs the various pieces that are baked into that image to build out a package of your app.
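The build step Jane runs is the standard build invocation; a sketch, with loc_service as the tag this story uses:

```shell
# Build an image from the Dockerfile in the current directory (the trailing dot);
# -t tags the resulting image so we can refer to it by name
docker build -t loc_service .

# List the images Docker knows about; the new image shows up tagged :latest
docker images
```

These need a running Docker daemon, so they're shown here only as a sketch of the workflow.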
Now, if you have errors in your Dockerfile or you have problems (file permissions, things that go wrong), this would be the point when building that would tell you. You'll see output from those commands there. But once it's successful, if we ask Docker what images it knows about, it will give you a listing, and here we'll see the LOC service image that we built. It gave it a default tag of latest because we didn't tell it to give it a particular tag, and that image is a runnable copy of our application that we can do something with. Well, this is all well and good for Jane on her local machine, but clearly if this thing's gonna go into a staging environment, that image needs to get from her computer to somewhere else. And to fill this gap, there are a variety of things called Docker registries. By default, Docker runs one called Docker Hub. This is what all of the Docker tools will default to if you don't specify as you push and pull images; it's where they will look. There are alternatives, though. At New Relic, we ran into a problem when Docker Hub deprecated the version of Docker we were using more quickly than we had moved some of our systems off of it. And so we had to go looking for some alternatives as well. One that we've had pretty good success with is called Quay. I know it's spelled kind of funny, but that's how the word gets pronounced. And it's very similar to Docker Hub. It provides you a nice web UI. You can push and pull images. They have a paid service, so you can have those be private. And so that's been one of the major alternatives that we've gone to as we've moved off of Docker Hub. Another alternative is a piece of software out there called Dogestry. Now, Dogestry is a little more bare bones, but what it lets you do is store images on S3 in your own S3 buckets. And so it sort of takes that third-party provider out of the picture, which can be important if you have critical deployments.
If your deployment depends on Docker Hub being up, then when Docker Hub is down, you can't deploy your stuff. That might be a problem for you depending on your organizational structure and scale. All right, so we have an image. We have this picture of what Jane's service looks like that she wants to get running. So she wants to go get this started up out in our staging environment. How does she do that? Well, at New Relic, we developed a tool called Centurion. Now, typically, if you want to just run a Docker image and create a container off of it that'll start your application up, you would say docker run and then the image. That image has a default command baked into it, which is what will get invoked, and then this starts running. If you run it in this fashion, it will be blocking; you'll see the output that's coming out of the container as the commands run. So you can imagine that this is something you could do: go out to a machine somewhere in the staging cluster and tell it to docker run these containers. And that would work. But unfortunately, if your company gets to any sort of size and scale, you probably want things running on multiple hosts. And you probably have a lot of computers out there, and interacting with those individually is problematic. And so that was where Centurion came in. Now, this is certainly not the only way to solve this problem, and I'll briefly refer to some other possibilities later on. But when New Relic started with Docker, these things didn't exist. Centurion is a Ruby gem that allows you to work against multiple Docker hosts and easily push and pull and deploy your images and do rolling restarts and things like that. One of the other big powers that Centurion brings is that it is based off of configuration files.
And these are things that you can then check into source control, you can version, and have a central point where you know what's deployed in your Docker environment, rather than individuals going out to boxes or starting containers that you don't know anything about. If you run everything through Centurion, you have a central record of what's actually going on. Centurion bases these configs off of rake, so you have some amount of dynamic programming that you can do in Ruby. You define a task for a given environment that you want to deploy to. So in this case, we've made a task for our staging environment. We tell it what Docker image we want it to pull onto those hosts, and that allows us to have it grab the latest. You can also tell it different tags, so if you had different versions of the service and you wanted to deploy a certain one, you could do that. And then, to handle that issue of having lots of hosts that we might want to start on, you can specify multiple hosts that Centurion will then go and restart or deploy these services to. So with that, it's pretty easy to get Centurion started. It's just a gem. You install it, and it installs an executable for you called centurion, unsurprisingly. There are a number of flags that it'll take, but the basics are: you tell it an environment, you tell it the project where it should find the configuration, and then you give it a command. There are a couple of different commands; we'll just give it a deploy and say, go out there, start these things. So Jane's a little nervous. I mean, she's hardly been here at all, but she asks, does this all look good? Are we ready? Yeah, let's go. She kicks off the Centurion command. And what you'll see is a lot of output as it connects to the various hosts. It will go through them and pull all of the images down, so that all of the boxes that you need have the image that you're gonna then start with.
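A Centurion config along these lines might look like the following sketch. The file path, image name, and hostnames are all made up for illustration, and the exact DSL may vary with your Centurion version:

```ruby
# config/centurion/loc_service.rake (illustrative path)
namespace :environment do
  desc 'Deploy settings for staging'
  task :staging do
    set_current_environment(:staging)
    set :image, 'quay.io/example/loc_service' # image to pull; append a tag to pin a version
    host 'staging-docker-1.example.com'       # Centurion rolls through each host in turn
    host 'staging-docker-2.example.com'
  end
end
```

With that in place, the invocation described above is along the lines of `centurion -e staging -p loc_service -a deploy`.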
And then, one by one, it's going to stop any container that's running for that particular service on each box, and then start a container up for you based off of that config. After it's connected, there are also options that'll let it hit a service status check endpoint, so you can do rolling deploys where you make sure that things are actually up and running before you move on to the next host and shut things down. All right, so having done all of these things, it's been shipped. Things are in staging. Jane is able to test out her code, sees that things are working swimmingly, and goes home for the day feeling very accomplished. She goes to bed, comes back the next day, and unfortunately, things were not as great as she thought. The service is not there. Where did her app go? Well, it's time for the tables to turn; Jill's gonna ask a few questions of Jane. So Jill says, well, where were you logging to? Let's start trying to figure out what happened here. And Jane looks through the code, and she had kind of cribbed a line from somewhere that she wasn't really clear about; in the production configuration for her Rails app, it looked like it was a standard practice around New Relic to have all of the logging go to standard out, rather than going to files that would get written inside of the Docker container. Okay, so that being the case, this actually put Jane in a really good position, because New Relic's infrastructure, where we run the Docker hosts, actually takes all of the standard out that comes out of Docker. Like we saw, when you run a container, you see what's going to standard out from it. So we're able to capture that, and we actually forward it to a couple of different places. We forward it into an Elasticsearch instance, which runs Kibana, which is a fairly common sort of logging infrastructure. You may have heard it referred to as the ELK stack: Elasticsearch, Logstash, and Kibana.
And then we also take that opportunity to send things to our own internal event database, called Insights. And this lets us do analytics and querying across these logs. But you could set things up to send these logs that are coming out of your Docker containers anywhere you want. I highly recommend that if you do use Docker in production in this way, you make sure that all of the logging that you can is going out of the containers and not getting written inside of them, because it will give you better visibility, for one, by getting it out. And it'll also prevent the file system sizes from getting huge in the Docker containers themselves. All right, so they take a look at the logs. There's not really anything there. Unfortunately, you don't always hit a home run on the first go. So it's time to take a little closer look at the containers themselves. Well, that's actually something that you're able to do, and Docker provides the commands for it. So here we're specifying -H, a capital H. That points us at a different host. By default, Docker is gonna be talking to the Docker daemon running on your local machine, so this lets us go point at our staging environment. And the command way off on the end there, ps, lists the running Docker containers that are on that host. And here we see a container ID. It has a nice SHA, which will be fun for us to type, but that's an identifier for the individual running container that we've got going out there. And it looks like it's still there and it's running. So what we can do from there is say exec, rather than run, and give it the container ID. The -it sets things up to be interactive. And so this will actually give us a bash prompt on that Docker container. Now, this depends on bash being installed in the Docker container that we're connecting to, and there are a variety of other things that could interfere with this.
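Put together, the two commands described here look roughly like the following; the host address, port, and container ID are all illustrative:

```shell
# -H points the docker client at a remote daemon instead of the local one;
# ps lists the containers running on that host
docker -H tcp://staging-docker-1.example.com:2375 ps

# exec targets an already-running container (unlike run, which starts a new one);
# -it makes it interactive, so we land in a bash prompt inside the container
docker -H tcp://staging-docker-1.example.com:2375 exec -it 3f2a9c1d bash
```

Again, these are sketches of the workflow; they need a reachable Docker daemon to actually run.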
But we have things set up so that we can do this for any sort of debugging that we need on those containers as they're running in our production and staging environments. All right, they look around. They see that the processes are gone. It's not exactly clear what's going on, but they eventually dig up some stuff that looks like there might have been some things happening with memory. And that tickles something in the back of Jill's brain. She remembers another project that had some similar problems, where things just seemed to be disappearing. Processes would just go away with no trace that they could see. And the problem there was memory. So the lines of code service apparently is clocking in at a good 300 megs. Not totally crazy for a Rails app, but a little big. And that was the key they needed to figure out this problem with things getting killed. Like we talked about way at the beginning, one of the key things about how Docker provides you isolation is that you can set limits on the containers for how large they can get and what memory they can consume. This prevents the individual containers from interfering with other things that are on the same host. And it turns out that 256 megs was the limit that was being set by default if you didn't specify anything. So as soon as you got past that, Docker's infrastructure would kick in, and it would just kill processes to free up the memory. Well, this is clearly not a good situation. Fortunately, we allow for configuring that. In the Centurion config, you can say memory and tell it to give us two gigabytes. And what this actually correlates to is a command line flag that you can give to Docker to tell it how much memory you want. And basically any of the flags that you can send to Docker when you're running things, to modify that environment and tell it differently how to run stuff, are available through the Centurion configs.
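In Centurion config terms this is a one-line change; the exact value format the memory setting takes is an assumption here, so check the docs for your version:

```ruby
# Inside the staging task: cap the container at 2 GB instead of the default.
# This corresponds to docker run's -m/--memory flag.
memory 2147483648 # bytes (2 GB); the units are an assumption
```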
So you have a source-controlled place to make all of the changes that you might wanna make to how your containers run. All right, this is great. We've got two gigs, things stay up, they keep running. But we actually asked for a little more memory than we really needed. And Jane's like, well, we should probably eke a little bit more performance out of this. Even though it's in staging, it'd be nice to have a little more room. So I wanna increase the number of unicorn workers that I've got. Jill's response is to try the -e flag. Docker provides flags when you're running to let you set environment variables that will be passed along into the container. And this is actually a really fundamental part of how you should structure your Docker systems: things get passed in from the outside. So when we say -e UNICORN_WORKERS, once we're inside that container, it's just an environment variable like you've probably seen in many other places. For our setup, we have a fairly standardized unicorn config. And what we do is look for the UNICORN_WORKERS environment variable, turn that into an integer, and tell it to run the number of workers that we want. And so our Docker image can be used to scale up or down, to run larger or smaller numbers of workers, without us having to construct a new image that changes that configuration. As you might expect, Centurion supports this. In fact, this is one of the key features of how we use Centurion: we drive as much of the config out of the code and out of the file system and into the environment as we possibly can. And so you can say env_vars and give it a hash with the names and values. Now, this is not the only thing that you might wanna configure through the environment. Your database YAML file in a typical Rails app gets parsed through ERB before it's actually read. And so you can do things like this, where you parameterize potentially off of the environment.
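The unicorn config pattern described here boils down to a few lines. UNICORN_WORKERS is the variable named in the talk; the helper name and the default of 2 are assumptions for this sketch:

```ruby
# Read the worker count from the environment, with a fallback so the app
# still boots locally when the variable isn't set. In a real unicorn.rb
# this value would feed the worker_processes directive.
def worker_count(env = ENV)
  Integer(env.fetch("UNICORN_WORKERS", "2"))
end

worker_count("UNICORN_WORKERS" => "8") # => 8 when set via `docker run -e`
worker_count({})                       # => 2 locally, with nothing set
```

This is what lets the same image scale up or down without a rebuild: only the environment changes.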
So when we run in our production and staging, we can be explicit about where to go connect to our databases. But one of the niceties is, since this is just Ruby code inside of the ERB braces there, we can also give ourselves defaults. So if you're running the Rails app locally, it's gonna work; it's gonna find the things that it needs. Similarly, application-specific configs are something that we can drive off the environment as well. So in your application.rb, you can set config values, and you can give these arbitrary names, arbitrary things that you want to pass around, and then those will be available throughout your Rails app. So here, we're looking at another service that we're gonna talk to. We set the URL, and we have a default to fall back to. We set timeouts. And what this does is give us one central place in our Rails app where we will see all of the things that we can configure through the environment, all of the knobs and switches that you might want to control. Accessing this from other places in your code is as simple as saying Rails.configuration and then the accessor that you specified. So here we can get the service URL and timeout that we were talking about and use those throughout our system. Now, some of you may have heard of the twelve-factor app. This is a thing that Heroku has promoted that's got a lot of principles around how to run applications well in production. This whole environment-driven thing, while it applies very strongly to Docker, is not limited to it, and this is one of the key tenets that they have. Driving things through the environment is also a really good idea for security reasons. If you have secrets, passwords, sensitive pieces of information, and you put those into your source code or into files that are in your Docker images, then if somebody gets ahold of that Docker image, they will be able to see what that stuff is.
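The database.yml trick works because Rails evaluates the file as ERB before parsing the YAML. A minimal, self-contained sketch of the mechanism (a local hash stands in for ENV here; the DB_HOST name and the localhost default are made up):

```ruby
require "erb"
require "yaml"

# Stand-in for ENV; in a real database.yml you'd call ENV.fetch directly
env = { "DB_HOST" => "db.staging.example.com" }

template = <<~YAML
  production:
    host: <%= env.fetch("DB_HOST", "localhost") %>
YAML

# ERB runs first, so the environment wins when present and the
# default keeps local development working
config = YAML.safe_load(ERB.new(template).result(binding))
config["production"]["host"] # => "db.staging.example.com"
```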
So if Docker Hub gets compromised, or some other place does, you don't want your secrets baked into those images. By putting them in the environment, they're only there at runtime, and someone would have to have access to the running containers to get at those bits of information. All right, so this is all well and good. Jane's feeling awesome about the work that's going on, but she really wants to understand better. So that one-line Dockerfile that we showed at the front, right? She just wrote one line to say FROM this image. How does that actually work? Well, it turns out at New Relic, we've put a lot of effort internally into building shared Docker images, on top of pieces of the Docker infrastructure that we've gotten from the world at large, to make our lives simpler and bake in the things that are shared across our applications. So base builder was the name of the image that she started from. And this encodes a lot of our standard ways of approaching things at New Relic. For one, for various historical reasons, we run mostly off of CentOS. That's what our ops folks are most comfortable with. And so we derive ourselves off of a CentOS image, rather than a lot of the base Ruby images, which are on either Alpine or Ubuntu Linux. Well, we know that this is something that people are going to run Ruby off of, and so one of the first things that we do in this base image is install Ruby versions for you. Now, we end up using rbenv to do that. That's not strictly necessary, because there's not gonna be version switching going on; it just happens to be the tool that is most commonly used at New Relic for switching Rubies. You can get a Ruby installed onto your Docker image however you choose. Once we have that Ruby version installed, we can start putting in other things that we assume people are going to use. For process management, we use an application called supervisord, so we install that.
Most of the time you're running something that's a web service or a website of some type, so we run that through nginx, so we put that into this base image. In fact, we can go even further. We can gem install Bundler and then rehash, so that the executable for bundling is available. And this is great. We're finding all of these things that are shared between these applications and taking that duplication out, making it simpler for people to build their images. So why not just bundle install? Like, get all the stuff, right? Well, here we hit a roadblock, and it's pretty obvious when we try to build it what's wrong. The base builder is a base image. This isn't the application itself, so we don't actually know what your Gemfile is yet. Someone is going to build their app on top of this, and so we can't go and do the bundle install when we're making the base; we don't know what's gonna go into that actual app. But fortunately, Docker provides us the tools that we need to do what we really want, which is to say: when somebody uses this image, I have commands that I want you to run. Docker's parlance for that is ONBUILD. Any Docker command that you put into your image, if you say ONBUILD before it, will wait to run until after somebody has used your image in their own Dockerfile. And so we can do things like wait until somebody uses this in their app, and then go copy their Gemfile in and bundle install. So we get their copy of dependencies, but we don't have to have them write the lines to know to go bundle and do the correct things in their particular Dockerfile. In fact, we've pushed this approach quite a ways and provided not just standard things that everybody does, but options that people may want to choose. So Unicorn is used pretty broadly at New Relic. It tends to be the default web server, but there are people that are using Puma and like to try that out.
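In Dockerfile terms, the technique looks something like this sketch from a hypothetical base image (the paths and the exact bundle invocation are assumptions):

```dockerfile
# In the base image's Dockerfile: ONBUILD defers each step until a
# downstream Dockerfile says FROM this image, at which point that
# app's own Gemfile exists and can be bundled.
ONBUILD COPY Gemfile Gemfile.lock /app/
ONBUILD RUN cd /app && bundle install
```

The app's own Dockerfile can then stay at a single FROM line, and these steps fire automatically during its build.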
And so what we've done is create scripts that allow those sorts of configurations to be a one-line thing that you can put in your application's Dockerfile. And all these have to be is a script that modifies whatever configs you need on disk to get the effect that you want. In our case, this is just a matter of swapping out the supervisord config for which app server to start up, and then swapping in a Puma configuration instead of a Unicorn config in the app itself. But this is a one-line thing for somebody to do in their app and be able to try out. In fact, we've even gone so far as to provide helpers for installing other things that people might want, like Python. We have some teams that have background processing that runs in Ruby and then has some Python that it needs to invoke. And so we can provide simple wrappers baked into these base images to smooth the path for app developers as they do their work. All right, so it's a fun technique. It's fun to see which things you can pull out and make it so that people don't have to think about them. But let's get back to some code. So Jane keeps writing her app. She's working on this lines of code service, and she wanted to write a file somewhere. She just kind of picked the root directory to go write it, and she's getting an error out of it. So she pings Jill, Jill comes over, they take a look at it, and it's, you know, again a pretty straightforward error message, but it's not totally clear why this is happening. Permission denied: she tried to put this file there. And Jill, being fairly experienced, knows just what the problem is. The problem is with nobody. Who's nobody? Well, nobody is an identity that we have on our Linux machines that has fewer privileges than root. It's actually the user that we run things as inside of our containers by default.
And so here, and this is not super relevant in all of its details, this is how supervisord starts an app up, and we say user is nobody. There are things at the Docker level where you can control this as well. But this makes Jane a little confused, because she's heard from many different people about how Docker runs as root, and isn't that fine, because the containers are isolated? And while it is okay to do that, and it's sensible why Docker has chosen that as the default, it doesn't mean that you can't crank things down further. If you are writing your own applications, you can be more defensive than Docker is itself. And by running as nobody inside of our containers, we give ourselves extra protection in case there is some exploit or some problem with Docker that would let someone elevate root privileges inside of the container out to the host. So running things in as secure a mode as you can, within the boundaries that are in your control, will give you a safer result in the end. All right, so Jane gets that fixed up and starts writing things in a location where she's allowed. And then, maybe a little late, she comes around to say, yeah, I wanna write some tests. How does Docker fit into this? Well, there are some ways that you can work with Docker to make sure that your tests are running in a realistic environment, like where you're going to deploy. The simplest, most straightforward thing that you can do is run alternate commands against the images that you've built. So here we say docker run against an image of our lines of code service, and we just tell it bundle exec rake, and it goes off and runs the tests inside of that container. Now, this presumes that all of the configuration that's necessary is there. If it needs database connections, you'd have to figure out how to feed those things in. But at base, all it needs to do is run that Ruby code inside of the container, instead of running your full web application.
But unfortunately, this has a problem, and that's the fact that it relies on the image that you built having your current code. And I don't know about you, but I occasionally will edit my code while I'm working on it. If you make a change to your tests, or a change to your production code, you have to rebuild that image to get those tests to run against the current thing that you're doing. And I don't know about you, but this would make me very sad. Anything that gets in that loop, making it so I have to do something extra before I can run my tests, is not a really great experience. Fortunately, Docker does have some options that will let you get out of that and do things a little differently. And that is with mounting volumes. So here we have a docker run command. It's running against our lines of code service image, and that -v is the important part. What that is saying is: take what's on my local host, the source of my app, and make it so that it appears inside my container at /test_app. And so this mounts that in without rebuilding the image. What we actually have happening at New Relic with most of our tests is that they run against the Docker image, but they just mount the current code into that image rather than rebuilding it from scratch. You have to do a little directory munging to make sure you're in the right place to go run the code, but otherwise this is a very good approach to keep you from rebuilding images all the time. So life moves on. Jane's got more and more things that she's wanting to do with the service. And, as often happens, she maybe is looking to use Sidekiq to do some background processing, so she needs a Redis instance, and figures, oh, I need to talk to somebody to provision or set that up. Well, it turns out that what we built with Docker allows her to kind of self-serve that and have stuff deployed through the same mechanisms.
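The two test-running variants described here, sketched as commands (the image name, mount path, and the use of -w for the directory munging are illustrative):

```shell
# Run the test suite inside the image instead of the default web process
docker run loc_service bundle exec rake

# Mount the current source into the container (-v host_path:container_path)
# so edited code and tests run without rebuilding the image; -w sets the
# working directory to the mounted code
docker run -v "$(pwd):/test_app" -w /test_app loc_service bundle exec rake
```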
So what we have is an image, already constructed, that we use internally that has Redis installed and takes all of its configuration through environment variables. So all that Jane needs to do to get a running Redis instance into staging or production is to create that configuration and go deploy it the same way that she's been doing with her app. This is a powerful approach. You can do this with anything where you've got some sort of appliance: code that you would like people to be able to parameterize and run without tampering with it. If you build the images to drive off of the environment, then people can just take that and run with it, kind of out of the box. So there's a lot of talk about Docker. There's a lot of things that are going on. Centurion came out of a need that we had a couple of years ago at New Relic, but there are a lot of other things in the ecosystem that might be of interest or something that you might want to pick up today. One example of that is Docker Swarm. Now, this comes from Docker. It is software that easily allows you to control a cluster of hosts, like the sort of staging environment that we have there. Docker Swarm is a good way to sort of bootstrap yourself into running in that type of environment. Something that we're looking at, to potentially either evolve Centurion into or use to replace it, is a project called Mesos. And Mesos, in conjunction with a thing called Marathon, allows you to have more dynamic scheduling for your containers. So rather than saying I want to run this on hosts A, B, and C, you would tell Mesos: I would like to run three instances of my image, please go find somewhere to put them. And it would put them out there. And it has some really nice properties around resilience. If it drops one of those instances because something crashes, Mesos can start it back up for you automatically. You can scale things dynamically with it.
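A hypothetical example of deploying such an environment-driven appliance image; the image name and variable names here are invented, but the shape is the point: all configuration flows in through -e flags rather than files baked into the image:

```shell
# Deploy an internal, appliance-style Redis image configured entirely
# through environment variables. Image and variable names are made up
# for illustration.
docker run -d \
  -e REDIS_PORT=6379 \
  -e REDIS_MAXMEMORY=256mb \
  -p 6379:6379 \
  internal/redis-appliance
```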
A similar technology for this sort of container orchestration is Kubernetes from Google. And there are a lot of other things out there happening in this space. There's a lot of people working to make this a better workflow. All right. So we've come to the end of Jane and Jill's story. We've looked at how you can use Centurion to deploy and control multiple Docker hosts. We've looked at how using the environment to drive your configuration allows things to be more dynamic and controlled. We've looked at some tips and tricks around building shared images so that you can spread best practices within your organization and not repeat stuff. We've looked at some security and testing, and a little peek at the future of where things might be going. I hope this has been of use to you. And hopefully you'll have good success if you choose to use Docker at your company. Thank you. So the question was where the Dockerfile lives. And yes, typical practice for us is that the Dockerfile lives at the root of your Rails app. It doesn't have to; you can put it in other locations, but that's been the simplest convention that we've followed. Yeah, so the question is about Vagrant versus Docker for similar sorts of workflows of testing and developing. From what I've experienced, Docker startup is very fast. The image building takes a while, but if you have a pre-baked image, actually starting a container is really quick. So it would definitely be worth looking into. I think it provides similar things to Vagrant and is a little lighter weight. That's one of the selling points there. So the question was what concrete usage we have of this. I think at last count we had a couple hundred services running on this internally. It is not everything that we run. There are a number of our bigger, older applications, especially the data ingest, that aren't converted over to Docker.
But pretty much any of our new products that have been developed in the last year or two have been deploying into our Docker cluster. So yeah, the question was whether the deployment workflow is building a new image: run your tests, build a new image, and then deploy that image. And yeah, that's correct. We run things through a CI system. We happen to use Jenkins, but it's fairly up to you exactly how that flow happens. I showed a lot of us using the command line directly to do those deploys. We don't actually do that much in practice. We have a central CI server do it, but all that it's doing is calling Centurion from a little shell script, the same way that you could from your local machine. Yeah, so the question is, what do we do about things like database migrations and asset compilation? Asset compilation we will very often do at image build time. I didn't show it here, but that is a common thing for us to do in constructing the image. We have some other techniques that we're playing with for externalizing the assets entirely from our Rails app, which takes that out of the picture. Database migrations: the database currently, and probably for the foreseeable future, does not actually live in Docker itself. And so we will tend to have another environment where we would potentially use that Docker image to go run the migrations: use the image, run that command to go talk to the database and do those migrations. But it's not part of the individual deploys. It's normally scheduled separately. And the question is, what about the fact that migrations might break currently running instances? That's something that we kind of have to manage ourselves at this point. It's certainly something you could build more infrastructure around. We tend to just have a very conservative cadence for when we do migrations in the apps that have those sorts of setups.
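One way that migration step can look, as a sketch: the image name, database host, and the assumption that the app reads its connection info from DATABASE_URL are all illustrative, not the speaker's exact setup:

```shell
# Run migrations as a one-off command using the same image you deploy,
# pointed at a database that lives outside Docker. DATABASE_URL and the
# image name are hypothetical.
docker run --rm \
  -e DATABASE_URL=postgres://db.internal:5432/loc_service \
  lines-of-code-service \
  bundle exec rake db:migrate
```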
So red light is on, so I'm out of time to be on the mic, but I'm happy to talk to anyone who would like to afterwards. Thank you for your attention.