Good morning! Can everybody hear me okay? Must be okay right at the back. I hope you've all had a very good conference so far. I'm really thrilled to be here to talk to you all today. Before I get started, I'd like to thank Joe and Sam for all the hard work they've done in organising a great conference. So. My name is Dave Ward. I'm the Head of Development at Globe Online Limited. We're an e-commerce company, and you may have heard of some of the brands that we own. If you're into home and garden furniture: WorldStores. If you're a parent, you might have bought goods from Kiddicare. If you're into really good deals on luxury furniture, Achica — you might have heard of that as well. We're now part of the Dunelm family, who also sell home and garden furniture but have much more of a bricks-and-mortar presence than us; we've typically been purely online. We're really proud of the fact that our entire delivery pipeline — the tech behind it — is maintained and actively developed completely in-house. All of our products are proprietary solutions. Over the past four or five years, we've really grown up a lot as a company. Back then we had very much a start-up vibe; there weren't huge amounts of good practices, but it's been fantastic to be part of the journey, and we've really evolved over that time. One of the things I've really enjoyed is actually implementing change. We've been really lucky that we've been allowed to do this — there are companies out there that aren't particularly agile, where it takes a really long time to see any positive benefit from change.
Some of the things we've done to improve efficiency range from small things like introducing Composer dependencies — something we did four or five years ago, when we didn't have that — to things the company needed to buy into a bit more, like changing our methodology from what was very waterfall to a really nice agile way of working. Two of the problems that we've typically had, and that took a bit longer to solve, were in the area of development environments. I remember when I first joined as a developer, the development environment took me about a week to set up. You may or may not have had experiences like that, but we were using a custom-compiled version of PHP. It was a complete nightmare. And when we did manage to get it set up and running, it was nothing like what we were running in production. I had no guarantee that the services I'd set up were the same versions. We had a lot of problems developing code, getting it out into the production environments, and then suddenly coming across a completely random bug which you couldn't replicate on your local environment and had no way of anticipating. So that was one of the big problems we had. The other main problem was the management of those production environments. We've got a great team of developers at WorldStores — some of whom are here right now, some have elected not to come and listen to me — but what we've really lacked is that same kind of resource on the sysadmin side of things.
So we've had a few attempts at producing well-managed production environments — some with Chef, some with Puppet — and whilst they all started out pretty well, whichever implementation we went for, because of the lack of resource we'd usually hit a point where we'd throw away all of the rule sets we'd developed, we'd end up building exceptions into processes, and sooner or later our servers were all running their own different versions of Puppet and different scripts. We pretty much had to take it off all of our servers, and we kept going back to square one. So these were a couple of the problems that Docker solved for us. I'm not going to go through my whole mind dump of Docker benefits, but some of the really beneficial parts are the dev environments. As I was saying earlier, the dev environments are now pretty trivial to set up — it's a case of three commands — and we know that when developers are developing their code, they have the exact same versions of those services running on their local machine as the ones we're running in production. With the same setup for all of our different projects, it makes my life a lot easier as well: because the setup is the same, we can move developers to different projects very, very easily now. One of the things we've done is change from a huge monolithic application running our e-commerce platform and start breaking it up into microservices. And with the identical setup, all of these little microservice teams can be ramped up and ramped down really quickly — developers don't take any time at all to onboard onto those projects. Test sites as well: all of the environments we have are now completely identical to what developers are developing on, which is great.
As a result, our releases into production have gone from this to clicking a button, walking away and having a coffee. So that's been really great to see. Rollbacks — we haven't actually had to do any rollbacks yet. We've now been using Docker in production for just over a year, and we were using it in development before that. Of the eight different services and applications we have running in Docker in production, we haven't had to do a single rollback. But if we had to, because we're creating immutable images, those rollbacks would be really, really stable. Scaling is very easy. And right down the list, it just improves efficiency in a number of different areas for us, across the board. So why this talk? Over the past three years I've been attending many conferences where there have been Docker talks. Most of them have been introductions to Docker, and in pretty much every single one the speaker stands up and asks the same three questions. The first: who here has heard of Docker? At that point about 95% of people stick their hands up. Who uses Docker in development? At which point maybe about 50% of the room keep their hands up. And finally: who uses Docker in production? Over the three years I've seen the number of hands increase for people who use Docker in development, but for some reason people are still not using it in production. So this was a leap which we took, and there's not really a lot to it. This talk is going to try to give you a good picture of what it takes to run Docker in production, and to give everyone the confidence to do it as well. An interesting stat I came across — it's a little bit old now, about six months — is that Docker adoption in production is up quite a lot: 30% growth in the year ending May 2016.
What's interesting about these stats is that the big increase is largely driven by the large companies. Datadog found that the more hosts you have with them, the more likely you are to start playing around with Docker and the more likely you are to get it out into production. But it's not the big companies I'm interested in. I'm more interested in getting the smaller teams and smaller companies translating their projects over to Docker and deploying them in production. So, what is Docker? I'm assuming most of you know what Docker is, so I'm not going to go into it. Just a little straw poll: everyone who doesn't know what Docker is, stand up. Great. So everyone should know all about Dockerfiles, about the differences between images and containers, and possibly be familiar with the native orchestration of Docker — Docker Compose, Docker Machine, all of those core things. So I won't go into that. If you want to find out more, there are loads of tutorials and YouTube videos online. I'd recommend this blog from a friend of mine, Mike. He's produced a great post on dockerising your first PHP application, using PHP-FPM and Nginx to get something up and running really quickly, and he's recently updated it to make use of Docker Compose v2. If you were to follow that blog, you'd end up in a position where you have what I call a development Docker image. Usually these Docker images are based on trusted images, and that's important. Trusted images are great to use: you can actually trust them — they haven't got any nasty malicious code in them, and they're constantly fixed with the latest security patches. Often you'll mount your code into that image. You may then use the docker commit command to create a snapshot of that container, at a point in time, as an image.
You might then push that image to an image repository. You won't really have thought about your configuration for it, and you might not even be using Docker Compose — you might just set everything up manually with a series of docker run commands. So what we generally see when we're using Docker for development purposes is: we have our project, which exists in a Git repository, and we pull that project down — I'll show you an example of this later. Within it there will be a compose file, and we run docker-compose up. That may pull more images from a cloud registry, or it might build an image from a set of Dockerfile instructions. You end up with a set of images; we run those, and then we mount our PHP code into the containers so we can actively develop on it and iterate. If we have any dependencies, once the container has started we run our composer install, or our Bower install, or whatever dependency manager you might be using. And environment variables and any secrets you might have — API keys, that kind of stuff — you'll probably just start tinkering around with inside the actual container. So these images are really good for getting up and running. They allow developers to start working from the same page, so they all have the same consistent environment. It doesn't matter what kind of platform they're on — Linux, Mac, Windows, although Windows is a bit harder to get set up on — they can all end up with the same environment. And because we've mounted our code in, we can use our IDEs for development. The problem is that these images are just not suitable for deploying into production. With the code mounted in, you might have ended up committing it and pushing your image to a Docker repository — Docker Hub, or a private repository of your choice.
But when you see that image, there's no traceability of what's gone into it. You don't really have any idea what anyone's done to create it, so there's no transparency there. Often they're environment-specific: to get them running on a production site, you might have hardcoded the config variables into a file and committed that as another image layer, which you've then pushed up and deployed to production. But now that image is only good for your production release — you can't deploy that same image to a staging environment or a testing environment. Essentially, these images are not immutable. So what do we want from our production images? We want them to be immutable, and we want them to be ephemeral. Now, these are two fancy-pants words for things that are actually quite easy to understand. By immutable, we mean they are unchanging over time. For our Docker images, we want to be able to ramp up servers, ramp up nodes, scale the number of services we have at any point in time — and we want the behaviour of our image to be the same every time we do that. The difference between what we had before Docker and what we've got with Docker is this. We were using a deployment tool called Rocketeer before, which is based on Capistrano. Before that we were using Git pulls on our production servers, and before that we were using FTP — that was a long time ago. With all of those methods, what you're doing on each of your environments is getting your deployment tool to do a Git pull, to do a composer install, to go through all the instructions you've given it, and you're hoping that the code that comes out of it is the same as what you've got on your QA site and your staging site. And largely, that works pretty well. But the great thing about Docker images is that the actual image you've got has got the exact...
It's got the identical files that you're deploying on each environment. I was thinking about an analogy for this. My mum makes great cookies — really good chocolate chip cookies. The old way is almost like her giving me the recipe for those cookies and me following that set of instructions step by step, but using very slightly different ingredients: it's not the exact same sugar she's been using to make her cookies. And even if I were as good as her technically — well, I'm a terrible cook actually, especially for desserts — I still wouldn't end up with the exact same cookie as the ones she baked for me. I don't know if that's a rubbish analogy or not. Moving on to ephemeral: for Docker, we want to make sure all of our images are prepared to be short-lived. What this means is that the containers we create from our images have to be expected to go down at any time. So we need to make sure they're stateless — if they contain any kind of state, then when those containers are destroyed, we're going to lose it. So these are the two things we want from our production images, and I'm going to show you how to get there, hopefully. For production-ready artifacts, here are some of the things we need to do. I strongly recommend automated builds, which I'll talk about. We've got to take care of our application code; we have to take care of the dependencies; and we have to make the images environment-capable. What we'll end up with is something like this: we have our Git repository with our project. When we trigger an automated build, that build creates an image which has the PHP code in it, has the dependencies installed into it, and takes care of our environment variables as well. That image gets built on your Docker registry, and you end up with an image there.
And then when we run it in production, we just run an instance of that image, and we pass in our environment and our secrets at runtime. Before we get started on that, I'd just like to propose a repository structure to everyone. There's no real set way of doing this, but this is what I've found to be the clearest and easiest way of structuring your repository. Usually your repository consists of your application code at the root level; now you should be thinking of your repository as one level up. Your entire project environment — all of the services, all of it — is now under version control. Within the app-code directory, I think it's great that it now just contains everything you used to have at your root level. It doesn't contain any Docker files at all. And if someone is averse to using Docker, they could just take the contents of app-code and fire it up in whatever environment they wanted. That's never actually happened. But it also gives developers a really clear place where their development should largely be, and when they're submitting pull requests for code review, it's very easy to see that if they're making changes to other files, they've got to have a pretty good reason for doing it. So it really helps with clarity. The app-data directory holds a Dockerfile which ends up creating a data-only container of the app code; if you're using PHP-FPM and Nginx — you might just be using Apache — that data-only container then shares the application code with your Nginx and PHP-FPM containers. We have our Dockerfile.build, which I'll talk about a little later, but essentially that's the Dockerfile we're going to use to build our production image. The reason it sits outside any of the directories is that it needs context to the app code as well as to PHP-FPM.
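To make the structure being described easier to picture, a layout along these lines would fit the description — directory and file names here are illustrative, not necessarily the speaker's exact ones:

```text
my-project/
├── app-code/                     # the application itself — no Docker files in here
│   ├── index.php
│   └── composer.json
├── app-data/
│   └── Dockerfile                # builds the data-only container sharing app-code
├── nginx/
│   └── Dockerfile
├── php-fpm/
│   └── Dockerfile
├── Dockerfile.build              # production image; at the root so the build
│                                 # context can see app-code and php-fpm
├── docker-compose.yml
├── docker-compose.override.yml
└── docker-compose.prod-sim.yml   # quasi-production simulation
```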
If it was stuck inside one of those directories, when the automated build took place it wouldn't have visibility of anything outside the directory it's in. We also have the standard docker-compose.yml file and the docker-compose.override.yml file. When you type docker-compose up -d, those are the two files which will be used to start up your containers. And then we also have a prod-sim YAML file, which is there to simulate what we do in production. Depending on where you deploy your dockerised application, there might be slightly different ways of orchestrating. You might be using Docker Datacenter with Docker Swarm; you might be using Kubernetes, or Mesos, or AWS. With AWS, which is what we use, they have the concept of task definitions, which provide the orchestration. So if we ever want to check what has happened in production, we use this prod-sim YAML to fire up a kind of quasi-production simulation. The other great thing this repository structure gives you is the ability to get up and running in just three commands. You've probably all seen this before, but this is literally it for developers: they should only have to type three commands into their console. We've got the git clone, we've got the change directory into the project, and then we've got the docker-compose up. It's really quick, really simple, and the same for every project. So, just talking about automated builds for a second. For us, these build our deployment artifacts. We can set them on automatic or manual triggers: whenever we push to our master or develop branch, that automatically builds tagged images — latest and stable for master. Also, whenever we tag a release with a version number, that triggers a new build and tags the image appropriately. And if you have any errors in your builds, they'll tell you about that.
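The three-command workflow just mentioned would look something like this in a terminal — the repository URL and project name are illustrative:

```shell
# The only three commands a developer should need to get a working environment
git clone git@bitbucket.org:acme/my-project.git
cd my-project
docker-compose up -d   # reads docker-compose.yml plus docker-compose.override.yml

# And, assuming a prod-sim compose file exists, a quasi-production
# simulation can be started by pointing compose at that file instead:
docker-compose -f docker-compose.prod-sim.yml up -d
```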
The important thing about these is that they give you the transparency behind a Docker image that I said was lacking from development images. Because they're completely automated, you can understand exactly what's gone into that image. There's no way of injecting sneaky bits of code or anything like that — you've got a complete blueprint for what's gone into it. Some other cool things: repository links. If you have one image which is based on another, then when the base image gets a push, it can trigger an automated build in a different repository. And we use the webhooks aspect of this as well, to send notifications to our Slack channel. If something goes wrong with a build — or if it succeeds — we have a shim running; the automated build sends a quick message to the shim, and that notifies Slack for us. So that's really cool. This is an example of an automated build, and this is how we set up most of our images. When a build is triggered, you can really replicate it in two or three commands. I've stuck in an extra couple of commands there to show what's happening on the develop tag. If I were to push to the develop branch on this project, because that trigger is set up, essentially it'll do a git clone of my repository, change directory into it, and — because it's develop — it's going to build the latest tag. So it'll check that out, build the image, and then push it to my Docker repository. That's a simulation of what's happening. The advantages: I've talked about the transparency it gives you, and the great thing as well is that the image repository is kept completely up to date with any code changes that are pushed. So, application code.
We've seen what happens in development: we check out the repository, we start the containers, and when we start them, we mount the code in. There are a couple of other ways of doing this too. I've seen some people who start the container with their code copied into it, and every time they want to see a change, they have to restart the container, which reloads the code. For us, we like using IDEs, as I'm sure most of you do. By mounting the code in, we can still use our IDEs with Docker in development and see all of the changes in real time. In production, as we've seen, what we're going to do instead is copy the code in when the image is built. That's pretty simple, so I'm just going to show a quick demo of it. If I can. Okay. I've got a very, very simple application — your standard hello world. It doesn't have any dependencies yet. I'm going to clone it from my Git repository, change directory into it, and check out an earlier tagged version. If we have a quick look at what's in here — I need to make that bigger — okay, hopefully you can see this. In app-code, really, really simple: we've got an index.php file. We've got our app-data, which is going to create our data-only container with that index.php file in it. We've got our Nginx container — a basic setup of Nginx — and we've got a really simple PHP-FPM Dockerfile, which is based on the trusted PHP image. So when we do a docker-compose up, we're going to see our development workflow: it takes the instructions and fires up our application. We can see it's running on port 8080, and if we now go to port 8080 here, we've got "Hello PHP UK".
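The development setup being demoed could be sketched as a compose file like the one below — service names, paths and ports are illustrative reconstructions of what's described, not the speaker's actual files:

```yaml
# Development docker-compose.yml sketch: Nginx + PHP-FPM sharing the app code
# through a data-only container, with the code mounted in so IDE edits show
# up live without rebuilding anything.
version: '2'
services:
  nginx:
    build: ./nginx
    ports:
      - "8080:80"          # the demo site appears on localhost:8080
    volumes_from:
      - app-data
  php-fpm:
    build: ./php-fpm
    volumes_from:
      - app-data
  app-data:
    build: ./app-data
    volumes:
      - ./app-code:/var/www/html   # mount, don't copy, during development
```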
And if we go back and make some changes, we can see that they're reflected in real time. So that's great for development. For production: the original PHP Dockerfile is really simple — all it's doing is taking the PHP image and setting London as the local time zone. And for our Dockerfile.build, our production image, all we're doing, very simply, is copying the app code in. I've got this running on an AWS cluster, and I think this is running that very first image. You can see here — if we get the right URL for it — that because this image has the code baked in, it's running up there, and it doesn't have any of those changes I've just made, of course. That code is now set in stone within that image tag; it's never going to change. The great thing about Docker, with the ability to make things highly available very easily, is that it makes it pretty much impossible for developers to go onto production systems and tinker around with the code, as they might do on traditional production sites — which is a really bad practice. This method makes it pretty much impossible for them to do that. Okay, so that's the application code: copy it into the image, get that image built. That's part of the way there. The next thing we need to sort out is dependencies. A lot of the time we use Composer to pull third-party libraries in; we use Bower as well; you might want to compile CSS using Sass, something like that. Typically, when we use this in development, we clone our repository, and after we run the container, we then install the dependencies. Because we're mounting our actual PHP code in afterwards — so that we can use our IDEs — if we installed dependencies as part of the image build, we'd overwrite the vendor directory when the code is mounted, and we'd lose all our dependencies.
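A Dockerfile.build along the lines described — bake the code in rather than mount it — might look like this; the base image tag and paths are illustrative:

```dockerfile
# Production image sketch: instead of mounting app-code at runtime, copy it
# into the image at build time. The code is then fixed inside the image tag,
# which is what makes the artifact immutable.
FROM php:7.0-fpm

# London as the local time zone, as in the demo's PHP Dockerfile
ENV TZ=Europe/London

# Bake the application code into the image
COPY app-code/ /var/www/html/
```

Because Dockerfile.build sits at the repository root, `COPY app-code/ …` works: the build context can see the app code as well as the php-fpm directory.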
So here are some of the ways you might install the dependencies after the container runs. You might use the official Composer image. I wouldn't recommend that, because that image doesn't have the platform requirements that your PHP-FPM container might have, so generally you have to run it with the --ignore-platform-reqs flag on. Another way is to install Composer as part of your PHP-FPM base image and use that to execute the Composer installation. Then, if you do have any platform requirements that are missing, the install will stop and alert you at that point. Both of those are manual ways of doing it, so for us, we put this in an entrypoint script — I'll show you what we do there. In production it's a lot easier: we copy our application code into the image and then run the composer install as part of the image build, so it all goes into the image for production. Okay, so, showing this off. Some of you might have been to Luís Cobucci's talk about JWTs earlier — just one talk before this. This demo uses his library to create a JWT. You can see here that we don't have a vendor directory at present. You can see now that we're installing Git and Composer as part of the image, and instead of just allowing PHP-FPM to do its thing — which is to start up its PHP-FPM process — we're going to get it to execute an entrypoint script. That entrypoint script does something really simple for us: it takes care of the Composer install, and then it starts the PHP-FPM process. The great thing about this, again, is that it's still just those three commands to get the environment started. So if I kill everything and start it all up again, what we'll see...
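An entrypoint script doing what's described — install dependencies after the code has been mounted, then hand over to PHP-FPM — could be as small as this; paths are illustrative:

```shell
#!/bin/sh
# Development entrypoint sketch: the app code has been volume-mounted by the
# time this runs, so composer install lands in the mounted vendor directory.
set -e

cd /var/www/html
composer install --no-interaction

# exec replaces the shell, so PHP-FPM runs as PID 1 and receives signals
exec php-fpm
```

In the Dockerfile this would be wired up with something like `COPY entrypoint.sh /entrypoint.sh` and `ENTRYPOINT ["/entrypoint.sh"]`, keeping the three-command developer workflow intact.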
If I take a look at the logs of the PHP-FPM container, you can see it's cloning the appropriate third-party library that we had in our composer.lock file and installing it, having mounted the application code in. So we can see the vendor directory is there now, and if we have a look at our development site, we've got a JWT there. Again, when you actually create that tag — so here, at this point, 1.1.0 — when it's pushed, that triggers our automated build, and you can see we've got our 1.1.0 tag there on Docker Hub. The 1.0.0 had the "Hello PHP UK", and this build has now got our third-party library. So if we deploy that — go to the service, deploy our new image — we should see, once it's deployed, that we've got a single image which has all the dependencies installed into it, and that's now a great, immutable image. I'll come back to that because it might... So we've got a pending task now. This one will fire up, and when everything's okay it'll stop the previous task, and we'll be able to see the new JWT on our production environment. One of the slight stumbling blocks we've had: we use Satis to keep our own proprietary dependencies private, and obviously, for those, you need to have your SSH keys available to download and install them. There are a bunch of strategies we've tried for this. What we've settled on is having a company-wide base image for WorldStores PHP-FPM. Within that we can put the deployment keys for our private repositories, and we can rotate those deployment keys whenever we feel we need to; then whenever we have an application, it uses the base image. By doing that, developers — with their Bitbucket accounts and their Docker Hub accounts, just those two things — can use our private dependencies and still have everything installed automatically with those three commands. So I've just talked a bit about the base image.
If you haven't tried using a base image, even if it's just a really simple one to begin with, I would really recommend it. Some of the benefits you'll get include making service upgrades trivial across all of your applications. For us, we have eight applications — eight microservices — now running in production, all using our base image. That base image uses PHP 7.0 at the moment, and we've also recently upgraded to PHP 7.1. All you have to do at that point is upgrade your base image to PHP 7.1, create a new tagged version of it, and then, when each application is ready to upgrade, you just bump the SemVer image number it builds from, which is really nice and easy. Other things we have in the base image: I mentioned we have deployment keys, and we also install things that all of our applications will need — Composer, for example, is installed into the base image. The final thing to really get right is your configuration and your secrets. This is something people tend to leave until last, I find. It's really important to be able to create an image that you can use on your staging environment, your production environment, and possibly test environments — to have your configuration completely sorted so that the same image can be used for all of them. Some solutions here. The first is the simplest one, which I'd really advocate against: literally baking the configuration into the image. What you end up with there is an image which is specific to one environment. The way most people do this right now is to use environment variables. This is part of the twelve-factor app, which provides really good best practices for modern applications. By holding your configuration in environment variables, you can change the values depending on which environment you're starting up.
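A company-wide base image of the kind described might be sketched like this — the registry name, key path and version pins are all illustrative, not the actual WorldStores setup:

```dockerfile
# Base image sketch: pin the PHP version, install the tools every application
# needs (Git, Composer), and add the rotatable deployment key for private
# Satis/Bitbucket dependencies.
FROM php:7.0-fpm

RUN apt-get update && apt-get install -y git unzip \
    && rm -rf /var/lib/apt/lists/* \
    && curl -sS https://getcomposer.org/installer \
       | php -- --install-dir=/usr/local/bin --filename=composer

# Deployment key for private repositories — rotate by rebuilding and retagging
COPY keys/deploy_key /root/.ssh/id_rsa
```

Each application would then start its own Dockerfile with something like `FROM registry.example.com/worldstores/php-fpm:1.2.0`, so a PHP upgrade is just a SemVer bump of that tag per application.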
There are other ways of doing this. You can use volume mounts: you create a file with all of your configuration and secrets parameters in it, and you mount it into your containers when you run them. You could use a secret store — a third-party provider that you submit an API request to, to retrieve your secrets. One of the reasons people don't like environment variables, and some of these other solutions, is that they're not secure enough for them. With environment variables, for example, the values are exposed to the entire container. Something might go wrong and you might end up logging your entire environment, which would expose all of your secrets in your logs — which you don't want to do. Those environment variables are also visible to other containers that are linked to the container you're running. The other method I put up there is orchestration-specific solutions: Mesos and Kubernetes have their own. But by hopping onto those — and it's possibly not a bad thing — you're locking yourself into that orchestration tool. For us, we use environment variables. This is the MySQL trusted image; if it's good enough for them, it's good enough for us too. However, that is going to change soon. This is an example from one of our Symfony applications — our parameters.yml file — and when we run this image, we pass in the database host and the database name. I've got a screenshot of that. On the left we've got development: you can see that with MySQL we're actually building that image, setting some very simplistic passwords for local use, and those local values get set as environment variables for the PHP-FPM image. I remembered to blank out the passwords earlier this morning — this is our task definition for production.
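The Symfony pattern being described — parameters resolved from environment variables passed in at run time — would look roughly like this; the variable names are illustrative (Symfony 3.2's `%env()%` syntax is assumed here, not necessarily the exact mechanism used in the talk):

```yaml
# app/config/parameters.yml sketch: values come from the environment, so the
# same image serves development, staging and production unchanged.
parameters:
    database_host: '%env(DATABASE_HOST)%'
    database_name: '%env(DATABASE_NAME)%'
    database_user: '%env(DATABASE_USER)%'
    database_password: '%env(DATABASE_PASSWORD)%'
```

At run time the values are supplied per environment, e.g. `docker run -e DATABASE_HOST=db.internal -e DATABASE_NAME=shop … my-image`, or in the `environment` section of an AWS task definition.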
So you can see here, we've got a bunch of environment variables that we use: our database host, our database name. All we have to do is slightly alter that environment setup, whether it's on staging or on production, and we can use the exact same image on both environments. Something worth talking about, which I haven't had a chance to use yet, is Docker Secrets. This is something that was announced recently, a couple of weeks ago; it's part of Docker 1.13. At the moment it's only available to Swarm services, but the idea is that we're going to be able to use it for all of our secrets management going forwards. So things like usernames, passwords, SSH keys, anything you want, you'll be able to manage with the Docker Secrets service. There's not much more I can tell you, because I've not actually used it yet, but it's going to work a little bit like this. You set your secret by echoing it into the docker secret create command. So here we've created a db_password secret, and then we grant a service access to that secret. When we specify --secret db_password, Docker takes the decrypted secret from the Swarm manager, from the Raft log there, and mounts it as an in-memory file in the running container. That file will live under /run/secrets, and MySQL are actually already changing their image to support this. What that will mean is that instead of specifying your secret as part of an environment variable, you can just specify the path to the secret file, and the application will take that path, read the secret from the in-memory file, and away you go. There's all kinds of information about this available at docs.docker.com, that link at the bottom there.
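The file-reading side of that pattern can be sketched in a few lines of entrypoint shell: prefer a `*_FILE` variable (a path to a mounted secret) over a plain environment variable, which is the convention the official MySQL image is adopting. The variable names are illustrative, and the temp file below just stands in for a real /run/secrets mount.

```shell
# Sketch of an entrypoint helper: if DB_PASSWORD_FILE is set, read the
# secret from that file; otherwise fall back to DB_PASSWORD as-is.
file_env() {
  var="$1"
  file_var="${var}_FILE"
  file_path=$(eval "printf '%s' \"\${$file_var:-}\"")
  if [ -n "$file_path" ]; then
    # Read the secret from the mounted file, e.g. /run/secrets/db_password
    export "$var"="$(cat "$file_path")"
  fi
}

# Demo: this temp file stands in for the in-memory /run/secrets file.
echo "s3cret" > /tmp/db_password_demo
DB_PASSWORD_FILE=/tmp/db_password_demo
file_env DB_PASSWORD
echo "$DB_PASSWORD"    # prints "s3cret"
```

An image prepared this way works unchanged whether the secret arrives as an environment variable or as a Swarm-mounted file.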
If you want to start preparing your images for that, all you need to do is make sure that any of the parameters or secrets you want to handle this way can be read from a file instead of straight from the environment. So I'm going to quickly... well, I know it's nearly lunchtime. I'm going to quickly talk about a few more things that should be considered when deploying to production. Logging is really important. As we all know, it gives us an insight into what our application is doing, and logging in Docker is not an entirely simple thing; there doesn't seem to be one strategy for absolutely every situation at the moment. Because containers are ephemeral by their nature, they will be shut down and fired up at any given time. You shouldn't really be thinking about persisting a container for eternity. So if you're logging straight into the container, you're actually creating state in that container, and that state you're going to lose. You want to be thinking about centralising your logging, and I'll quickly outline a few ways of doing that. The easiest way, if you want a little bit of persistence, is to store your logs in a data volume. That means the logs are kept on the host, and you can back them up really easily from there. We don't use that ourselves; it's not great for an elastic architecture where the hosts are scaling up and down the whole time. But on non-production systems, where you need logs that last a bit longer, it's a pretty easy solution to get started with. The Docker logging driver is something that's native to Docker itself. If you've ever run docker logs, you've used the default driver, which is the json-file driver. That reads the standard output and standard error generated by your containers, so it's really easy to configure. There are lots of different log drivers you can use. There's one for AWS, which is great.
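As an example of switching drivers, the Docker daemon can be pointed at CloudWatch with a couple of lines in daemon.json. The region and log group below are placeholders; this shows the shape of the setting rather than any real production config.

```json
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "eu-west-1",
    "awslogs-group": "my-app-logs"
  }
}
```

Individual containers can still override this at run time with `--log-driver` and `--log-opt` on `docker run`.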
There are ones for Splunk and for many, many other logging applications. The great thing about this is that it's quick and easy as well. If you haven't set up your application with custom logs, then this solution would completely suffice. Then there's application logging. You're probably all familiar with Monolog, and probably use it pretty frequently too. If you're using that to send your logs from your application to a centralised place, it might be Loggly, it might be something else, then you can actually just keep using it. That means your log files are kept off your containers, in a separate place. We started doing this ourselves, but we actually found there was quite a large performance overhead when you weren't just using the logging framework locally. When we were sending logs to our logging service provider, it ended up taking the bulk of our request time, so we've stopped using that solution ourselves. But it does give you, the developer, a really high degree of control over the logging implementation. A couple of other solutions. You can have a dedicated logging container running, so you forward the logs from your application containers into that logging container, whose only responsibility is then to centralise logs. That logging container can take the performance overhead away from your other applications and do something with those logs, whether that's sticking them into CloudWatch or into Loggly; that container takes the hit. The great thing about this is that logging now becomes part of your actual architecture. The downside is that because you're using one dedicated logging container for all of your applications, if you want more fine-grained control over it, you have to set up your logging container to be aware of many different types of customised logs, which isn't great.
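The dedicated-logging-container idea can be sketched like this: one collector receives everything over syslog, and application containers point their log driver at it. The collector image is hypothetical, and note the syslog address goes via the host's published port, since log drivers run in the Docker daemon rather than inside the container network.

```yaml
# Hypothetical sketch of a shared logging container (not a real setup).
version: "2"
services:
  log-collector:
    image: some-log-forwarder    # hypothetical image that ships logs on to CloudWatch/Loggly
    ports:
      - "514:514"
  app:
    image: app-image
    logging:
      driver: syslog
      options:
        syslog-address: "tcp://127.0.0.1:514"   # the collector's published port on the host
```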
So, the final solution you might come across is logging via a sidecar. This is pretty similar to what we've just talked about, but each application container is paired with its own dedicated logging container. That gives you the flexibility back, but it is more difficult to set up, and you really need to get your head around that one. Okay, other processes. Quite often you'll want to run more than one process in a container, and this is a problem we've come across, whereby an application might need to run a cron or might need to run some workers. In order to do that, we use a process manager: we use supervisord. Getting set up with this is actually really easy, and you can keep your repository really clean as well. Just to give you an example of this, actually let me... So here, this is a Dockerfile, and all we've done is install Supervisor and cron as part of our services. We've got our base Supervisor configuration, and we copy that over. That Supervisor configuration will look for any *.conf files within a certain directory, so under the conf.d directory, if we want to add workers or add our cron, that's where we put the configuration for them. Supervisor is then in charge of starting up the PHP-FPM process and keeping it running; if a cron is enabled, it's in charge of keeping that running too, and likewise any workers. It also gives you the flexibility to start as many processes of each of those as you want. Container monitoring. This is the last topic I'll talk about. When you've got stuff running in production, to complete your picture of what's going on, now that you've got your logging sorted, you want to actually have eyes on all of your different containers. You want to know what the CPU usage is. You want to know if any of them are maxing out on memory.
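For illustration, a conf.d entry for supervisord might look like the following. The program names and commands are made up for the example, but this is the pattern: one small *.conf file per process, and numprocs to scale the workers.

```ini
; Illustrative conf.d/workers.conf - program names and commands are examples only.
[program:php-fpm]
command=php-fpm --nodaemonize
autostart=true
autorestart=true

[program:queue-worker]
command=php /var/www/bin/worker.php
numprocs=4                                  ; run four copies of this worker
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
```

Supervisor then owns the lifecycle of every process listed, restarting any that die, which is exactly the behaviour described above.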
You want to know network stats; there's a bunch of stuff you want to know. So it's the same kind of metrics that you'd normally be interested in, but the solutions for this are not necessarily the standard solutions you would use for non-Dockerised applications. Because containers can be fired up and shut down at any time, and you can have lots of containers running on one host, your typical ways of monitoring aren't available to you. Fortunately, there are a bunch of services out there which are easy to set up and give you these kinds of metrics. We use New Relic for our container monitoring. Once you've actually got eyes on what your containers are doing, you can then start to tune, for example, the amount of memory that a specific container is able to use, and how many services you want to scale up at any one time. That's why monitoring is really important there. Some common mistakes that people make. I've talked about creating images from running containers, doing your docker commit and pushing that up to the repository: just don't do that. Deploying with the latest tag to production isn't a great idea, because that latest tag could change at any point in time. You might want to do that for your staging environment, but latest is also the default tag that gets created, so you just need to be aware of that. We've talked about secrets. I've got "use concrete images" on there twice. And doing too much in your run.sh as well, like the composer install. Whilst that's okay in development, you want the startup of your image, when you run it as a container, to be as fast as possible. If you're doing things like composer installs in your entrypoint scripts, then that's going to take maybe five minutes to pull all the dependencies. Those dependencies might not even be available at the time you're running the container, and it'll just mean your deployments take forever to get out.
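Once monitoring shows you where the ceilings are, the limits themselves are easy to express. A sketch with made-up numbers, in Compose v2 syntax:

```yaml
# Illustrative resource limits for a service (numbers are examples only).
version: "2"
services:
  app:
    image: app-image
    mem_limit: 512m    # hard memory cap; the container is OOM-killed beyond this
    cpu_shares: 512    # relative CPU weight against other containers on the host
```

The same knobs exist on `docker run` as `--memory` and `--cpu-shares` if you're not using Compose.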
Relying on IP addresses in your configuration is never a good idea with HA either. So, just to finish off: the deployment process now becomes very, very simple. You've got your immutable image in your Docker registry. It's been tested on staging, and everything's great there. Whatever orchestration tool you're using for production, you go and update that orchestration to use the new image that you've now tagged. It just becomes a question of switching the image from, let's say, 1.0.0 to 1.1.0, deploying that, and that's it. If you want to roll back, you know you can do so with confidence to your previous image, which is going to give you the exact same behaviour as it did before. And you can then start to ensure that your deployments are all zero-downtime, with load balancers sitting in front of them, draining connections from your old container, switching them over to the new container, and then getting rid of the old one. So hopefully you should all now be in a position to create immutable, portable Docker images yourselves. I'd just like to end by saying that it doesn't have to be the sysadmins or the DevOps people who do this. It's actually quite a simple process, and for us it was driven completely by the development teams; it's something you should all do to start reaping some of the benefits. Thank you. I think it is lunchtime, so everybody's probably starving. If you've got any questions, I'll hang around up here for about five minutes, and then I'll go and get some lunch myself. Thanks.
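On Docker's own Swarm tooling, for instance, that image switch and the rollback are each a single command. The service and registry names here are hypothetical, and this assumes you're on a Swarm manager:

```shell
# Hypothetical: roll a Swarm service forward to the new immutable tag...
docker service update --image registry.example.com/app:1.1.0 app

# ...and, if something goes wrong, roll back to the previously deployed image.
docker service update --rollback app
```

Other orchestrators (ECS task definitions, Kubernetes deployments) express the same idea differently, but the operation is always just "point at a different tag".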