Thank you all for coming. My name is Denny; I've been doing Python development for a couple of years now, mostly web development plus some tooling and CLIs. I also do a lot of sysadmin and DevOps work, and I like to automate as much as I can. I work for a small Python shop. For development environments you probably already use a full-blown VM with something like VirtualBox, possibly together with Vagrant and tools like that.

The most important thing to me is that containers are fast: starting a container is measured in milliseconds, and most of the time you can start hundreds and hundreds of containers. That's because the overhead is minimal: containers share the host kernel, so accessing resources like disk, network and so on is much cheaper. Also, images aren't copied for every container we run from them; containers only save their diffs to disk, thanks to a technology called UnionFS.

All of this should make it easy to run your whole production stack locally, and you should do that, because it's very important for every workstation on the team to have the same versions of libraries, databases and everything else, all the way down to C libraries.

So how do we run a container? We use the docker run command: we say which base or parent image to use and we supply a command to run. The -i and -t flags mean we want to run it interactively, with a terminal attached, so the process doesn't go into the background, whereas if we supply the -d flag the container runs in the background. Notice that in that case we didn't have to supply a command, because a command was baked into the image; it's only overridden if the user supplies another one. You can also use tags to have different versions of your images, for example to denote versions of your database and so on.

You would use the docker ps command to list all the running containers on your machine. Among other things it shows you the container ID, which you can then supply to the logs command to get the logs that your service is writing to standard output. You can also use the stop, kill and start commands to stop, kill or restart a container. By default the stop command sends a TERM signal to the process inside the container, but it has a timeout after which it sends KILL.

So that's a little bit about containers; now let's talk about images. You can build images manually. When I talk about images I don't mean base images like the Debian image we saw before; I mean your own images that are based on a parent image, like an image for Postgres or a cache or something like that. You would run a bash prompt interactively from a Debian image, for instance, type in your commands to install and configure your services and whatnot, then grab the container ID with docker ps and commit that container to a new image. The commit command creates a new image, and you can then use the docker tag command to tag that image with a version, or rename it and give it another name. The username prefix is important for pushing to the central hub. As I said, the hub is a hosted central repository of private and public images. Images that aren't prefixed with a username are maintained by the core team; images done by the community are prefixed with a username. Before you can push an image to the central repository to share it, you need to log in to the hub.
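To make that workflow concrete, here is a minimal sketch of the commands just described; the image and repository names are illustrative placeholders, not ones from the talk.

    docker run -i -t debian /bin/bash         # run a shell interactively from the debian image
    docker run -d postgres                    # run detached; the baked-in command starts the service
    docker ps                                 # list running containers and their IDs
    docker logs <container-id>                # what the service writes to standard output
    docker stop <container-id>                # sends TERM, then KILL after the timeout
    docker commit <container-id> myuser/mydb  # commit a container's changes to a new image
    docker tag myuser/mydb myuser/mydb:9.3    # tag it with a version
    docker login                              # log in to the hub
    docker push myuser/mydb                   # push it so others can pull it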
So that was a way to manually build your images. A better way is to use Dockerfiles. A Dockerfile is a small DSL that you use to describe what files to add to the image, what commands to run and what ports to expose. You also supply the baked-in command to run if no other command is given on the command line. Then you use the docker build command, point it at the directory where the Dockerfile is located, give your image a name, and it builds the image for you.

Most of you are probably wondering what to do on a Mac, because Docker requires a Linux kernel. Docker consists of the Docker daemon, which is basically just an HTTP API, and the Docker CLI; the command line interface I just showed simply talks to that HTTP API. With that in mind, there's a helper application called boot2docker, the official way to run Docker on a Mac, which uses VirtualBox and sets everything up so you can use the CLI to communicate with the Docker daemon running in the VirtualBox instance.

Okay, so those were the basics of how you use Docker to manipulate images and containers, but how does that help you in your development environment? It's really important to run the same services, and the same versions of those services, locally as you run in production, so that if something works on your workstation you know it should work in production as well. You eliminate a lot of bugs, not just between workstation and production but between two workstations.

What we have here is an example of running a Postgres database. We take the default port that the container exposes and map it to the same port on the host, so we can connect to it from the host. If we didn't supply a host port, Docker would pick one for us and configure all the port forwarding and whatnot.

Containers themselves are ephemeral, which means that if you start a container, write some files, stop it and run another container from the same image, the changes you made in the previous container are gone. That's because the image itself is immutable and the new container starts from scratch. This is troublesome if we want to run a database, because we want to keep our tables and our data. The way to deal with this is to use volumes: we tell Docker to mount a host directory inside the container at the place where the database writes its data, so every time we run another container we still have our database data. There's another, slightly more portable way, which is to have one container just for your data that exposes a volume (it doesn't even have to be running for this to work), and then we use the --volumes-from flag to mount that data folder inside the container that actually runs the Postgres process. I like the host-directory version more, for my development environment at least, because I run a clean-up a lot and delete most of my stopped containers, and this way I don't have to worry about deleting my test database data.

I've shown examples of how to run services in a container, but what's the benefit of running your web app itself in a container? Basically, you simplify the runtime a lot: you make sure that everyone on the team uses the same versions of all the dependencies.
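A rough sketch of what those Postgres invocations could look like; the host directory and ports here are assumptions for illustration, not values from the talk.

    # Map the container's default Postgres port to the same port on the host:
    docker run -d -p 5432:5432 postgres

    # Keep the data on the host by mounting a host directory as a volume:
    docker run -d -p 5432:5432 -v /srv/pgdata:/var/lib/postgresql/data postgres

    # The more portable variant: a data-only container that exposes a volume,
    # mounted into the actual database container with --volumes-from.
    docker run --name pgdata -v /var/lib/postgresql/data busybox true
    docker run -d -p 5432:5432 --volumes-from pgdata postgres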
This is what a virtualenv should be for, in the Python world at least, but that isn't true for C libraries. As everybody who has ever tried to compile and install the Python Imaging Library knows, you mostly see warnings at the end telling you that some image formats aren't supported. Running the app in a container basically eliminates that problem, so you don't have things working on one workstation and not on another because of missing C libraries.

An added benefit is that you can use links. Links are a feature Docker provides for letting containers talk to each other: you name and start your Postgres container, then use the --link flag to link that Postgres container, under a given name, into your web app container. That allows the Django app container to communicate with the Postgres container via that hostname. It also exports a lot of environment variables with ports, IP addresses and whatnot, which you can use in your Django settings file.

So how do you automate all of this? Presumably not everyone on the team needs to know how to run Docker. I use makefiles a lot, so I've built targets for running each service: just a one-liner command to run a Postgres database or a queue or whatnot. I also have commands that bring up the whole development environment for a particular project.

But sometimes bash scripts and makefiles aren't enough. Remember I said that the Docker daemon is just an HTTP API; there's a project called docker-py, the official Python wrapper for that API. It's available on PyPI and it lets you do, I think, everything you can do from the command line interface. There are wrappers in other languages, but to my knowledge this is the most complete one; I could be wrong, but I think so. It means you could write the whole script for bringing up your development environment in pure Python.

But what if you use Ansible or Chef or Puppet or any other provisioning tool? You can basically use them and Docker together. An Ansible example would be to use a Dockerfile to bootstrap just the environment that Ansible needs to run, and then run your playbook to provision a database inside that image. Another option is to have Ansible or Puppet or Chef connect to the local IP address of the container, but that's worse, because for it to work you need sshd or another agent running inside the container for the provisioning tool to talk to, and for that you also need some kind of supervisor, such as upstart, systemd or supervisord, to run multiple processes inside the same container. That isn't considered best practice: Docker advises you to run one process per container, which makes it a lot easier to update just one component of your system, or not update it but swap it for another component. For instance, if you had sshd in there and there was an SSL vulnerability, you would basically need to update the whole container, which you could avoid if you didn't have that process running in there.
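A rough sketch of the Dockerfile-plus-Ansible approach just mentioned; the base image, packages and playbook name are assumptions made here for illustration only.

    # Dockerfile that bootstraps Ansible and then runs a playbook against the
    # image itself over a local connection during the build.
    cat > Dockerfile <<'EOF'
    FROM debian:wheezy
    RUN apt-get update && apt-get install -y python-pip python-dev build-essential \
        && pip install ansible
    ADD playbook.yml /tmp/playbook.yml
    RUN ansible-playbook -i "localhost," -c local /tmp/playbook.yml
    EOF
    docker build -t myuser/db-provisioned .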
There's also another project called Fig. The best way to describe Fig is as Vagrant for Docker: what Vagrant does for VirtualBox, in my mind, Fig does for Docker. You use a YAML syntax to describe how to build your images, what ports to expose and which containers to link your web app container to. It also supports automatically downloading images from the central hub, and with one command you can have a whole swarm of containers running, without the rest of the team needing to know the internals of how it works. Also, I read about an hour ago that Fig is becoming part of Docker, so there's probably going to be some interesting development there; it was announced on their blog.
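A minimal sketch of what such a Fig configuration might look like, together with the single command that brings everything up; the service names, image, command and ports are assumptions for illustration.

    cat > fig.yml <<'EOF'
    db:
      image: postgres
      ports:
        - "5432:5432"
    web:
      build: .
      command: python manage.py runserver 0.0.0.0:8000
      ports:
        - "8000:8000"
      links:
        - db
    EOF
    fig up -d    # builds or pulls the images, starts both containers, links web to db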
So, in summary, I hope I've convinced you that you should run your production stack in your development environment, because Docker makes it easy. Not everyone on the team needs to be able to install all these services on their workstations. You use the same environment everywhere, not just workstation versus production but across all of the workstations, and it should make your upgrade process easier, because you can easily rip out one part and replace it with an updated version or a different component, and this all happens transparently to your team. They don't need to know that a new image has been pulled from the central repository; it just works. I didn't talk about deployment in this talk because I don't have enough time, but using Docker this way really simplifies deployment, and it lets you bring new team members onto a project more easily: imagine that you or your DevOps person have set up production using all of these images; all you need to do to bring a new person up to speed and set up their development environment is supply them with the images you're already running in production, and they don't need to do any other setup. That's it. Thank you.

We do have a few minutes for questions, and there's a microphone at the back if you want to walk up to it.

Hi, thank you. As you're doing Django development, I was wondering: do you include the Django application code inside your Docker images, or do you keep it separate?

It depends on the project. Some projects aren't run inside a container because they're simple by themselves; we just run the supporting services, like a database or a queue, in containers. But other projects have a Dockerfile committed in the repo, and on deploy you just rebuild from that Dockerfile to get the latest version of your source repository inside an image.

Okay, so you include the code inside the Docker image?

Yeah, you use the ADD instruction in the Dockerfile to include the directory with the code inside the Docker image.

Great, thank you.

I've got one question which is probably very stupid. How do you edit code in a repository inside a Docker container?

You would use volumes for that: you mount your working directory inside the running container at a known path, so when you edit the code on your local machine it's automatically visible to the runtime inside the Docker container.

That makes perfect sense. Another question: usually you can't use the central repository for anything other than public stuff. What do you do with your own builds or your own applications? Can you somehow put those into a custom or private central repository of your own?

The hub does allow private repositories, but it's not a free feature. There's also an open source component called the registry, which is Python powered. You can install it on your own server and use it for pushing your private images. Also, if it's a small image, maybe you don't need it on a central hub at all; you can just rebuild it and deploy. It depends on your use case.

Yes, the problem is that with European regulations, for example, you wouldn't be allowed to trust the Docker central repository with quite a few of the things you would want to put there. Thank you.

Thank you.

Hi, quick question. Do you have any tips about automated deployment from continuous integration? Say you always have your master branch deployable and you deploy to production automatically from that master: do you have any tips, like build images, or don't build images and just do it on the fly on the servers? What's the best practice for that? I'm struggling with that right now, how to do it properly.

I'm not sure there is a best practice yet.

So how do you do it?

There are a lot of solutions that don't necessarily work the same way, in the sense that if you get tired of one you can use another; that's something the Docker team is trying to fix, I think, with the introduction of a new library. But the simplest way, I think, is basically to have a deploy trigger on GitHub or wherever, so that if the build passes it triggers the server to rebuild the image with a new tag or the latest version. Then you switch your load balancing or proxy software, like nginx or whatever, to point to the new container. That way, if it doesn't work, you can just switch back to the old one.

Okay, so you spawn a new Docker container, and if it's fine you keep it and let the old one go down. Cool, thanks.

Thank you.

I think that's unfortunately all the question time we have, but you can find Denny outside if you want to ask him more. If we can just thank Denny again.
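To round off that last answer, a rough sketch of the rebuild-and-switch flow described above; the image name, tags, ports and proxy setup are all assumptions, not the speaker's actual setup.

    # Triggered after a green build on master:
    docker build -t myuser/webapp:latest .        # rebuild the image from the repo
    docker run -d -p 8001:8000 --name webapp-new myuser/webapp:latest
    # Point the proxy (nginx or similar) at port 8001 and reload it.
    # If the new version misbehaves, point the proxy back at the old
    # container's port and remove the new one:
    docker stop webapp-new && docker rm webapp-new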