All right, everybody, thank you for coming. Welcome to the first session of Kiwi PyCon 2017. This is my first time in New Zealand and my first time at Kiwi PyCon, so I want to thank the New Zealand Python User Group for having me out, and I want to thank all of you for sitting here and at least pretending to pay attention to me. It's awesome. This is me, Justin Crown. As you can see, I do boast the largest terminal in Los Angeles; when I was given that monitor, everyone was shocked that that's what I did with it. You can find me on GitHub, where my handle is mrname, and on Stack Overflow under the same name. You could follow me on Twitter, but I don't have an account, so you can't do that. As a quick side note, we're all lucky we're still sitting here, because after the keynote I was ready to smash my laptop, move out to the middle of the woods, and live by candlelight.

So today we're going to talk about Docker, more specifically Docker Compose. I have not always been a huge fan of Docker, to be totally honest with you. I've found it a little unnecessary sometimes, and everybody thinks it's a silver bullet that's going to solve everything; I don't really think that's the case. But, sorry, before I move on: this is the link to the code repository. I was hoping everybody would be able to play along, but pulling Docker images can take a lot of bandwidth. So give it a shot if you want; it may or may not work. If you don't play along, it should still be fun, I hope.

I work for a company called SpinFX Group, and we write code to power some crazy contraptions, like these virtual reality racing pods. As you can imagine, projects like this have a lot of different teams working on them, from a lot of different disciplines, on a lot of different operating systems. As a result, we need a specific workflow before we even start writing code. So: you came here to learn about Python, and I'm not going to talk about it at all. Instead, I'm going to talk about the thing we don't want to think about, which is our development workflow.

For these kinds of projects, our workflow requirements are simple. It needs to be portable: our application needs to deploy on any operating system. It needs to be easy to use: we cannot rely on people being skilled with Python, and we can't ask them to fuss about with configuring their system Python and its versions. And we can't sacrifice that ease of use either; we still need to iterate rapidly as we change our code. Lastly, it should be easy to set up. To that end, in situations like this, a tool like Docker, and more specifically Docker Compose, fits the requirements, so that's what we're going to use.

The objective today is that by the time we leave here, we'll have deployed a full-stack Django REST Framework application. We're going to have nginx in front as a reverse proxy, Postgres as the database server, and a package called Django Q, a background task scheduler that requires Redis as a backend. So within half an hour, we're going to deploy a full stack with five different services. So hold up: it's the obligatory "what is Docker?" time, right? We all love this. And the answer is: containers, or something. I don't know.
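For orientation, here's a rough skeleton of the five services we'll end up with. This is a sketch only; the service names match the ones used later in the talk, and every detail gets filled in as we go.

```yaml
# docker-compose.yml skeleton (sketch; details come later in the talk)
version: '2'
services:
  nginx:                # reverse proxy in front of everything
    image: nginx
  api:                  # the Django REST Framework application
    build: .
  django-q:             # background task worker, same codebase as api
    build: .
  db:                   # Postgres database server
    image: postgres
  redis:                # broker backing Django Q
    image: redis
```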
So before I move on, we have a lot to cover today. Just a quick show of hands, and trust me, I hate audience participation; when I'm sitting in the audience, I promise I won't ask you to sing along or anything. But I'd just like to know: who uses Docker right now? Show of hands. Cool. Who uses Docker Compose? All right, cool. So I probably won't be able to show you anything new anyway. But let's do it.

This is the one key part of Docker we need to understand, for anybody who isn't used to it. I'm not going to talk much about how Docker works; there are better talks on that from smarter people. But we need to understand the lifecycle real quick. As you can see up here, we start with a build, which runs the commands in a Dockerfile and creates an image. Then we run the image, which creates a container: an instance of the image. If this is new to you and you need help wrapping your head around the paradigm, think of it like an action figure. You have a mold that you create action figures from, and theoretically every action figure should be the same. Once you go buy the action figure, or steal it from your brother if you're that kind of person, it's yours; you own that instance of it. You can paint it, disassemble it, reassemble it, put it away, take it back out. No matter what you do, it will never affect the next action figure that comes out of the mold. Cool?

So with that in mind, let's look at our Dockerfile real quick. We're starting with a Python 3.6 base image. On top of that, we're doing some voodoo so that output isn't buffered, so we can see output as it comes out of the containers. We're copying everything from our current working directory into /src in the container. This is overkill; we don't need everything in the current working directory, but I'm trying to keep things simple, so I'm not saying this is the way to do it, just that it's what I'm doing. WORKDIR says that /src is now the root of every command we run. So we go ahead and install our requirements, since requirements.txt is now in /src, and then we expose port 8000 inside the container.
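Here's the Dockerfile he just walked through, reconstructed as a sketch from the description; the exact paths and filenames are assumptions, not the repository's file.

```dockerfile
# Start from the slim Python 3.6 base image.
FROM python:3.6-slim

# The "voodoo": don't buffer stdout/stderr, so output from the
# container shows up in the logs immediately.
ENV PYTHONUNBUFFERED 1

# Copy the whole current working directory into /src.
# Overkill (we don't need everything), but it keeps things simple.
COPY . /src

# /src is now the root of every command that follows.
WORKDIR /src

# requirements.txt landed in /src with the COPY above.
RUN pip install -r requirements.txt

# Expose port 8000 inside the container (not on the host).
EXPOSE 8000
```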
But you need more than just an image, right? We could run that right now, and if it were set up to work with SQLite or something, that might be cool. But we need other things; we need Redis. And preferably we'd like to use the same database we're going to use in production. You might be using Postgres-specific data types, if you like to live on the edge. So let's look at the Compose file real quick.

This is our docker-compose.yml. I'm sure you all know YAML; my boss and I joke that we're professional YAML developers, because you spend so much time writing it these days. At the top here we've got our version. We're on version 2 here; 3 is the current version. One to two had a lot of breaking changes; two to three, not so much, so everything we do today should apply to version 3 as well. Three just has some improvements. Then we've got our services key, and as you can see, we define the different services underneath that key. As an example, here's our nginx service, and there's our Postgres service. Simple.

So now let's look at the types of services, because there are two different types here. The first is an image-based service, and we can tell because we're giving it the name of an image. What we're telling it is that our database service, aka db, should pull the postgres image from Docker Hub. Please note that I did not pin a version tag there, but if you look in the code repository, I added the tag. Please don't ever leave it untagged when you're developing; pin the version, or you could break yourself later. And then we've got a build-based service down here. This means we want to build this service on the fly when Docker Compose spins it up, and you can tell that's the case because we're giving it a build directive. That little dot means our current working directory, which is the directory the Dockerfile is in. And then we're giving it the command to run, because don't forget, our image did not specify any command. Our image is just a placeholder: it's got our code and our requirements, but it's not actually going to do anything until we tell it what to do.

So now let's look at a single service and see what these keys mean. By the way, ignore the comments in here; I took the screenshots as I was working on the Compose file, so these lines are all uncommented in the source code. First, we've got an env_file argument, which loads environment variables from one or more files. Next is our restart policy. This just says: restart it, always. I don't care why it exited, good exit code or bad; I just want my container running. Restart it. Then there's an inline environment variable; we'll talk about how that works in relation to the env files later. We already talked about build and command. This one right here, volumes, is super important, because what we're doing here is mounting our current working directory from the host into /src in the container. Remember, in our Dockerfile we already copied everything in there, but what this line does, if it were uncommented, is override what's already in there and make it match the current working directory of our host. What's cool about this is that if you're using some kind of server, bless you, that auto-restarts, all we have to do is make our code changes and everything's cool. We don't need to do anything else, and we immediately see what just happened.

depends_on gets a little weird. It declares service dependencies. It basically says the API shouldn't do anything unless the db already exists. That means Compose will wait for the db container to start before starting the API, and it also affects the way management commands work: when we want to take certain actions on the API, Docker Compose is smart enough to know it needs a database first, and it will do clever things with that as well, which we'll get into. Yes, sir? No, good question: each one of these containers is a separate entity, and what we're doing here is creating a dependency link between the API container and the database container, right? And lastly, we're going to bind some ports. We exposed port 8000 in the Dockerfile, but that only exposes it inside the container. In this case we'd like to be able to go to localhost:8000 and see it on our laptop, so this is taking care of that port binding.
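Pulled together in one place, the whole api service looks roughly like this. It's a sketch with illustrative values, not the exact file from the repository.

```yaml
api:
  build: .                       # build from the Dockerfile in the cwd
  command: python manage.py runserver 0.0.0.0:8000
  env_file:
    - .env                       # load environment variables from a file
  environment:
    - DEBUG=true                 # inline variables win over env_file
  restart: always                # restart regardless of exit code
  volumes:
    - .:/src                     # mount host code over the baked-in copy
  depends_on:
    - db                         # make sure the db container exists first
  ports:
    - "8000:8000"                # bind container port 8000 to the host
```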
So let's start looking at some commands. If you're familiar with the Docker CLI, a lot of the docker-compose commands mirror it, and this is one of those situations. docker-compose ps just lists the running containers, and as you can see from the output, in this circumstance we have no containers. Probably not what we want.

docker-compose up does almost all of the magic of Docker Compose. The command I ran is docker-compose up -d. That -d means daemonize. If I didn't put that, it would start all of the containers attached to my terminal, and if I ever wanted to get out of it and Ctrl-C'd, it would kill everything. So I'm telling it to run in the background. After that, you can see it created some networks; this is what allows our containers to talk to each other. Then you can see it went ahead and created all of the containers in our docker-compose.yml. And if we do a ps again, we have things, and that's cool; it's exactly what we want, which is what we get paid to do, right? You can see it shows the name of each container, the command running in it, the state, and the ports. If you look at the ports, you can see that some have bindings and some don't. For example, the API container at the top has no bindings to the host; in other words, port 8000 is not accessible anywhere outside the container network. Whereas on the second container, the Postgres container, port 5432 has been forwarded to host port 5432. That means if we wanted to use something like a GUI application to connect to the database, if any of you use those things, we can do that directly without any kind of magic.

exec is how we run commands on a running container. The command I ran here is docker-compose exec api, plus the command. api, don't forget, is the name of the service we defined in docker-compose.yml, so we're telling it which service to run against. Then we run python manage.py createsuperuser; this is a Django application, and if you're not familiar with Django, that's how you make your admin user. So cool, we just ran that on the running container and it worked. And if you're used to docker exec, which this mirrors, this handles the TTY and the other things that get a little funky when you use docker exec manually; you can see I didn't have to throw any special flags to get a TTY I could interact with.

docker-compose run is similar, but it does not do anything to a running container. It actually creates a new container from the same image and runs the command in there. In this case I did python manage.py shell to open the Django shell, and we can see with a ps that it created a new container for us. That's pretty cool. The shell is not the best example of why you'd use run; what I like to use it for is running unit tests. Usually your unit tests need dependencies that you don't want in your production system, so I can just do a run, install the test dependencies, run my tests, and get on with my life. And that's cool.

So, slides are cool, but the terminal's better. I'm going to do the thing you should never, ever do, and climb sideways up the building like Batman and Robin, because I want to prove to you this actually works. And if this goes wrong, I'm walking out of here. Can you all see this okay? Cool, cool. All right, I've got nothing running; no containers up my sleeves. Let's do an up -d. Cool. It's doing just what it did in the slides I showed you.
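Roughly what that looks like on screen; the output here is abridged and approximate, and the project prefix depends on the directory name.

```console
$ docker-compose up -d
Creating network "myproject_default" with the default driver
Creating myproject_db_1
Creating myproject_redis_1
Creating myproject_api_1
Creating myproject_djangoq_1
Creating myproject_nginx_1

$ docker-compose ps           # confirm everything is running
$ docker-compose logs api     # check the API actually started
```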
So it went ahead and created the networks and created the containers. Let's see if they're running. Cool, yep, they're all running. And let's look at the logs of the api service real quick, to make sure it did what we expect and it's actually up and running. Good deal: it started the development server.

So let's look at our app real quick. Again, we're not going to talk about Django, but this is a Django view, and it's very simple. We are making a widget API. It's going to make billions of dollars. All it does is this: when we create a widget, it uses the async call from Django Q to create a background task, which is just a dummy task that doesn't do anything; then we print out the ID of the task, and then we create the widget. Cool.

So, did this actually work? Don't forget, I have bound port 80 of my nginx container to port 80 of my host machine, because we want to use the nginx reverse proxy just like we're going to use it out in the wild, right? That's why I'm not going to port 8000 here. I'm in debug mode, so I can see that it's up and running. Let me try to log in. Okay, so He-Man's not there, right? So let's go ahead and give He-Man access. I'm not sure if I can trust him or not, but we're going to have a go at it. You'd have to make it all the way to Castle Grayskull to find out. Cool, sweet. So this is real, and we're in. Let's go look at our widget endpoint. Okay, looking good. Make a test widget; got to be active, of course; post it up. Killer. So that means the connection to the database is working: we were able to save it, and we've got the actual object. And then let's just make sure it did what we asked it to do. Yes, it did: it created the task and printed out the task ID. Did Django Q do what it was supposed to do? Yes, it did: it processed the task. So we're basically geniuses and we've already finished our entire project. Pretty impressive.

Yes, a question? Sure. That's actually a great example. What happened there is that my Django Q container came up before the database container. Now remember we were talking about dependencies? Docker Compose did the right thing: it waited for the database container to exist before it started my Django Q container. Did it wait for Postgres to actually be ready for connections? No. I'm not going to go into the methods around that, because I actually don't like them; they bother me. It's basically making a little wrapper script that tries to connect to the port. If you go into the source code repository, I put some comments in there for you with a link to the actual Docker documentation on how to handle it. But I'm not super satisfied with any of the options, so we're not going down that road. But that's a good spot, and a great point.

Cool. So I survived the first live demo. And now we did a docker-compose up and we're expert hackers; game over. You can all love good old J. Crown, because you get an extra 15 minutes to go hang out with your buddies; we have nothing else to talk about. Well, not really. We need to know how this is working. I just did that and it was all magic, and that's cool, but we need to understand what's going on. So let's follow the course of a request. We'll start with our nginx configuration. Don't worry if it looks cryptic to you, if you don't deal with nginx too often, because there are only a couple of things we care about in here.
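The relevant parts look something like this; a sketch, assuming the upstream service is named api as in the Compose file.

```nginx
server {
    listen 80;

    location / {
        resolver 127.0.0.11;          # Docker's embedded DNS server
        set $api api;                 # "api" is the Compose service name
        proxy_pass http://$api:8000;  # proxy to the API container
    }
}
```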
We're setting a variable with the value api; again, that's the name of our service in the Compose file. And then we're using that for our proxy_pass. So where did api come from? It's a magical hostname that Docker Compose gave us. It waved its magic wand, and now any of the containers can use the hostname api, and that's it, to connect to the API container. Cool, so that's how nginx gets to our API.

How is our API running? We have an entrypoint script, and what it's doing is collecting our static files, running our database migrations, and running the development server. This is probably not what you'd want if you were actually going to use this for production, which we'll talk about later, but for our development needs it's really cool, because any time we spin it up, it handles everything we would normally do by hand. And then we've got the development server running, our code is mounted in, and the development server auto-restarts. So at this point it runs the script and we know we can just dev away.

How does the API get to the database? We're using environment variables here. As you can see, it's nice to let people still use SQLite if they're afraid of Docker; don't forget that. And you'll see the host here: db. There we go. Docker Compose has waved its magic wand yet again, and our API can connect to the database. Super cool.

So that's all well and good, but things do get fishy sometimes, and we need to talk about tips, tricks, and caveats. Docker Compose has commands for starting, stopping, and restarting containers. It's important to understand that when you start, stop, or restart, you still have the exact same container. Nothing has changed. It's the same action figure, right? It's like putting your action figure away and pulling it back out: nothing has changed.

up, on the other hand, does a variety of strange things. If anything has changed, it's going to recreate the container, whether you want it to or not. And when I say changed, I mean anything in the Compose file: say you added a volume, removed a volume, changed a port binding or an environment variable, anything like that. Or say a newer image has been built; it will detect that and recreate the container. What does that mean? It kills the action figure you had and gives you a brand-new action figure. If you painted that action figure, your customization is gone, and you have to get the paint set out again. It's a bummer.

You'll notice in the first command I do there, I say docker-compose up -d api. I'm telling it: don't bring up the whole stack, just bring up my API box, right? But why did it look at the database? It says db_1 is up to date; that's not really what I asked for, is it? It's because I declared those dependencies. When I told it to bring the API box up, it wants to go look at what's going on with the database. If the database container didn't exist, it would create it, and again, if anything had changed, it would recreate our database container, which is maybe not what we want. We can avoid that with --no-deps. If I know I've only changed my API container and I don't want anything else to happen, I use --no-deps to bring it up, and I don't have to worry about something terrible happening to the database container that I vimmed into and did something awful to. Those are my customizations; I want them. I can also force a build with --build.
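In command form, a quick sketch, assuming the service is named api:

```console
$ docker-compose up -d --no-deps api   # bring up only api; leave dependencies alone
$ docker-compose up -d --build api     # rebuild the image first, then bring it up
```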
What does that mean? Say I changed something in my Dockerfile. If I just run docker-compose up -d, Compose doesn't care that the Dockerfile changed; it only reacts if the built image has changed. But I can throw a --build flag and force it to build as well. So that's cool. And I can force a recreate with --force-recreate. Keep in mind, up only recreates when something has changed. But say one of my buddies wanted to pull a little prank on me, went inside my container without my permission, and did something awful to my config file. Then I want to --force-recreate: get a brand-new container, throw away the old action figure, get a new one. That gets expensive with action figures, but it's cheap with Docker.

build does what it sounds like: it rebuilds the service. But it's important to understand that it uses cached layers if they're available. One of the features of Docker is that it caches each command in the Dockerfile as a layer, and that allows us to build more quickly, but it's not always what you want; sometimes the cache system is not doing what you expect. You can use --no-cache here if you know you want a completely fresh build, which is often what you want. Also, build will not pull your base image if it already exists locally. Remember we're pulling from python:3.6-slim? Once we've pulled it and it's on our machine, Docker doesn't care if it's changed upstream; we have to tell it we want the new image, and we can do that with --pull. And the pull command does exactly what you'd expect: it pulls an image. Again, if you're not used to Docker, pulling an image means going up to Docker Hub and fetching the most recent version of that image and tag.

Cool, so now we're getting into some tricks. You'll notice our API service and our Django Q service actually use the same Dockerfile. Part of the philosophy of Docker is that we don't want more than one parent process running in each container, so we've got one container to run the API and one to run the Django Q daemon. But what we're doing here is giving the service both a build and an image name, which is weird, because at the beginning I told you there were image-based services and build-based services, and now we're combining them. What on earth does that mean? It means that when Compose builds the image, it tags it with the value we put in image. So in this example, it goes through, builds the API, and tags the image my-api. When it gets down to the Django Q service, it already knows we have that image and doesn't try to rebuild it. That saves us a lot of time, because building can be annoyingly slow when you're installing a lot of requirements and things like that. So that's a nice little trick to get us moving faster.

Networks are important to understand. All of this, by the way, is stuff you can handle on your own with Docker, but Compose makes it a little easier. This example is from the official documentation. You can see we have a proxy that's in a front-end network, an app that's in both the front-end and back-end networks, and a database that's in the back-end network. I think the intent here is pretty obvious: our proxy container cannot get to our database container.
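Here's the build-plus-image combination, sketched; the my-api tag is from the slides, and the commands are illustrative (qcluster is Django Q's worker command).

```yaml
services:
  api:
    build: .
    image: my-api                # the built image gets tagged my-api
    command: python manage.py runserver 0.0.0.0:8000
  django-q:
    image: my-api                # same tag, so no second build happens
    command: python manage.py qcluster
```

And the networks example from the official documentation, roughly:

```yaml
services:
  proxy:
    networks: [frontend]
  app:
    networks: [frontend, backend]
  db:
    networks: [backend]

networks:
  frontend:
  backend:
```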
If it tries to use the hostname db, it gets no results, right?

This is where it gets really weird: scaling and load balancing. First of all, why on earth would you scale in a situation like this, where your containers have no constraints on their resource usage and they're all running on the same machine anyway? Why do I want two API containers? Don't worry about it; I just want it. So in this case, we're using docker-compose up again. (I accidentally left --no-deps in from an earlier command; it means nothing in this case, so ignore it.) And then I'm throwing --scale. The argument you give --scale is the name of the service, equals, the number of instances you want. In this case I told it I want two APIs, right? And it gave them to me. Now I have two running; docker-compose ps proves it.

And what does that do? What does that mean? Before, we had one nginx container and one API container; when nginx queried for api, it was very simple: it went to the container. What does it mean now that we have two containers? It means you just got round-robin DNS load balancing. This is me inside my nginx container doing dig lookups for api, and you can see that it rotates the IP addresses it gives back. Again, it's not important here in terms of performance or scalability, but it's going to come into play in a second.

Lastly, application configuration. This is more of a Docker problem than a Docker Compose problem. You're used to using files for your app configuration. That's not really the way you should do it in Docker, because the idea is that you have one image you can use anywhere you want: dev, prod, whatever. So we want to use environment variables; it's what we're doing in our settings.py to connect to the database. So here's our .env file. It's got an environment variable with the name TEST and the value test, and we point at the file in the Compose file. And so when we bring it up, we get it. Cool, I just proved it. We can also do inline environment variables, which we saw. You can provide the key and the value, or just the key, in which case the value is inferred from your shell. These inline variables take precedence over the .env file; that's important to know, because you can use them to override the defaults from the .env file.

So you think this is so cool. You've had this brilliant idea: you're going to spin up a micro EC2 instance and just deploy your whole stack on there, right? You'll save the company some money, and everything's cool; we'll move on with life. Should you be running this in production? Probably not, to be honest with you. There are far more mature solutions. Docker Compose is designed for development, not production. And at this point, we're pretty happy with it for development, because we can iterate quickly, and we can distribute it to our team members, and it's easy for them to run locally. But there are situations where it makes sense to run it in production. Say you have no network accessibility beyond a local network, and maybe you just want to run the full stack on a single machine. There's really nothing wrong with that, to be totally honest. And it could actually work out well, because you could have multiple machines with the same Compose setup, and if something terrible happened to one machine, you could fail over to the next one, right?
So again, you probably shouldn't do this, but if you want to, here's the deal. You need to get rid of the volumes that don't require persistence, meaning your code, right? You don't want anybody going in there, messing with a file, and instantly affecting the running container. We want the code inside the container to be exactly what was baked into the image, and it should not be changeable from outside. You need to use networks. You need to get rid of extraneous port bindings: remember how we bound our database port? We don't want anybody connecting to our database. Remove the binding; the API can still get to it, but nobody outside can. Use production environment variables: turn off debug mode in Django, please. Use a real server: runserver is not made for production; you want uWSGI, Gunicorn, something like that. And if possible, build the image once and pull it down, instead of building it on the host. Theoretically, a build should come out the same on any machine, but in my experience that's not always the case. That's fine when you're sharing in development, but once you're in production, you don't want to take that risk.

And the last thing we need to talk about, because I'm running out of time, is what it looks like when you release. Say you change the code and you want to update it; that's great, so you go to rebuild. What happens? Your old container dies. Your new container comes up. You just made downtime. Congratulations. So here's what we need to do for a zero-downtime deployment, and I'll leave you with this (there's a sketch of the sequence at the end of this transcript). We build or pull, so we get our new image. We scale up; see, I told you scaling meant something. We find the old container using docker ps. We stop that container, we remove it, and then we scale back down. And what happens when you do this? Magically, when you stop the old container, that round-robin load balancing we were talking about means nginx now only sees the IP address of the new container you spun up. There was a short window during the scale-up where nginx saw both; you killed the old one, now it only sees the new one, and you can move on with life. I was going to live on the edge and live-demo that to prove it to you, but I'm out of time.

So this is what I'm saying: you should probably be using one of these more production-oriented orchestrators if you want a Dockerized deployment, instead of Docker Compose. They're beyond the scope of our discussion today, and there are much better talks on those options. So now that you're all experts: go forth and break things. Thank you, thank you.

So, do I have five minutes for Q&A, or how does that work, guys? I think I read in the schedule that it's a five-minute Q&A. Am I being too retentive about this? Cool. Questions? Oh, sorry; you're a star now, dude. Bring in the mic; let's check out his mic skills.

Do you have any tips for, if you've got your test instance and you want to, and you shouldn't be doing this, but, deploy into production: how do you add the production environment variables?

Sure, yeah, I get what you're saying. Because you don't want to have two competing YAML files that you have to merge back together; it gets ugly. There aren't a lot of great solutions. One that I actually like to use is Ansible.
So I make an Ansible playbook, and I use Ansible Vault to encrypt all of my secrets, and then I separate them into different group_vars, right? Inside group_vars I have dev and prod, so when I want to switch between environments, it's as simple as running Ansible, which adjusts the .env file, and then docker-compose up -d. There are more sophisticated ways to do it; ideally you'd have something like Puppet, and Ansible isn't perfect for it, but in a situation where you just need to do it on a local machine that doesn't need to reach out to a master node, it kind of works. The other option is that people use different Compose files. I didn't talk about it, but you can inherit from other Compose files. So we could have one for dev and one for prod, and they pull from different environment variable files (there's a sketch of that below, too). Does that make sense? Killer. Anybody else? Come on, man. I know I didn't explain it that well. All right, cool, no worries. Well, I'll be around if any other questions pop up for you; just come find me around the conference. After the conference is over, you can find me at the nearest brewery, and I'm happy to answer questions at any time. So yeah, come find me, and thank you all so much for your time today. Enjoy the rest of the conference.
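As promised above, a sketch of the zero-downtime release sequence; service and container names are illustrative.

```console
$ docker-compose build api                          # or: docker-compose pull api
$ docker-compose up -d --no-deps --scale api=2 api  # new container joins nginx's round-robin DNS
$ docker-compose ps                                 # find the old container's name
$ docker stop myproject_api_1                       # old container drops out of DNS
$ docker rm myproject_api_1
$ docker-compose up -d --no-deps --scale api=1 api  # settle back down to one instance
```

And a sketch of the override-file approach from the last answer; the file names here are conventional, not from the talk.

```console
$ docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
$ docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```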