All right, still getting feedback, that is annoying. Well, all right, let's get started. Just in case you don't know which room you're in, this is Ride the Whale: Docker for Drupalists. The first question you might ask is: who is this weird person in front of you? Why do they have a stuffed whale in their hand? And do they really get that excited about a boarding pass for a business trip? Well, I'm Tess Flynn, otherwise known as socketwench. That's wench, not wrench. I'm the module maintainer for Flag, Flag Friend, and Examples. Currently I'm freelancing with a company called TEN7, and I'm still looking for full-time work. I like doing DevOps, advocacy, and evangelism. Who wants a small t-shirt? You raised your hand first. I also like throwing things at people.

So, what's wrong with MAMP? For years I've had a hate on for MAMP. Why do I hate it so much? It comes down to a few reasons. Number one: there's no sandboxing in MAMP. You might have a situation like this: you're working on your laptop, things are fine, and you decide to do some cleaning around your laptop and figure out what's on there. You're working away and you go: wait, what the heck is this? How long has this been here? How long has any of this stuff been in here? Five years? Five years! I could have had my 15 gigabytes of disk space back. I haven't used that project in forever. That happens a lot with MAMP. It's the inverse of the law of Vegas: what happens outside the repo stays on your system forever. It's like pulling weeds: there are files, databases, configs, old binaries, and it takes a lot of time and discipline to find and remove them. And quite frankly, as developers, we're lazy. We're good at automated laziness, and cleaning stuff is not fun and not easy to automate. You can't just grep for everything, because configuration lives in little pieces scattered across different files and file types. It's really complicated, and it's a mess to untangle. Fortunately, Docker sandboxes absolutely everything you do, and there are standard commands for deleting everything, no matter what project or technical stack you're working with.

Problem number two: the dev environment you create is not repeatable. So you have this wonderful thing: after years of begging your boss (look, I can't use this laptop anymore, it's ancient, it's got four gigabytes of memory, it runs a Core Duo, what are you asking me to do here?), you finally get your new laptop. You're all excited and happy, and then you've got to recreate all your sites. Oh, God, that's going to take all day. And the thing is, instructions aren't enough. A lot of people will say you just need to follow the instructions to set up the project. The problem is that it's not enough. Human beings are fallible. They're forgetful and often, like myself, they're fried. They make mistakes. They don't want to work through long lists of instructions, and the instructions themselves are open to interpretation: you might read one step as do this, this, and this, and another person reads it completely differently. That happens a lot, and it burns time and energy unnecessarily.
We don't want any special snowflakes, and Docker has no special snowflakes by design. Containers are repeatable; they're meant to be stamped out again and again and again. And you only need a few text files to describe what a container is. Problem number three: the dev environment isn't shareable. You might have a situation like this: there's you, and there's a new developer coming onto the project. They ask, a little impatiently and a little nervously (you know, they seem like that person of the group), well, I'm having a little trouble setting up the site. And of course you're the experienced dev on the project, so you say, oh, sure, you just have to blah, blah, blah, blah, blah, blah, blah. And the other person sits there looking like they need to use the bathroom really badly, because they have no idea what you're talking about. You can't share a house of cards, and that's what bespoke environments are. Every time we hand-build a dev environment, it's a house of cards, and we can't share it. This adds a lot of hidden costs, too: onboarding, troubleshooting, and recovery time. Have you ever been on a project with five or six developers, and there's always that one guy, Steve, who has this one problem with the environment, and it's only them, and only on their laptop, and only on Tuesdays at 2:59 after they've had their third coffee? We don't know why it's like that, but it's repeatable for them and nobody else. Why do we have that problem? And it could take months to figure out that it's because they set up a cron job eight months ago for an unrelated project. That happens in real life. You might have heard of the DRY principle: don't repeat yourself. It should apply to servers, too. We shouldn't have to repeat ourselves when we build a development stack for our projects. Docker is shareable. Everything you build with Docker is downloaded from the Internet, and you can add the build files to your repository. They're just text files, a few kilobytes in size. They're not huge blobs, not huge disk images, not ISOs, not any of that.

So how do you get Docker? Do I need to pay for it? No. The Docker core engine is free and open source; you can find the source code on GitHub. For Mac and Windows users, though, you'll probably want to get Docker from their website, at docker.com/products/docker, and you want at least Docker 1.12. That's going to have the newest features and be the easiest version of Docker to use on these platforms. If you have an older system that can't use it (I forget exactly what the specifics are: if you're not using Windows Pro, or if you're using, what was it, macOS 10.5 or older, something like that), there's another product called Docker Toolbox, which provides an older technical stack that will still run Docker. Now, this older technical stack includes VirtualBox. Wait. You've been talking about containers all this time, and now you're talking about VMs? Why do I need a VM? So here's the thing: Linux and Docker work hand in hand. Linux provides the sandboxing and the runtime to run all of the containers and all of the applications inside containers. Docker ties everything together for you with infrastructure and common utilities to make things easy. So you have your technical stack: you have your Apache or MySQL, and you have your PHP.
They all run as Linux applications on top of a Linux kernel. The Docker engine mediates the interactions between that Linux kernel and those applications, and each application can be running in its own container. To control all of this, you have a series of Docker utilities which interact with the engine and the kernel. And this looks great. But if you're on Mac or Windows, you have a problem, because you've got two different operating systems, and they're like toddlers fighting over the same toy: no, I'm the OS! No, I'm the OS! And now they're fighting each other. So to actually make this work, Docker uses a hypervisor, a VM. It's one hypervisor for the entire system, no matter how many containers you're running on it. That's the key difference between Vagrant and Docker: one VM for everything. It's an incredibly lightweight VM, only a few hundred megabytes, and it runs everything. In Docker 1.12, the hypervisor is embedded: on Windows it uses Hyper-V (hence Windows Pro), and on macOS it uses an open source project called HyperKit. Older versions of Docker rely on VirtualBox. How many people are using Linux in this room besides me? Oh, that is a lot more people than I thought. Wow. Okay. There's no virtual machine necessary on Linux, because it's the native environment; Docker just uses your host's normal Linux kernel. Use your distro's standard package manager tools to install Docker.

All right, we've got Docker on our system. How do we get some containers? Docker is actually a lot like Vagrant here: you rarely build containers yourself. What happens instead is that you go shopping for containers on Docker Hub, an online repository of ready-to-use containers at hub.docker.com. Here's how it works: a community developer, like myself or you or anyone else, comes up with a great new container and wants to share it with everybody. They take the container source code and upload it to Docker Hub. Docker Hub takes that source code and builds a real, actual working container on their servers, then takes a snapshot of it, called an image. When you need to use that container, you do a docker pull from your terminal, and it downloads the image to your system, ready to use. It does all of that work for you. There are a number of official containers you can use, typically created by the project maintainers: containers for Apache, for PHP, for MySQL, MariaDB, tons and tons of Linux distros. There's also an official Drupal container at hub.docker.com/_/drupal; that URL always confuses me for some odd reason. It's great for demos, but it's a bad container to do actual work in. It's meant to demonstrate Drupal; that's its purpose. And this is one thing you see with containers that's very different from Vagrant: a container might be meant as a self-contained demonstration system rather than as a base for building other stuff, and other people can very easily say, I'm going to build another one meant for production environments, or meant for my development environment. So there are a whole bunch of different containers out there, and it's important to keep in mind what use you're looking for. How do you download a container once you've found it? From the terminal, you do a docker pull.
It looks very much like a Git command: you pass it the image name you found on Docker Hub. Usually that's some person's username, a slash, and the container name; for official containers, it's just the project name. So let's start with Debian. Debian runs the vast majority of web servers out there; the other competitor is CentOS, and there's a Docker container for CentOS as well. But if we're going to use Debian, let's do a docker pull debian, and that's a good base for our web projects. You might notice that some containers have this other format, where they have an image name, then a colon, then another name after that. That's called a tag. Tags are usually used to version Docker containers, so you can have a drupal:7 container or a drupal:8 container. They're also used for variants: you might have a CLI container that also has, say, Xdebug installed for some odd reason, and you can make a tag called :xdebug, and that works perfectly fine. Tags are optional, too; you don't have to use them. There's a default tag called latest, which covers the most up-to-date default version of that container. It's also important to note that tags are unique per container, so check the container's page on Docker Hub.

All right, so we did a docker pull on debian, but we still don't have a running container yet. How do we get one running? Pretty simple: docker run, and then the image name. Here's what that command looks like. We have docker run, then we pass -i, because we want to run the container interactively, and -t, because we want to emulate a normal terminal session, as if we were SSHing or telnetting into the container, then the image name to start, and the command to run inside the container. You'll notice you're immediately dropped into a root terminal, with a really long string of characters after it that's unique per container; I'll tell you what that is later. This was done on a Mac, but you'll notice it isn't a Mac anymore: it's running Linux, some variant of Linux. So now we know we're outside of macOS; we're in the Linux world. To exit this terminal session, use exit, like you would with SSH or telnet.

What if you want to run a container in the background? This is a little more complicated when you want to do it in one big command. You do a docker run -d, for detach, then the image name, then the command. In this case, we have to use a special bash command that tells the container to keep running but do nothing. That's a bit of a hack; I'll show you a better way of doing it in a second. And we get back this really long string of characters that starts with b4d. That's called a container ID: a unique identifier for every container instance in the world. When you do one docker pull and one docker run, you get one container ID. If you do another docker run, you get a different container ID; it's unique for that one instance. You don't have to use the entire huge string, either: you can just use the first few characters as a shorthand, as long as they're unique within your system. How do you list the containers on a system? docker ps. Think process, like the Unix ps command.
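Roughly, the commands from this section look like this. The keep-alive command at the end is one common trick; any command that never exits would do:

    # Download the Debian image from Docker Hub.
    docker pull debian

    # Run it interactively with a terminal, starting a shell inside.
    docker run -i -t debian /bin/bash

    # Run it detached in the background; tail -f /dev/null keeps the
    # container alive while doing nothing.
    docker run -d debian tail -f /dev/null

    # List running containers.
    docker ps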
Now let's say you want to shell into a particular detached container that you ran in the background. You do a docker exec, pass the container ID, and the command to run. So if we want to get into that b4d container we created earlier: docker exec -it b4d and the command, and we're back inside the container that was running in the background. Again, use exit to quit. How do you stop a container that's running in the background? docker stop, and you pass it the container ID.

That's all great, but there's a problem: that's a lot of commands. I don't want to remember all those commands. I don't want to set up shell scripts to run all those commands. I want somebody else to do the work for me. Well, there's a wonderful way of doing that. First, you have to remember something: a container is just a process. That phrase gets handed around a lot when introducing people to Docker, and it's kind of true, but also kind of not true. When we think about LAMP, a LAMP stack has multiple processes: a process for Apache, sometimes a process for PHP, a process for MySQL. That's a lot of different processes. Do we have to start multiple containers? The answer is: kind of, yeah. So how do we do that without going crazy running 50,000 docker run commands? There's a wonderful thing called Docker Compose. Compose is what got me into Docker; it's the part that tied everything together so well that I loved it. It lets you manage multiple containers, what I like to call a container set, with just one text file. That text file is called docker-compose.yml. It's a descriptive file format, not imperative, and you typically save it in the root of your project directory.

So let's create a basic LAMP set with just one compose file. We start with version: 2. That's not your file's version, and not the Docker version; it's the docker-compose syntax version, so it's almost always going to be version: 2. After that, we define a number of containers, which in Docker Compose lingo are called services. We create a web service, which runs the PHP container with the Apache variant, and a db service, which runs the MariaDB image from Docker Hub. How do we start this container set? We do a docker-compose up -d from the directory the compose file lives in; that's the working directory when you issue the command. Once we start it, we'll have multiple containers running. We can list just the containers that are part of our container set using the docker-compose ps command, and we'll see our two containers. And we notice the MariaDB container actually quit. You might also notice that the container names are a little weird. Where does this ridethewhale_db_1 come from? It works like this: the parent directory name of wherever the compose file lives, then the service name, then a unique index. When I made this, I did it in /home/tess/project/ridethewhale, where my compose file is, so Compose started the containers and applied a unique index so that each name is always unique on your system.
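In commands, everything so far, plus a minimal compose file along the lines just described. The php:apache image is one plausible choice for the web service:

    # Shell into the detached b4d... container, then stop it.
    docker exec -i -t b4d /bin/bash
    docker stop b4d

    # docker-compose.yml
    version: '2'
    services:
      web:
        image: php:apache    # the Apache variant of the official PHP image
      db:
        image: mariadb

    # Start the container set in the background, then list its containers.
    docker-compose up -d
    docker-compose ps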
Next, we need to map some ports. Right now the containers are operating in their own little world; they're not connected back to your system in any way they know about. So we're going to map some ports: the port on our host to the port inside the container. We update our compose file and map our Apache container's port 80 in both directions. All right, now after we make that change to the compose file, how do we get it reflected in our container set? We need to kill the containers and then restart them. This is very different from what you're used to with Vagrant, where you just update the one long-lived VM. Docker containers are meant to be disposable: you're supposed to be able to kill them at any instant, at any time, for whatever reason, so you get used to throwing them away whenever you don't need them. So we do a docker-compose kill, then another up -d, and then we can go to localhost on our system, because as far as the containers know, they're on the same IP as our machine. And... a 403? That doesn't look right. Well, actually, this is good. This system didn't have Apache installed on it, so the fact that we're getting a 403 means there's now a real Apache process running that answered us with a proper response. We're getting somewhere.

Now we need to get files into our container. How do we do that? Docker uses volumes. A volume is one of two things: a persistent directory inside a container, or a directory on your host system mounted into the container. To create a volume in Compose, you add a volumes section and then map the path on the host to the path in the container, just like the port mapping. Our project looks like this so far: we have our compose file, and we have a docker directory which has our web root in it. So in the compose file, under our web service, we add a volumes section and map the doc root. Again, that path is relative to wherever the compose file is located. Then we map it into the container; in this case, the default doc root for this container, /var/www/html. Now, you might notice something a little interesting: I used two different styles of paths in that volume mapping. On the host side, I used a path relative to the compose file; that's the easiest way to do it. On the container side, always use an absolute path. That gives you the most consistency when mounting a volume inside the container. All right: we kill the container set, up it, refresh, and behold, we have our index page. It's getting Docker-y in here.
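Put together, the compose file with the port and volume mappings looks something like this. The ./docker/webroot path just follows the example layout above; adjust it to your project:

    version: '2'
    services:
      web:
        image: php:apache
        ports:
          - "80:80"                          # host port : container port
        volumes:
          - ./docker/webroot:/var/www/html   # relative on the host, absolute in the container
      db:
        image: mariadb

    # Compose file changes only take effect after a kill and re-up.
    docker-compose kill
    docker-compose up -d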
Okay, what about that db container? Why did it quit earlier? Maybe it was missing something. Some configuration, maybe. How do we pass configuration to a container? There are a few ways we could do that. We could bake it into the container, so we use a standard database name, standard user, and standard password all the time, but that's really constraining and really limiting: I don't want to have to make a new container on Hub every time I change these relatively simple values. We could mount a configuration file or directory using volumes, but that's complicated when there's no standard format; you have to run commands and then parse things. So that's not how most Docker containers do it. They use environment variables: we tell the container, use this, and specify the environment variable on startup.

How do you set an environment variable in Compose? You put an environment section under the service name, and then you add one or more variable-and-value pairs, key-value style. What variables should you use for a given container? You really need to check the container itself: go to the container's page on Docker Hub and find out what variables it supports. For MariaDB, we actually have several: one for the MySQL root password, one for the database name, one for the MySQL user, and one for that user's password. These are all on the Docker Hub page for the MariaDB container, all very well documented. So we can specify those values in our compose file.

But wait, why is there no port? Let's go back. We specified a port mapping for our web container so we could reach the Apache instance from our host. But this db container is, for all intents and purposes, a completely separate server from the web server. So how do these two communicate? Well, actually, we don't need to worry about that. Docker Compose knows you're making a container set, knows these are related containers, so it automatically creates a private network on your system that connects all of those containers together. The containers themselves come preconfigured with the right ports open, so you don't need to worry about port 3306 to get MySQL talking to everything else. If you do want to reach the database container from outside, you can of course map the port; there's no problem with that if you want to use utilities on your host system to talk to it. If we do that, we can re-up our container set and then use the mysql command on our host: we pass it the username we defined in the compose file, say that we're going to give it a password, tell it the host is our own localhost, 127.0.0.1, on port 3306, type the password, and we're in. Even though we don't have MariaDB installed on this system, it's running inside a container.
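Concretely, the db service and the host-side client command look roughly like this. The variable names come straight from the MariaDB page on Docker Hub; the values are placeholders:

    # in docker-compose.yml, under services:
      db:
        image: mariadb
        environment:
          MYSQL_ROOT_PASSWORD: root_pw
          MYSQL_DATABASE: drupal
          MYSQL_USER: drupal
          MYSQL_PASSWORD: drupal_pw
        ports:
          - "3306:3306"   # optional: only needed to reach MariaDB from the host

    # From the host, connect through the mapped port.
    mysql -u drupal -p -h 127.0.0.1 --port=3306 drupal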
Now we can get to Drupal. Finally. I haven't given away a t-shirt in a while. I've got a medium; you want a medium? You wanted a medium. So let's prep some files. First, download Drupal core into the volume-mounted directory. Then set the permissions on the files directory and create services.yml and settings.php, just as you would on any normal Linux server. Once you refresh, you can go to localhost and you get a Drupal install page. Finally, we have Drupal working. And then there are some problems. Dammit. We're missing some extensions. We just pulled the default PHP Apache container from Docker Hub; maybe we can customize it and put all the extensions we need on there. So let's talk about customizing containers. (There's a broken image there. Anyway.) Dockerfiles are the container source code. Dockerfiles are always called Dockerfile, and unlike the compose file, they're imperative, not descriptive. First of all, you'll want to create some directories for your Dockerfiles: since they're all named Dockerfile, you should give them a concrete directory structure. A common convention (just a suggestion, but a pretty common one) is to create a .docker directory, and inside it a directory for each container in your container set. For the containers you're going to customize, you create the Dockerfile inside that subdirectory.

The first two lines of the Dockerfile are a FROM line and a MAINTAINER line. The MAINTAINER line is pretty obvious: it's who works on this Dockerfile. But that FROM line... what? Why do I need a FROM line? I know I'm going to create a container, but in Dockerfiles we always start with another container. Is that true? That's actually true. In Docker, it's turtles all the way down: every container in Docker is based on another container that already exists. You might have a container that runs Drupal; that container is in turn based on an earlier container that runs PHP; that one is based on an earlier container that provides a Debian environment; and on and on until you get to the parent of all containers, the scratch container. The scratch container is the starting point for all containers in Docker, and it's just an empty tar.gz file. You might also hear the phrase base image. A base image usually refers to the base Linux install you're using, usually Debian or CentOS or Arch or whatever, but a lot of people use the phrase for whatever image you start with. The PHP Apache image will be our base image in this case.

You might notice another thing that's weird about Dockerfiles: there's no install directive. This is Linux; we're not going to dictate how you install stuff. We let the base image decide how that works. So for install commands on Debian, we use apt-get; on CentOS we'd use yum; and some containers, such as the PHP Apache container, provide a specialized install script to help you install everything. To run these commands, we have a RUN directive, which just specifies a command to run while building the container. Here's what our Dockerfile looks like if we want to run Drupal 8: FROM the PHP Apache image, then we install several libraries that GD needs, then we use the container-specific script to install a bunch of extra PHP extensions: gd, dom, intl, pdo, and mbstring. I did all the footwork for you there. How do you use the Dockerfile you've created in a compose file? We use the build statement. Before, we had an image statement, which referred to the image name on Docker Hub; now we have a build line, which refers to the path of the Dockerfile. In general, keep this path relative to wherever the compose file is located, not absolute. So we update our compose file, and we changed just one line: the build line, which now says .docker/web. Everything else in the file is exactly the same as before. And this is one thing that's nifty about extending Docker images: it works like object-oriented programming. You find the container, the server you want to modify, and you just add to it. That's it. You don't have to pile a whole bunch of configuration on top; you don't have to recreate the entire world. You just add what you need and you're good to go. And if we're going to do a build, we have to simulate what Hub does: we have to build those images ourselves before we can use them.
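A rough sketch of that Dockerfile, plus the compose change that uses it. The exact -dev packages and extension list vary by image version, so treat this as a starting point rather than gospel:

    # .docker/web/Dockerfile
    FROM php:apache
    MAINTAINER you@example.com

    # Libraries that gd and intl need to compile, then the image's helper
    # script to build the extra PHP extensions Drupal 8 wants.
    RUN apt-get update && apt-get install -y libpng-dev libjpeg-dev libicu-dev \
        && docker-php-ext-install gd dom intl pdo pdo_mysql mbstring

    # In docker-compose.yml, the web service swaps image: for build:
      web:
        build: .docker/web   # path to the directory holding the Dockerfile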
So we have a docker-compose build command, and we can specify the service name to build, like our web container; but if we specify none, Compose will go through, find all of the services that use build, and build them all for us. You need to do this before running up. So now we have a database container that's running and working, configured with the username and password we set. We have Apache, now running all the extensions we need. Let's add some utilities. We need Drush. How do we get Drush? Well, there's a container for that. You can go to hub.docker.com and search for Drush, and we can add that container to our container set. But wait, how does it see my files? Drush needs a connection to the database, and it needs the Drupal files Apache is serving. But each one of these services is its own unique server, right? So how do I get those files in there? We need to start by reusing a volume. Now, sure, we could copy and paste the volumes section from our web service over to our Drush service in the compose file. But what if it changes? That gets complicated. No. We could also use a new facility in Compose version 2 called named volumes, but that's more heavy-duty than we need. So we can use another thing called volumes_from. volumes_from is really great, because it says: for this particular container, use all of the volumes specified on that other container. You pass it one or more service names, and it grabs those volume configurations, whatever they are, and mounts them exactly as they would be into this container. So we mount our doc root, along the same exact path, into our Drush container.

When we re-up our container set, though, you might notice something weird. Why did the Drush container just stop? The MySQL container's working, the Apache container's working, but the Drush container just stopped. What? So here's the thing: containers can take two different forms, kind of like Linux processes. A container can be a background, persistent process, like Apache or MySQL, but there are also task-specific containers: you run them once and they stop. That's what this Drush container is. It's set up as a task-specific container: it executes the Drush command and then it quits. So you have to run it interactively. How do we run the Drush container we added to our compose file? We do a docker-compose run, specify the service name from the compose file (which is just drush), and then the rest of the Drush command. We're creating a new container instance each time, but those are only a few bytes in size. Remember, a container is just a process, and it stays up only as long as that process does. If that Drush command does its work and stops, so does the container. So let's say we want to do a drush si, a site install, using our Drush container: docker-compose run, the service name, the Drush command, and we pass it all the different parameters, and boom, we get Drupal installed in our container. Awesome.
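A sketch of the Drush service and the run commands. The drush/drush image name and the site-install parameters here are illustrative, not gospel; note that inside the Compose private network, the database host is just the service name, db:

    # in docker-compose.yml, under services:
      drush:
        image: drush/drush   # assumption: a community Drush image on Docker Hub
        volumes_from:
          - web              # reuse every volume the web service mounts

    # Each run spins up a new, tiny container instance that exits when Drush does.
    docker-compose run drush status
    docker-compose run drush si standard -r /var/www/html \
        --db-url=mysql://drupal:drupal_pw@db/drupal -y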
All right, I've got a large. Who wants a large? My aim was a little off that time. I also have some TEN7 stickers and these lovely iron-on patches, which you can get right up here on the swag table. So let's say you have an existing site, not a new site, that you want to put onto Docker. What do we do? First of all, you want to avoid database dumps. I hate database dumps. Really, please, do yourself a favor: don't use them. DB dumps are like tribbles. Sure, you get one database dump, and it's only a few megabytes and it's not a problem, but pretty soon that one dump is five different dumps, they're all in your Git repository, and now your Git repository is two gigabytes in size. DB dumps are like tribbles: they're small, but they quickly become a very, very big problem. Don't use them. Instead, put some content in your code. Work with your client to create representative content, then use something like UUID Features to put that into code in your repository. That way the site can always be re-initialized without a database, with fresh content that's useful for testing with things like Behat.

But not everyone can do that. Okay, fine, you have a database dump. What are you going to do? There are a few ways to get an existing database into a container. You can use the client on your host to load it; we showed that earlier, and it's great for smaller databases. If your dump is somewhere between 1 and 50 megabytes, you're generally okay. It looks like this: you do a docker-compose exec, run the mysql command inside the container, and pass your username, your password, the database name, and the path to the dump. It's just like running mysql on your host system, except it's got docker-compose in front of it. But let's say you've got a multi-gigabyte DB. One project I'm working on has a 7-gigabyte database that has to be loaded every time. You want to be very careful with these. First, do a lot of the things you would normally do for a production MySQL instance: increase max_allowed_packet so you can get the dump into the system without it dying on you with that horrible "server has gone away" error. Another alternative is to import the dump from inside the container. I'm not going to go into how this works in detail, but generally you create a volume: make something like a db-dumps directory in your project, put your database dump in there, and gitignore the dump, please, for the love of the universe. Then you go into your compose file, find your database service, and add a volume to mount that directory into the database container. After that, you can use the docker exec command with bash to get an interactive session, and then load the database as if you were on a MySQL server instance.
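Sketches of both approaches, with placeholder credentials and paths. The -T flag matters here: Compose allocates a TTY by default and won't accept piped input without it:

    # Small dump: stream it through the client inside the db container.
    docker-compose exec -T db mysql -u drupal -pdrupal_pw drupal < dump.sql

    # Big dump: mount a (gitignored!) dump directory into the db service...
    # in docker-compose.yml, under services:
      db:
        volumes:
          - ./db-dumps:/dumps

    # ...then shell in and load it from inside the container.
    docker-compose exec db bash
    mysql -u drupal -pdrupal_pw drupal < /dumps/dump.sql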
What about cleaning up? Our project is done; it's pushed, it's out in the world, everything's great. How do we clean up? You can delete your container set using docker-compose kill and then docker-compose rm, which will stop all the containers and then delete them from your system. But one thing I only discovered a few months ago: wait, I deleted all my containers, so why is my disk still full? Volumes are not deleted by default in Docker. You have to delete them explicitly. That's actually meant as a data-saving feature, to keep you from accidentally deleting all your production data. So you have to be explicit about it: you specify docker-compose rm -v when deleting the container set, where -v means yes, delete the volumes too. All right, is there anything else? Well, we also have all those images we downloaded from Docker Hub, and there are tons of them: the snapshots of containers we pulled from Docker Hub, plus any images we created using docker build or docker-compose build. So what do we do? We can list those images using the docker images command, which also shows how big each one is. You'll notice they're a lot smaller than the average Vagrant box. How do we delete images? We do a docker rmi (remove image), and we can pass it all of the images as a list using docker images -q, which limits the listing to just the image IDs. Let's say we need to clean everything out of the system: all the containers, all the images, all the volumes, everything. How do we nuke it from orbit? We use three lines: we kill all the containers, passing a subcommand which lists all the containers for us; we remove all the containers and their volumes, listing all containers running or not (that's what -a means); and then we remove all the images. If you're running on macOS or Windows, there's a much easier way: go into the Docker preferences and reset to factory defaults. That wipes everything off your system and gives you a clean, back-to-factory state. It's one of the nicest features of Docker for Mac and Windows: the ability to wipe everything out with a few button clicks.
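The cleanup commands, roughly. The $( ... ) subcommands expand to lists of container or image IDs:

    # Delete the container set, including its volumes.
    docker-compose kill
    docker-compose rm -v

    # List images (with sizes), then delete them by ID.
    docker images
    docker rmi $(docker images -q)

    # Nuke it from orbit: all containers, their volumes, and all images.
    docker kill $(docker ps -q)
    docker rm -v $(docker ps -a -q)
    docker rmi $(docker images -q)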
Where do we go from here? So far we just have that single task-specific Drush container. We might want to make a better CLI container: one that's always running, so we don't need to create a new container instance every time; one that has other utilities in it, like Drupal Console, Sass, Grunt, and so forth; and one we can shell into, so we don't have to go through the creation phase each time. There are lots of ways to do that, and we can do it ourselves. First of all, you can always read the documentation at docs.docker.com, which was recently open sourced, so anyone can contribute to it on GitHub. You can also read my blog post series, which builds things much more from the ground up (bottom-up, versus the top-down approach of this presentation) on my website, at deninet.com/tag/docker-scratch. You can also use other pre-made containers; there are lots of them to choose from, so find one that fits your style and your project. I also recently released two bare-bones Drupal containers, Drupal Base and Drupal CLI. There's a link to the presentation at the end of the talk; you can find them on Docker Hub, and you can fork them on GitHub and make your own. You might also want to do some Drupal 8 module development. I have a project called Dropwhale, which is a drop-in Drupal 8 development environment: you just have your module project, and you don't need to download and install Drupal. It does it all for you; it pulls and installs Drupal dev internally and gives you everything you need to get going. You can find it at github.com/socketwench/dropwhale. If you want to run a production site, or do other client work, you can also look at the Docker4Drupal project. This is a relatively new one, but it's full-featured: it's got Redis, it's got Memcache, it's got Solr. It's like a buffet table: it provides a number of different containers to choose from, and you put them together with the configuration documentation they provide. You can find it at docker4drupal.org. And you can always build your own. All of these Docker configuration files are just text files. They're not large; you don't need to store a huge ISO or a huge volume dump anywhere (unless you have a database dump, and then that's your problem). There's nothing wrong with making a per-project container set. You can customize it so it's always repeatable, so that your instructions to anyone joining the project are: do you have Docker installed? Good. docker-compose up in the project directory. You're good to go, because it does it all for you, and that's what Docker is there for. Also, if you come up with a container that's really useful and you want to share it with everyone in the entire world, you can contribute it back to Docker Hub. It's free, anyone can do it, and Hub does all the building of your containers, so you don't have to do that docker-compose build step yourself.

I want to give a few special thanks: to the Drupal Association for letting me be here after I was laid off a few weeks ago; to TEN7 for providing me some freelance work in the interim; and to Mark Drummond and Paul Mitchum, who were my guinea pigs and encouragers on my Docker journey, along with a whole bunch of other people who are why I'm here today. Thank you. You can find this presentation on github.io. Who wants a whale? You want one. And I think I gave away my last shirt, so remember, there are patches and stickers up here. Please don't make me take them back to my freelancing boss in shame. The patches are really nice, by the way. They're really, really nice. They're iron-on, they're sew-on, you can safety-pin them on, because I'm like that. Are there any questions?

Question. So typically, when you have multiple projects, you'll have a unique container set in each project directory, stored in the repo. When you need to switch from working on one to another, you do a docker-compose kill, then go to the next directory, docker-compose up -d, and you're in. That's it. That's all you have to do. And the idea is that you really do want to shut one down and start the other up. Unlike Vagrant, starting up a container is really cheap, on the order of seconds, no matter what the container size is. That multi-gigabyte database container? Starting it takes seven seconds. That's it. It's really cheap to switch between multiple projects, so you get used to that flow. You don't need a proxy server routing port 80 between 15 different Vagrant boxes; you don't need any of that madness anymore. You can just shut down one project and start up the next. And because you're putting all your data in volumes, you don't have to worry about re-initializing the database each time; it's persisted for you as long as the container isn't deleted.
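That whole switching flow is just a couple of commands, with illustrative directory names:

    # Shut down project A, bring up project B; each step takes seconds.
    cd ~/projects/site-a && docker-compose kill
    cd ~/projects/site-b && docker-compose up -d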
Question? How do you switch between the development config for a project and the production configuration? That is a really complicated topic. So the question, basically, is how do you switch between the production version and the development version of a project. Docker in production is a contentious topic; that's the first thing I have to tell people. Typically there are going to be a lot of differences, and there are security-model concerns. There's still the big issue that you have to run Docker as root right now, which is a problem; they're working on that, and it's supposed to be fixed in an upcoming version. So there are still a lot of reasons why people don't run it in production. But you can use a different compose file name: you can specify the compose file name in the compose commands and use an alternate one, and you can use a symlink during a CI process to switch from one compose file to the next depending on environment variables. That way you can have two different configurations. (There's a quick sketch of this at the end of these questions.) There are also a lot of production-oriented utilities for Docker, such as Docker Swarm, which is now baked in in 1.12 and includes a load balancer and the ability to manage Docker containers across multiple separate hosts working in tandem. The jet lag is really getting to me right now.

Question. So, is there any advantage to running something like Drush as a separate container, as opposed to just including it in your own base image? The question, basically, was: is there any advantage to running Drush as a separate container versus building it into a CLI container? The answer is basically laziness. If you have a lot of different tools in your CLI container, managing them and building them into the container can get annoying after a while, because you have to build that container locally unless you contribute it back to Docker Hub. So that is a problem, but typically it's just laziness, because it takes time to make the compose file, make the Dockerfile, find the commands, and so on.

Question. So here's the thing: I don't use anything special like that anymore, because with the most recent versions of Docker it doesn't matter. What I usually do instead is have a volumes directory, and that directory is live-mounted no matter what the underlying Docker volume driver is. As long as that volume is mounted, if you change a file while the container is running, the change is reflected inside the container within a few seconds. VirtualBox had problems with that; yeah, the older VirtualBox-based versions had problems there, and there are a lot of hacks and workarounds to get better performance, but on the most recent versions of Docker you don't need to worry about it so much.

Question. So the question is about Docker Swarm and what you do with volumes, and the answer is: I don't know. I'd have to research that myself.

Question. I need to get to this side of the room. The question was about other orchestration tools versus Docker Swarm. That's really a decision about which one you want. Because Docker has been so popular, a lot of these tools are baking in Docker as an underlying target, so it just depends on what you like and what you're used to. Okay, are there any other questions?
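Circling back to the dev-versus-production answer above: switching compose files is just the -f flag, with whatever file-naming convention you pick:

    # Development: the default docker-compose.yml.
    docker-compose up -d

    # Production-ish config: an alternate compose file.
    docker-compose -f docker-compose.prod.yml up -d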
You have a question. We use Docker Machine and it's been fine, but we've had quite a lot of issues; compared to the Linux environment, it doesn't behave quite the same. We've tried switching to the native Docker, but we've had quite a lot of stability issues, and we've had two developers with the same kinds of problems. I don't know if you have any insight into where that stands and how it's going.

So the question is about stability: the older versions of Docker that use Docker Machine and VirtualBox, versus the newer 1.12 with the embedded hypervisor. I've had a few issues with stability on the newer versions, mostly entire container sets just stopping when I ask them to do too much. That's primarily a VM tuning issue, and the problem is that because the hypervisor is embedded in 1.12, it's a little harder to tune; the configuration for it is more buried. That's one of the reasons it's not entirely production-ready yet, in my opinion. It's not as clear a path, and you might have to work between Docker Machine and the newest embedded versions of Docker until newer releases can resolve those issues.

Right. Yeah. So even on the newer versions of Docker, there's still an embedded Linux VM in there. There has been talk about a native Windows environment inside a Docker container (not so much a native Mac one, because Apple doesn't want to do that), but it hasn't actually happened. Microsoft is pouring a lot of money into Docker, though, so it might happen someday. Not particularly; most of my experience with Docker has been setting up development environments, not running them in production, because I don't think it's entirely ready for that. Okay. I think I'll need to talk with you more about that problem, because it gets into some really complicated topics.

Any other questions? You had a question. Yeah: from within the container, if you want to connect back to the host environment, say to set up Xdebug, how do you do that? With Xdebug, you usually have to use a particular configuration, the connect-back option, which has generally been considered insecure; but because it only exists on a container that's only accessible from your own system, the risk is very low, and the attack surface is not very large. So there is a way of doing it, and I can point you to a file that does that.

Anything else? Okay, you had one more question. If you had a lot of projects, if you were a support agency, how would you recommend sharing, whether Dockerfiles or Docker images? What would be the best way to do that? So the question is how to share your containers. In general, if the containers are very project-specific, I would put all of the compose files and Dockerfiles inside the project's repository. Then whenever somebody pulls, they can build from there, and the environment lives alongside the project itself. If you think it's shareable beyond that, you can upload it to Docker Hub instead. Okay. Thanks, everyone.