I worked with the Linux Foundation, I did a couple of big and small sites in Drupal, and I've been working especially with continuous integration and testing and that kind of thing, which kind of led me to Docker. So this presentation, if you want to grab it, is on GitHub — the entire presentation. My blog is the DCycle project. So basically, well, there's just two of you, but you can tell me what exactly you want to get out of this? For me — I come from a DevOps background, so it's more along the lines of implementing continuous integration for clients. But what I have a hard time understanding is how we can make it easy for Drupal installations, in the perspective of having multiple instances in a fleet of servers, without managing a virtual machine for every single one of the Drupal instances. Sounds good. I'm not sure I'll be able to touch on exactly that kind of thing. The reason I like Docker is more for setting up throwaway development environments, but I'll touch on deployment a bit and I'll see about that. How about you? What's your interest in this? I'm curious about Docker. We use Vagrant and Ansible to manage deployments, and we might have a project where we'll have to use Docker. You'll love it. So have you ever used Docker before, any of you? I have. Okay. So since it's just the three of us, we can make this more of a discussion, and if you want to stop me anytime, no problem. So we're going to look at why DevOps — I'll go through that fast because I think both of you are already sold on it. Why containers and not virtual machines? You don't need Docker to do containers, but it is better because of the caching. We'll look at what the differences are, and we'll look at idempotence versus just throwaway containers. I don't like idempotence. I think it's the wrong solution to a bad design, and I'll tell you why.
And we'll look at how Docker uses caching and why that's interesting, and after that at how data is used in Docker containers — one approach to data. We'll look at some demos: we're going to deploy a live site and do some automated testing and things like that. We'll look at some caveats, things to consider. And if you guys have questions, just ask along the way, since it's just the three of us. So why DevOps? At first we had nothing under Git and we were just FTPing into our servers. After that, the Drupal code came under Git, but not the database — we were always cloning this unversioned database every time we wanted to do something. And after that — this is where most of us are now — some sort of Features-based thing, with site deployment modules or some way to get features out of the database and into the code base. And the promise of DevOps is to move those last two parts under Git as well, to have something where everything's under Git. That's kind of where Docker comes in, in those last two parts — and perhaps Vagrant and Puppet and Ansible and all the other tools are in that space as well, and there's maybe some kind of competition going on there. So today I want to explain why I think in many cases Docker can be a really interesting alternative. So, really quickly, VMs and containers, just the way they work. This is the way most of us are going to use VMs: we're going to have, let's say, macOS, and we're going to have Vagrant and VirtualBox. And really, when you look at all the resources this stuff uses, there's a reason why it takes 35 seconds to one minute to launch a VM. Basically, it's duplicating a lot of the effort: all the libraries, all of the OS files, they're all duplicated. Whereas Docker works differently — and Docker is just a wrapper around containers.
So the way containers work is that your application — application A, for example — is going to think it runs under Ubuntu, and application B is going to think it runs under CentOS, but your host OS might be CoreOS or something else. And containers manage all that, making sure the applications think they're running in a specific environment while they're actually reusing a lot of what's common between different OSes. So as you can see, that's a lot more nimble, and you'll see in a few minutes how extremely fast it is. So that's the way containers work. And that, by the way, does not work natively on Mac or Windows. You have to have a Linux box — for example Ubuntu, or the one I love to use, CoreOS. CoreOS is a very, very tiny OS that ships with Docker and Git, and that's like the perfect OS. I have it installed in Vagrant on my Mac. So I never actually develop anything on my Mac; I always SSH into this CoreOS machine, which runs on my Mac. Yeah. So, in a typical cloud-based Docker offering — IBM Bluemix, for example, is about to launch a Docker container service — do you find that there might be a risk of having constraints on what you can or cannot do, because you won't have access to configure the core OS? I haven't come across any. I know there's some discussion around that. But when you log into these containers, you really are in your container. Everything acts exactly as if you were at the OS level. However, if you want to do some really fancy stuff at the kernel level, then you might not be able to. But in most cases I've seen, that's not a problem. And it's maybe a good thing that you don't have access to that. Is that container-as-a-service, or are they provisioning you a version of their OS with your own login where you can install Docker to play with your containers? It's like a runtime. They provide the OS, and you just deploy the actual containers. That was the problem.
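The CoreOS-on-Vagrant setup described here can be reproduced with CoreOS's own Vagrant repository; the exact commands below are an assumption about that workflow, not what was typed during the talk:

```shell
# Grab CoreOS's official Vagrant configuration and boot it.
git clone https://github.com/coreos/coreos-vagrant
cd coreos-vagrant
vagrant up      # boots a minimal CoreOS VM that ships with Docker

# From here on, all development happens inside the VM, in containers.
vagrant ssh
```

The point of the setup is that the Mac itself stays empty: Docker only needs a Linux kernel, and the tiny CoreOS VM provides it.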
I mean, we had a challenge — not with Bluemix, but with one of the providers the client was hosting with. What ends up happening is the OS that we're using has different packages and dependencies. So when we had to push to their production environment, packages were missing, and they had to deploy those. And one way to circumvent that is having regular touchpoints on what baseline OS we're building on top of, and what packages we're installing on top, and keeping that in sync all the time so that their system knows what we're trying to do. But why are you putting packages on the host? Because the only thing you really need on the base is Docker, and then everything else is in the containers. The way our team worked, we were getting our hands dirty in there, so we had a lot of tools floating around just for debugging purposes, like logrotate and all that other fun stuff — this was all around Docker 1.3, 1.4. Whereas what I would do is just have all of that stuff in another Docker container: a Docker debug container, or a Docker development container, or a Docker production container. All these containers are super cheap to set up and destroy, so I wouldn't do anything on the host OS. Actually, in my usage, I have nothing on my Mac and I have nothing on CoreOS either; everything I do is in containers. Yeah, and it works fine. Containers talk to each other. So we'll move on to data. In a typical setup, you're using tools like Ansible and Puppet and Chef that are idempotent. Idempotence — the idea is that you describe how something should be, and then the system makes it so. So for example, Puppet is going to have this weird, kind of ugly syntax here that's going to make sure that PHP is at 5.2, right? And to me that's clunky. It's hard to understand, it's hard to get your head around, it's hard to learn all these new languages.
So I try to avoid idempotence entirely. I try to do all my workflow without idempotence, and I'll tell you why in a moment. The reason we need idempotence is because we can't just get rid of this machine, right? This machine contains data, so we can't just get rid of it and replace it with a new one. So we need a mechanism to go in and change little parts of it to match what we want. That's idempotence, and that's how Puppet works, and Chef, and Ansible. Containers, on the other hand, are actually made to be thrown away. You can just get rid of the container and replace it. So for example, in the case of Drupal, the data will be your database, of course, and the sites/default/files directory. Those are the two things that are data in Drupal. So you don't want those things to be stored on the container. You want the container to have access to them, but you want them to actually be stored somewhere else, probably on your host OS. So you would run the MySQL instance within the container? Yeah, all the application stuff is in the container. The MySQL instance, the server, everything — it's all in the container. Just the database data itself is on the host machine, but not any kind of application to deal with it; that's all on the container side. So when you have something like this, you no longer need idempotence, right? Idempotence becomes irrelevant, because all you need to do when you want to update — let's say you want to update Memcache by a point version, or update Drupal from 8.0.1 to 8.0.2 or whatever — all you need to do, really, is get rid of your machine, destroy it, put in a new one, link that new machine to your same data, run drush updb, and you're set to go. So you don't need these idempotent tools that are so clunky and hard to understand. You don't need your team to understand them anymore to do DevOps.
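The throwaway-update workflow just described can be sketched in a few Docker commands. The container name, image name, and volume paths here are all hypothetical — they stand in for whatever your own deploy script uses:

```shell
# Stop and remove the old container; the data lives on the host, not in it.
docker kill drupal_site
docker rm drupal_site

# Start a new container from an updated image, linked to the same data.
docker run -d --name drupal_site -p 80:80 \
  -v /home/core/data/mysite-files:/var/www/sites/default/files \
  -v /home/core/data/mysite-db:/var/lib/mysql \
  myregistry/drupal:8.0.2

# Bring the database in line with the new code.
docker exec drupal_site drush -y updb
```

Nothing here describes a desired end state the way Puppet does; you simply replace the machine and reattach the data.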
So, aggressive caching: Docker basically is not going to do the same thing twice. If you launch a machine and it takes 30 seconds to launch in Docker, and you destroy it and relaunch it after that, it's going to take less than a second, because Docker keeps everything in cache; it incrementally keeps every state of the container. So that's nice, and you cross what I find is a psychological threshold of instantaneity. If something takes 30 seconds to launch, like a VM, it feels slow, and developers are just not going to do it — they're going to do it maybe twice an hour or whatever, and they're going to do a lot of development between the times they relaunch the machine from scratch. So the machines get polluted with all the stuff they might have tried and gotten rid of. And then tests are going to fail or pass, but it might be because the machine is polluted, not necessarily because the code itself fails or passes. Because it's now instantaneous with Docker, you can start a new container several times a minute if you want. You can change something and just rebuild from scratch every time, so you don't have a polluted environment. You can even do what Pantheon does. Pantheon actually builds these machines from scratch every time a request comes in for a website page. Let's say I want the About Us page on some website: I type it into my browser, it hits the Pantheon box, and at that second Pantheon launches the container, serves up the page, and gets rid of the container. So these containers just become very throwaway things. You never log into these containers; you just throw them out and replace them. What's important is your data, and what's important is the link between the two. So, a few demos. Name a Drupal module. Drupal 7? Whatever — Drupal 8, 7. Views. Views, okay. So I'm going to get Views, drupal.org/project/views.
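One way to see the layer cache at work is to build the same image twice. Each instruction in a Dockerfile produces a cached layer, and only steps after the first change are re-run; the image name below is hypothetical:

```shell
# First build: every step runs (this can take minutes).
time docker build -t demo-cache .

# Rebuild with nothing changed: every step reports "Using cache"
# and the build returns almost instantly.
time docker build -t demo-cache .
```

That second, sub-second build is the "instantaneity threshold" being described: rebuilding from scratch becomes cheap enough to do several times a minute.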
So I'm going to get the Drupal 7 version of Views, because there's no Drupal 8 version, and I'll show you how you can use Docker to do development on Views, right? Because don't forget, we no longer have MAMP on our machines now. MAMP is passé, right? So we're going to put Views in — this demo directory here is a space that's shared between my Mac and my CoreOS machine, so I'm going to develop there. Don't forget, I don't have Drupal, I don't have anything on this machine. For the purposes of using Docker, I found Docker kind of hard to get around at first. I'll just show you really quickly what the interface looks like when you're on a CoreOS box. You can do something like docker search drupal, and it's going to give you a bunch of Docker images that are supposed to work with Drupal, but there's no documentation about this anywhere. It's really hard to find stuff that works. So I built a project that I call the DCycle Box, and I'll actually download it now and show you what it does. The DCycle Box is a collection of Docker scripts that I've found and that I've created which actually work — I can guarantee they work, because first of all I'm testing them to a large degree, and second of all I'm using them all the time. So if you want to play around with Docker, the DCycle Box is a good place to start. The DCycle Box contains this "move to your project root" section, and depending on the type of project you have — in this case it's a Drupal 7 module — I'm going to go ahead and just copy all of this stuff into my Views directory here, right? And I'm going to be able to develop Views with Docker, and I'll show you how. So this is my CoreOS box, which is a Vagrant box, and I'm going to move into views. docker ps allows me to see what my running containers are; I don't have any for now. So, the DCycle Box — it's a collection of scripts, and I'm just going to run one now: the deploy script.
I'll give it a port number, in this case 80. If you want to have different instances running, you can have one on 80, one on 81, whatever. Since I only have one, I'll run it on port 80, and I'm going to run my deployment script. There, it's done. So basically what it did is it provisioned Drupal — I think it's an Ubuntu box. It put in MySQL, it put in Apache, it started all that stuff, it built the Drupal site from scratch, it installed Views on there, and all of that. As you can see, it took one second, or less, because of its aggressive use of caching. This means I've done this before, some time in the past — I must have developed a Drupal 7 module maybe last week, and Docker remembers this. So it's not going to reinstall Apache; it already knows, somewhere in its diffs, that Apache was installed in such a way. So if I run docker ps now, I see that I have one running container. It has a container hash, and you can log into it. So I'm going to actually log into this container now. So this is the IP address of my Vagrant box; I'm going to reload that, and if we're lucky it should work. So what this script did is it instantiated a Drupal 7 site and installed Views on it so I can start developing. Now I'm going to run another script I have in the DCycle Box, which is called uli, and I'm going to pass it 717 as an argument, which is the first few characters of this hash here — 7178 or whatever — and if it finds a unique match, it's going to log into that container and create a one-time login link for you, so you can log in and really start developing Views, doing what you want with it. And when you're done, you just kill off that container: docker kill 717, and that's it, that container is gone. So as you can see, every time we interact with Docker, it's basically instantaneous. So there's kind of no turning back.
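The "unique match" behavior of the uli script can be sketched as a small shell helper. This is my reconstruction, not the actual DCycle Box code: given the output of `docker ps -q` and a hash prefix, it only proceeds if exactly one container matches.

```shell
# match_container HASHES PREFIX
# HASHES is a newline-separated list (e.g. the output of `docker ps -q`).
# Prints the single hash starting with PREFIX; fails if the prefix is
# ambiguous or matches nothing.
match_container() {
  matches=$(printf '%s\n' "$1" | grep "^$2")
  count=$(printf '%s\n' "$matches" | grep -c .)
  [ "$count" -eq 1 ] || return 1
  printf '%s\n' "$matches"
}

# In the real workflow, the result would feed into drush, e.g.:
#   c=$(match_container "$(docker ps -q)" 717) && docker exec "$c" drush uli
```

The `docker exec … drush uli` line in the comment is an assumption about how the script logs you in; drush's uli command generates a one-time login URL.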
There's no going back to Vagrant and VirtualBox after this. You can't stand waiting 45 seconds for a box to spin up when you have it for free here. So, another module — a D8 module maybe. Do you guys have a D8 module in mind? Okay, I have one: Token. So I already downloaded Token, and — let me make sure I have no running containers — what I did with Token is I copied and pasted this "move to your project root, Drupal 8 module" stuff from the DCycle Box into Token, and I can run that now. Same thing. I'll run the script — oops, that's not there. Let me just reload. This happens once in a while; I have to reload the Vagrant box. Okay, well, while that happens, let's see if it's quick or not. Oh no, it's because it actually is empty. Okay, I'll just redownload the whole thing. So, "token drupal 8" — or maybe "token drupal" is better. There you go. And I'll get the Drupal 8 version of Token and move it here. So this is the out-of-the-box Token. Let me see if my Vagrant box reloaded. What I need to do now is move my "copy to Drupal 8 module" stuff into the Token module. So if you want to develop stuff for Drupal 8 really quickly, it's very easy — you just do it this way. You don't need to install MAMP and all this crap on your computer. So I'm going to head back to my Vagrant instance, and I just want to show you something interesting. If I go to share/demo — yeah — and if I go to token, I'll run the deploy script for Token. Same thing. Done. It just installed a Drupal 8 site on, I think, Ubuntu. So now I have access to it. Sorry — so that script also does the same thing? Yeah, not just Drupal, but the whole stack. In this case it's using SQLite, I believe; it's installing everything you need to actually use Drupal. Yeah, exactly — I'll get to that in a second. Good point. So this is a D8 site, actually. I just want to show you something really quickly.
If I go to docker ps now, I'll see that I have one container running on port 80. I'm going to head back to Views now, and I'm going to run deploy on port 81, for example. And now at this point I have two containers, one running on port 80 and one on port 81, and they're actually two completely different servers. So here's my Views development environment on port 81, and here's my Token development environment on port 80. And in fact, they're completely different. I can actually log into these containers. Say I log into the first one — 0839 — and run a command to get the version of Drush, for example. Just stop me if you don't get what I'm doing. It tells me that I'm using Drush version 5.10 with my Drupal 7 site, because presumably that's the version I want to be using for my Drupal 7 site. But on Drupal 8 I like to have the development version of Drush. So if I run the exact same command on the other container, 489, I get Drush 7.0-dev. So I can have all these different environments, with different versions of all kinds of software — MySQL and PHP and Apache — running in parallel, and just get rid of them when I want to. So that's it for the module development demo. Get rid of these. Yeah, okay, so how about other stuff? How about non-Drupal sites? Well, the same thing. I like Jekyll a lot — actually, this presentation is written in Jekyll. And for Jekyll, you kind of need to have a Jekyll server, and you need to have an Nginx server. It's kind of a complex setup. GitHub gives it to you for free, but with Docker it no longer matters if stuff is complex, because it's just so cheap to set up. So what I'm going to do is go into my presentation, which is what we're looking at now, and run the exact same script, which was meant for Jekyll.
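The side-by-side check being demonstrated looks roughly like this — the container hashes are the ones read out during the demo, and `docker exec` stands in for whatever login script the DCycle Box actually uses:

```shell
docker ps                         # two containers, ports 80 and 81

# Each container carries its own toolchain version:
docker exec 0839 drush --version  # Drupal 7 container: Drush 5.10
docker exec 489 drush --version   # Drupal 8 container: Drush 7.0-dev
```

Because every version of PHP, MySQL, Apache, and Drush lives inside its own container, the two stacks never interfere with each other on the host.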
And I'm going to put that on port, let's say, 999. So I want to develop that on port 999. And it's actually building two separate containers. So if I do a docker ps now, I'm going to have four containers running: two that are serving up my Jekyll site — one that's actually building the Jekyll site, and an Nginx one on Ubuntu that's actually serving it up — plus my D7 on CentOS and my D8 on Ubuntu, all containers. Because these are actually sharing a lot of resources, it's really, really cheap to have them running, which is kind of nice. So if I go here and type 999, I'm on my presentation, because it just set that cluster of containers up. How about a Drupal site? Well, a Drupal site is interesting because there are a few things you need to do on a Drupal site. First of all, the data is different depending on whether you're developing or deploying to production. I'll explain why. If you're developing a Drupal site and you have a module that you're working on, you want this module to be persistent even if your container disappears, whereas you don't care about the database — you can get rid of the database and get a new version of it quite easily. Whereas when you're in production, what's important to keep persistent is the sites/default/files directory and the database. So it's different in the two situations. So for a site, what I did in the DCycle Box was set this up in two ways: one making the actual code and configuration files persistent, the other making the database and sites/default/files persistent. I'm going to get rid of all of these containers because I don't need them anymore. Oops, that's not what I wanted to do. Okay, so I have one container left; I'm going to kill that. There, gone. So I'm going to go back to my demo, and I have a Drupal 8 site. Now, if you guys want to take a look at this, it's actually quite easy to do.
If you go to the DCycle Box website, which is box.decycle.com — let me just see if I have it here somewhere. No. So right here, to get a feel for how this might be used on a D8 project, you can click there, and you have a GitHub project with instructions on how to set it up. And what's great is that you can actually set this up directly on a CoreOS machine in the cloud. So that's actually what I did. I'm going to just log into DigitalOcean here. So I have this box called demo with today's date, and I can actually log into that. And this is a D8 site that I built from scratch — which is not very pretty; I'm not a designer — but it's set up. And the reason I put a picture and a note there is because I want that information to be persistent when I get rid of the box, and I'll show you how that can be done. I'm going to SSH into that external box. So I created this box. Okay, now I'm in my box. This is basically what you have on the GitHub account. So if I look at what my Docker containers are on this remote DigitalOcean instance, well, I have one Docker container. I created it two hours ago, and it's running what's presumably important live data — these kittens, this is stuff I want to keep. But the box itself — and this is really one of the main ideas of Docker — the box itself is throwaway. We don't care about the box, because we can rebuild it in one second whenever we want. The data we care about. So we need to separate those, and that's what I did on this machine. So I'm going to get rid of this box. I'm going to kill it. Just for clarification: the data that's stored in MySQL, it's stored on the file system of the CoreOS host? Yeah, exactly. In fact, anything I decide is important in my container, I need to specify in my Docker files that it needs to live on my host.
So I'll show you really quickly what these files actually look like. In the case of my Drupal 8 site, I have two Dockerfiles. This is really the heart of Docker, and there's not much to it, actually. I'm just starting from a Docker image that I know works — you can start from anything. And what ADD says is that I want to take what's on my host machine and add it to my container, and I don't want them to be synced. So in the case of my production site, all of my modules and all of my themes, I want these to reside on my container, because I know that on my production instance I'm never going to be actually changing anything in my modules — I'm going to be doing this on dev. So I don't care that they're added and then not synced; it's a lot faster. However, there's another way to add stuff, which is the -v argument, which I'll show you. In the case of production, when I run docker run — this is the command that's interesting here — when I'm running an image to create a container, I want to synchronize stuff between data/<project name>-files on my host system and /srv/drupal/www/sites/default/files in the container. I want these to be synchronized at all times, and I also want the DB to be synchronized with the host. So if I go back to my — well, I'm going to actually delete this. I'm going to kill this container. Oops — docker kill e4g. Yeah. So now if I log back in here, it's going to tell me there's nothing, because that container is gone. But the data remains. Why did the data remain? Because I told it I wanted to synchronize it with my host system. So if I log back onto CoreOS now and go to data, I see this file system — oops, what did I do? Oh yeah, I wanted to cd into data — this contains my database and my files. What is my database? Well, it's actually an SQLite database, right?
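The ADD-versus-volume distinction just described comes down to one Dockerfile instruction versus one docker run flag. A sketch of both — the image name and the host-side paths are assumptions; the container-side path is the one shown on screen:

```shell
# In the Dockerfile, ADD copies code into the image at build time;
# later changes on the host are NOT reflected in running containers:
#   ADD modules /srv/drupal/www/sites/all/modules

# At run time, -v bind-mounts host directories into the container, so
# anything the container writes there (files, database) outlives it:
docker run -d -p 80:80 \
  -v /home/core/data/myproject-files:/srv/drupal/www/sites/default/files \
  -v /home/core/data/myproject-db:/srv/drupal/db \
  myproject-image
```

That is why killing the container leaves the data behind: the kittens live on the host, only the web stack lives in the container.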
But you can also do this with a MySQL database. And my files are my sites/default files. So all of that stuff is safe on my host machine, and my containers, I can get rid of them whenever I want. And when I do decide I want to recreate this machine — or perhaps an updated version of it — I can just recreate the machine, and if it's out of sync with the database, I just run drush updb on the machine, and there you go. Right? So -e prod just tells it I want a production environment and not a development environment. So now it's actually checking — I have a script that checks, okay, I want to rebuild a new container around this data. And you see "no database update required". That means the script knows the database might be out of sync, so it's checking whether it needs to run drush updb or not, and it realizes it doesn't, so it's happy. And I can go about my business now and revisit the site. The site is down for a few seconds when I do updates, and that's the only thing that happens. Back to the demos. Okay, so now a deployment to production. Right now I'm going to head back to my local CoreOS machine, go to my Drupal site, and run my deploy script, but for development, right? So now you see it's not doing a drush updb; it's doing the initial setup, because I don't need a drush updb — it's just setting up the initial site, basically. So it's doing all the stuff it needs to do to create this website from scratch, including — if you're familiar with the configuration management system in D8 — importing the base theme and all the views and all that stuff. And we're going to see in a moment how I can go ahead and do some development on this.
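The "no database update required" check can be sketched like this. This is my reconstruction, not the actual DCycle Box script, and it assumes drush's updatedb-status command (which prints pending updates, or nothing when there are none) is available in the container:

```shell
# Decide from drush's pending-update report whether `drush updb` must run.
# $1: output of `drush updatedb-status` — empty when nothing is pending.
needs_updb() {
  [ -n "$1" ]
}

# Usage inside a deploy script (sketch):
#   status=$(docker exec "$container" drush updatedb-status 2>/dev/null)
#   if needs_updb "$status"; then docker exec "$container" drush -y updb
#   else echo "No database update required"; fi
```

The container is rebuilt around the old data either way; the check only decides whether the data needs to be brought in line with the new code.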
So I understood that the CoreOS host file system is your persistent data. For your files, if you're not syncing, then say, for example, from the front end my user goes and uploads a file — that file will be stored in files somewhere in Drupal. Well, in dev, that's what happens: if I'm developing this website and I add temporary nodes and files and whatnot, as soon as I destroy this container, those things are gone. Oh, that's why. In production, they're synced — they're always synced. As soon as I add a new node, my database gets updated; when I upload a new image, that image gets stored in the file system, which is also synced. So these things never get out of sync. So I now have a D8 development environment here, on my local Docker machine, which is, I believe, here. And now I want to add something to my website — for example, a vocabulary. The first thing I need to do is log into my machine. Oh, I need to give it the container name I want to log into: 4b2. And this is Drupal 8, so there's no more Features or anything like that. I'm just going to log in and change something about this site. Perhaps I don't actually like this theme that much; maybe it was a bad idea, so I might want to change that theme to something else — maybe something minimalist. I'll change it to this here. So it's the default. Kind of ugly, but that's what I want. And if I go back home now, this is the development I'm doing. I could also add views and vocabulary terms and whatnot, but this is the only thing I wanted to do for now. So once that's done — oops, "system rebooting in five minutes". Okay, that threw me off. So basically, now what I want to do is run another script called export-configs, and tell it which container: 4b2.
And it's going to basically grab that new configuration — you can see system.theme up there. It's updated, so it knows about that. Anything you do — views, anything at all — this script grabs it and puts it in the configuration store, which, in the case of a dev environment, is synchronized with my CoreOS machine. So now, if I head back to my Mac and do a git status, my system.theme.yml has changed, and if I do a git diff here, it's changed from major to classy. So that's cool. Do I actually want that on the master branch? I'm going to check it out on a new branch. git commit — "demo with different theme" — git push origin demo, right? Oops — there you go. Okay, so I'll head back to my production site, right? Again, using Docker, I'm going to log into my production site. This would actually be your continuous integration server doing this, not you, but for the purposes of the demo I'll do it myself. And this container here contains my old theme. I don't want to log into it and change the theme — I don't care about this container anymore. So what I want to do is just delete the container, right? And at this point, I can deploy my new container on port 80 with my production script. "No database update required" — if I had had database updates, it would do them here. So if I'm, for example, upgrading modules or whatnot, the database updates happen here. And at this point, when I log back into my remote CoreOS, I'm still logged in, because don't forget, everything is the same — the cookies, everything remains identical. Except... nope, didn't seem to work. Well, it should have. Anyway, live demos never work. I have to apply it. Yeah, it's supposed to do that automatically. I don't know why it didn't. Oh well. But the idea is — oh, you know why? What would you say? No, no, it should do that automatically.
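The dev-to-production loop just walked through, as a shell sketch. The export-configs and deploy script names are the ones used in the talk; what they wrap is my assumption (on Drupal 8, exporting configuration is drush config-export under the hood):

```shell
# On dev: export the active configuration out of the container into
# the synced config directory (assumed to wrap `drush -y config-export`).
./export-configs 4b2

# On the host: version the change and push it.
git status                      # system.theme.yml modified
git checkout -b demo
git commit -am "demo with different theme"
git push origin demo

# On production (normally the CI server, not a human): throw the old
# container away and redeploy; the production script re-imports the
# configuration and runs any pending updates against the old data.
docker kill <old-container>
./deploy -e prod 80
```

The key property is that no change is ever made through the production UI; configuration travels only through Git.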
But the reason it's not working is that I used the wrong branch. I should use the branch demo. All right. So what I'm going to do is git checkout demo, git pull origin demo, docker ps. So see, here I made a mistake. Well, it doesn't matter — I can just kill it. It's super cheap to create these containers. Perfect example. Yeah. So that was a good example. I'm just going to recreate it; it takes a few seconds. And now you're going to actually see it going through the update process, where it re-imports this new configuration. Because don't forget, you don't want to do anything in the user interface, right? This is all your continuous integration server doing this stuff, so you can't log into your interface at all. Yeah. Cool. I'm not sure why this one is taking so long, because it's all in cache. Maybe I did something wrong. But anyway. Huh? So there, you have "no database updates required", but you will see it probably — yeah, you see there, system theme update; that's what it's doing. And there, the new container is done, using the old data. And I can now log back into this thing, and I'm going to have whatever changes I made — in this case it's just a theme change, but it could be anything. And the important thing is it's using my old data as well. So: no idempotence. You don't need to wrap your head around all this Ansible and Puppet and Chef stuff, which to me is completely opaque and incomprehensible. You don't need that stuff. All you need is throwaway containers. That's the idea. That's what I love about this approach. Yeah. That's a good point — that's actually a really good point. So your Dockerfiles change really often. I'm going to actually show you a Dockerfile that's more representative of what a Dockerfile may look like. This, by the way, is a Puppet file — you don't want to get into that, it's just too difficult. So here's what a typical Dockerfile may look like.
So in this case, FROM, basically, the base box; there are lots of base images going around. In this case, I decided to just start from Ubuntu, straight up Ubuntu, and I'm basically doing everything I need to do to set up a D8 site. So your question, basically, is: when you go ahead and install, for example, curl or PHP 5 (at some point I'm installing PHP 5; on line 17 I'm installing PHP 5, for example), what you're saying is that when you're developing locally, the PHP version that ships with Ubuntu's package manager is PHP 5.2, for example, and when you do it in production, it's 5.3. That's a really good point. And what I try to do as much as possible is to use Docker Hub. So I build these Dockerfiles; like, for example, this one, I built it, I gave it a version, and I threw it on Docker Hub. And that is never going to change; it's never going to be rebuilt. And then on my local Drupal 8 site here, my Dockerfile is actually using the image, the actual binary from Docker Hub, so it's never rebuilding it. That's one way I'm getting around this problem. The other way I'm getting around this problem is visible here, for example; yeah, this is a good example, line 36. In this case, I don't want to just drush dl drupal, because drush dl drupal doesn't tell me which version I'm getting. I want to actually download a specific version of Drupal, and ideally a specific version of PHP and anything else I'm using. So ideally, you want to pin the versions of what you're using in there. And when beta 11 comes out, well, you just change beta 10 to beta 11, throw away your old server completely, and recreate a new server from scratch, but with the new version of your image. So that's something that could happen, and that I've seen happen in the past: you have different versions and different builds of your Dockerfile.
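A version-pinned Dockerfile along those lines might look like the sketch below. It is illustrative, not the presenter's actual file: the base-image tag, package names, and the beta 10 tarball URL are assumptions following the patterns described above.

```dockerfile
# Hypothetical, version-pinned sketch of the Dockerfile described above.
# Pin the base image to a tag instead of bare "ubuntu", so rebuilds stay reproducible.
FROM ubuntu:14.04

# Install the web stack; apt packages can also be pinned to exact versions
# (php5=<version>), though the mirror must still carry that version later.
RUN apt-get update && apt-get install -y curl php5 php5-curl

# Download one specific release of Drupal rather than "drush dl drupal",
# which grabs whatever is current; bump beta10 to beta11 when you upgrade.
RUN curl -fsSL https://ftp.drupal.org/files/projects/drupal-8.0.0-beta10.tar.gz \
    | tar -xz -C /var/www
```

Combined with pushing a versioned image to Docker Hub and building on top of that image rather than rebuilding it, every environment gets the exact same bits.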
And those are the two techniques that I decided to use to get around that issue. The new image... you want to always reuse that image until you know you have to upgrade it. Yeah, that was kind of the idea. All this stuff is experimental, by the way, so I wouldn't go around using it in production; it's just a way to get a feel for the technology and see how it could theoretically be used. Next question: would you be comfortable using it in a mission-critical production environment for a big client like the STM, for instance? I probably would, but being aware that it's an experimental kind of thing, right? I probably would today, yeah, myself. But we'd have to make sure the stakeholders understand that it's a new technology and it's not used in production by a lot of people. But it has advantages, so you can take the time you save through those advantages and use it to make Docker better and more secure and so on. And I would probably do it, for sure. Shopify uses Docker, and they blog about it and so on. So the last thing I want to show you is CI. So I'm going to head over... Well, you're going to have problems with Vagrant and with Puppet and with Chef, and with just having some guy in a corner office logging into the production server and playing around. Whatever you do, you're going to have problems. I think Docker is in a position where it's stable enough. I know a lot of startups that are actually making money from their products, not from Docker itself, and they're using Docker in production. And the common issues they run into are not Docker malfunctioning; it's some migration drift or configuration drift that happened. Right. Yeah. And everything is under version control, so I don't foresee any issue. There is the technical problem that Docker runs as root, so you could theoretically have different containers access each other. It's a theoretical problem.
And the guys at CoreOS are working on an alternative to Docker called Rocket. Rocket actually runs under different users and so on; it's supposedly more secure. I don't fully understand the security concern myself; maybe I just don't get it. I don't see how it's possible, but apparently it is. So what I want to show you is... if you were at the presentation about Travis, yeah, you saw that they were using Travis. I don't use Travis, because CircleCI supports Docker and Travis does not, so I use CircleCI instead. And in this case, what I'm testing is the same idea, basically: my test is basically building my Dockerfiles and making sure they work. So you can run this test on every commit. And when you log into CircleCI, actually I have it here, I can see that, for example, this build was a success, and you can see what the continuous integration server did to run it. So for example, building my production site, you can actually see that it grabbed the base image correctly, it managed to deploy Drupal and so on, and there are no errors. So you can quite easily integrate this with a continuous integration server without installing any software on the continuous integration server. So your development box, your production box, your CI box, your testing box, your staging box: the only software you need on these things is CoreOS. One piece of software. And the idea, basically, is... this is the problem we want to solve, actually. We have all this stuff we want to move around, and we have all kinds of different ways of moving it around, shipping and trains and so on, and it just becomes kind of a nightmare. You know, you have to install WordPress on Ubuntu and you have to install Jekyll on Windows, and it just becomes this whole nightmare, this whole matrix.
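The CircleCI setup described here, where "the test" is simply that the Docker images build and boot on every commit, can be sketched with a circle.yml along these lines (CircleCI 1.0 syntax from that era; the image name and port are hypothetical):

```yaml
# Hypothetical circle.yml: the test is simply that the Docker image builds and boots.
machine:
  services:
    - docker   # CircleCI 1.0 enables the Docker daemon this way

dependencies:
  override:
    - docker build -t myorg/drupal8 .

test:
  override:
    # Boot the freshly built image and make sure it answers over HTTP.
    - docker run -d -p 8080:80 myorg/drupal8
    - curl --retry 10 --retry-delay 5 http://localhost:8080/
```

Nothing Drupal-specific is installed on the CI box itself; it only needs Docker, which is the same property the production and development boxes have.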
And the idea of Docker is really to do what the shipping industry did, which was to invent this kind of container. It's really like a shipping container: you put all your stuff in the container, and the people moving it around don't need to know what's inside. CoreOS doesn't need to know what's inside your containers; it just manages containers. And when you're inside a container, you don't need to know who's moving you around; you just know you're inside a container, and they're always the same. So that level of abstraction is extremely powerful, right? So I'll just finish this. That's it for my demos. You know, Docker doesn't run natively on Mac or Windows. Okay, fine. But CoreOS is such a tiny OS that you can just install it with Vagrant, super easily, and it just works. So forget Mac or Windows, just work on CoreOS. Docker runs as root, like I said, which is a security issue, but you have Rocket being developed, and it's a good idea to start working with Docker now and, when Rocket comes out, probably switch then. So this is what stuff was like before Docker, and what it's often still like now: it's just a pain in the ass to put stuff on servers and use Puppet and Chef and all that stuff to manage these things. It's just really a big pain. And after,