sed, grep, awk; sed, grep, awk; sed, grep. Don't worry, that's just an old UNIX ritual to appease the demo gods. They like the sacrifice of Pike Place Roast. Welcome to Captaining a Container Ship: Docker Orchestration with Kontena.

Docker orchestration, what's that for? Well, orchestration is basically running Docker containers at scale. If you are only running single containers, that's still quite easy. You can do everything by hand. Docker gives you a lot of tools that you can use to download images, spin up containers, check if they're running, and shut them down again. Who of you is already at that stage, running a handful of containers, spinning one up from time to time for development or testing or things like that? Well, the probability is high that you like Docker, because it's really easy to get into and get started with. But when the number of containers grows, things get much more complex. So if you're really at that stage, you can't do manual work anymore. You need solutions that will help you maintain all these containers and all their relationships. And if you don't learn to use these orchestration tools, you might end up like this. So who is at that stage? Let's prepare you for it.

Why orchestration? As soon as you've started using Docker in earnest, there will be more than one container, because if you follow the Docker philosophy of one single application per container, you'll end up with groups of containers. And these containers all have to be managed. You need to spin them up, you need to shut them down. They need to be distributed, if you like to, for example, use different resources for different applications. Let's pick an example. Running a web server like Apache requires very different resources from your infrastructure than, for example, running a MySQL container. You might want to run MySQL on a host that uses SSD-based storage. You might want to run a memcached container on a machine that has lots of RAM. So you have to select the right infrastructure for the right container. They need to be scheduled, which means they need to be spun up, and if they go down, by accident or because you want them to, you probably want to start them again at some point. They need to be load balanced; especially if you are using the distributed nature of these things, you'd like to be able to distribute load between multiple Drupal containers, for example. And that gets quite complex as soon as the number of containers grows.

Then there are all these dependencies between containers. If your Drupal container, for example, needs to talk to the database or to memcached, there needs to be some way for these containers to know of each other. They need to be linked or connected by other means. If you have outages, say one of your container hosts goes down and you spin up another host with a new memcached instance, your Drupal container needs to discover that the old memcached instance has gone away and that there's a new one to replace it. Another thing that gets interesting quickly is sharing secrets between these containers. For example, if you are creating a Drupal database, the database server that creates it needs to use the same user account and password as the Drupal container that wants to connect to this database. How do you make sure that, for example, if you change the database password and spin up a new database, your Drupal containers will automatically use the new password?
If you are doing this by hand, you'll quickly get into a situation where things don't work anymore because you forgot to update a certain setting here, or forgot to replicate a configuration file to another location, things like that. That's where orchestration will take a lot of work off you. A simple incarnation of this is included in the standard Docker application suite. It's called Docker Compose, and it lets you spin up multiple containers on a single host, normally your workstation, and takes care of all these things like links. Declaring shared secrets is easy because you only have one single file where you declare all these things, since everything is limited to a single machine.
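Just to make that single-host starting point concrete, here's a minimal sketch of such a Docker Compose file; the images and credentials are illustrative, not from the talk:

    # docker-compose.yml (sketch): a single-host Drupal setup.
    # Everything lives in one file, so "sharing a secret" is just
    # repeating the same environment variable, in plain text.
    version: "2"
    services:
      drupal:
        image: drupal:8.2
        ports:
          - "80:80"
        links:
          - mysql
        environment:
          MYSQL_USER: drupal
          MYSQL_PASSWORD: change-me        # must match the value below
      mysql:
        image: mariadb:latest
        environment:
          MYSQL_ROOT_PASSWORD: also-change-me
          MYSQL_DATABASE: drupal
          MYSQL_USER: drupal
          MYSQL_PASSWORD: change-me        # shared with the drupal service

This works fine on one machine; the pain starts when the containers have to live on different hosts and the one shared file no longer covers everything.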
As soon as you start spreading out, especially if you want to run a production infrastructure, you'll need something like Kontena. Kontena is, as the company behind the application (also called Kontena) puts it, an open source container platform built to maximize developer happiness: works on any cloud, easy to set up, simple to use. That's why I immediately started to like Kontena when I began dabbling with it: they keep their promise. It's easy to set up, it's simple to use, and it can still do a lot of stuff.

So why choose Kontena over the alternatives? Kontena is simple. In this talk I'll show you how to spin up single containers and a whole stack with a load-balanced Drupal installation. If you use the demo script that I've made available (I'll show you the link later), you could go from zero to a running Drupal infrastructure in five minutes. It's inexpensive, since it's open source software. It's full-featured, in the sense that it has everything you need to go to the next level: if you are still using Docker commands like docker run, or even Docker Compose, it's just an easy step to get into Kontena and start using a multi-host infrastructure for your containers. It's production-ready: people are using it in production, and it can handle production load. It's secure: it supports a lot of the stuff you need to run secure web applications. And it's flexible, because it is very adaptable. You can install it in less than one hour. Everything comes bundled, of course, in the form of containers again, and it simply builds on the YAML syntax that Docker Compose uses, extending it to run on multiple hosts.

There are alternatives, of course, and you probably know a few of them. The most prominent is Kubernetes, and there are also things like Mesos. But when I started looking into these last year, it felt like using these alternatives is like wanting to bake cupcakes with a recipe that starts, "First, let's build an industrial oven." Kontena isn't like that. Another thing that makes Kontena inexpensive is that you can use Let's Encrypt out of the box, so you don't even have to spend money on SSL certificates. It comes with all the building blocks you need. It has its own private image registry, so you don't have to put your Docker images on the public Docker Hub. It comes with a load balancer that's amazingly easy to use. It supports service discovery. It has its own secrets storage, where passwords and SSL certificates can be stored securely; and of course there's an unwritten law in IT that all secrets storages have to be named Vault, because that makes googling easy.

And it has its own key-value store, so if you need to keep information about your infrastructure or your application stack in a central place, it has that as well. Kontena has well-thought-out user authentication: it uses OAuth, so that the people who use Kontena can authenticate against a central user registry. It supports health checks that are used both in load balancing, so that traffic doesn't go to a container that has just gone away, and in scheduling, because you can define things like "I'd like to have three Drupal containers", and if one of these three containers goes down, Kontena will automatically take care of it and make sure another container is spun up. It supports stateful applications, meaning applications that store file data locally; these containers aren't as easy to relocate to other machines as stateless containers, and Kontena does support that. It has a real-time logging and statistics engine built in, and it lets you view an audit trail, so you can see at any time who did what to the infrastructure, and when.

That brings us to the topic of security. All Kontena-managed containers are located in virtual networks that have their own IP address space, and all the traffic between the containers is encrypted automatically. In order to access these containers directly, Kontena allows you to connect via common VPN software, so you can access the encrypted network and, for example, push images to the local private registry that lives in the encrypted network as well. This image shows the platforms that are supported by Kontena. You can run Kontena on Amazon Web Services; I'm using it here with Google. You can run it on your own on-premise infrastructure, for example on Red Hat Linux or Ubuntu. Kontena is really very flexible in terms of infrastructure, and it even allows you to build hybrid infrastructures that run in part, for example, on AWS and in part on-premise.

Before we get into the practical stuff, just a few words about myself. My name is Jochen Lillich. I'm Chief Everything Officer at freistil IT. Freistil is the German word for "freestyle", and it's pronounced exactly the other way around. I've been told that there are people in the USA who like to fry everything. I don't know why you'd want to fry steel, but that's exactly how our company name is pronounced. On Twitter I'm @geewiz, and you can find my email here. If you have any questions about this talk or anything else, simply give me a shout and I'll be happy to respond.

Our main product is freistilbox, a managed hosting platform specialized in running business-critical Drupal and WordPress websites. We started in 2010 with a 100% specialization in Drupal, and in 2013 we added WordPress support as well. We limit ourselves to these two content management systems because what we do is implement a DevOps workflow with our customers. Unlike with common hosting providers, you have direct access to our engineers, and they complement the development teams of our agency customers during the whole application lifecycle. You can, of course, simply get a freistilbox plan and run everything yourself, but the magic is having basically a NoOps scenario for our customers, where we take care of everything that needs to be done on the infrastructure side and developers can take care of the application. One can't go without the other, though.
Just this morning we had a conference call with a customer who had trouble with their database. We did an analysis of all the database queries and found that in 24 hours their application queried billions and billions of database rows, and we worked out with the customer why that was and what to change in their Drupal application. So their database got to work much faster again, and their application as well. So we have the necessary know-how about Drupal and WordPress internals to help our customers and really make a 100% complete DevOps cycle.

Back to using Kontena. The basic Kontena setup is very simple. There's a single Kontena Master server that does all the management, and then you have the Kontena nodes, each running the Kontena Agent, which communicates with the Master and executes what the server tells it to do. So the Master controls the whole platform. You get access to the server via OAuth: you can either use the OAuth provider that Kontena runs at kontena.io (cloud.kontena.io, to be precise), or you can use your own OAuth provider if you want.

Creating a Kontena Master is easy. You simply install the Kontena CLI Ruby gem, which provides you with the kontena command-line client, and then you execute a command like this. Here I'm using the DigitalOcean plugin to spin up droplets on DigitalOcean, and I basically tell Kontena to start a droplet, deploy the Kontena Master on it, and give it a name; use my DigitalOcean token that I've stored in an environment variable; use the region lon1 (London); use my SSH key; give the droplet a size of one gigabyte; and let me use the Kontena Cloud for authentication.

So let's see if my sacrifice worked out. Looks like I'm still connected. This will take a minute: it'll first spin up the droplet and then deploy all the software that's necessary to run the Kontena Master, which is quite a simple thing. The Kontena Master application is written in Ruby and consists of a web application on one side and a MongoDB instance as its back end, and that also makes it quite easy to build a high-availability setup, simply by using the common techniques to get redundancy for a web application and redundancy on the MongoDB side. Come on, DigitalOcean. As long as the thing is spinning, it's not the Wi-Fi; I've connected via mobile data, so I'm not dependent on the Wi-Fi. Two commands, to be honest: first of all, install the CLI, which is simply gem install kontena-cli, and then I had to set the DigitalOcean token in the environment variable, and that was it. And I've put the demo script on GitHub, so you can take a look at it later.

So now we have our master server. And as you see, it created the droplet, then installed the software, and also created a grid. Grids are separate groups of Kontena nodes, that is, host machines, and each grid has its own encrypted overlay network. Actually, each Kontena grid uses the 10.81.0.0/16 IP address space, I think, if I recall that correctly. But since everything is contained in this virtual network, grids don't get in conflict with each other. So you can run as many grids as you like, and each grid is basically its own self-contained universe. And you can, if you like, connect to each grid via VPN: the Kontena CLI has a vpn subcommand that spits out an OpenVPN configuration you can use right out of the box.
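For reference, the whole thing really is just a couple of commands. A minimal sketch, assuming the DigitalOcean plugin's flags from that era, with the token in a DO_TOKEN environment variable and file paths illustrative:

    # One-time setup: the CLI plus the DigitalOcean plugin.
    gem install kontena-cli
    gem install kontena-plugin-digitalocean

    # Spin up a droplet and deploy the Kontena Master onto it.
    kontena digitalocean master create \
      --token "$DO_TOKEN" \
      --region lon1 \
      --size 1gb \
      --ssh-key ~/.ssh/id_rsa.pub

    # Later, to reach a grid's encrypted overlay network directly:
    kontena vpn create                  # deploys a VPN service to the grid
    kontena vpn config > kontena.ovpn   # ready-to-use OpenVPN configuration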
So if you don't want to use the test grid that has been created automatically, you can use the kontena grid create command to create your own, and then tell Kontena that all subsequent commands will be associated with this grid. I can also tell Kontena what the initial size of the grid will be, and a size of two is the minimum required size, because Kontena uses a quorum-based database for both service discovery, where all services that come up get registered and can be queried, and for the key-value store. In order to always have a quorum, you need at least three members. In that case, if you start with two nodes, you'll have those two nodes plus the Kontena Master itself, and that gives you the minimum of three members in the quorum. Come again? Yeah, it's automatic. So the question was: is etcd run on the Master as well? It is, yeah. That's just the name of the grid; that's arbitrary.

So let's see. That runs pretty fast. I simply spin up another grid called demo-grid, and with kontena grid use I've told the CLI to use that grid from here on. Let's get to the nodes. All these nodes are discovered automatically: as soon as you spin up a new node, its Kontena Agent will automatically connect to the Kontena Master, and it will keep this network connection up. So Kontena doesn't require open firewall ports on each node; all the nodes simply need to be able to connect to the Kontena Master and the API that runs on it. It's easy to create nodes as well; there's another create command for nodes, and in this case I'm using the DigitalOcean plugin again, with my token and the essential details. Let's see how that works out. In this case I'm spinning up two nodes, one after the other. Yeah.

I've created a DrupalCon page on the freistilbox.com website where you'll find a link to my session page for DrupalCon. There you can also download the slides, and there's a link to GitHub, where I've published the demo script that I'm using here, just to avoid typing errors. Of course, the token that's displayed here isn't my real DigitalOcean token. I accidentally hard-coded the DigitalOcean token in the first version of the demo script, but I've changed that to use the environment variable and made another commit on GitHub. So the token thing, of course, as everyone knows, is resolved. No need to look at previous commits.

So that was node number one. Let's get node number two added to the grid and we'll be good to go. By the way, you might have noticed that I have a very confusing accent; that's because I'm a German living in Ireland. I'll just go to the next slide while that is working in the background.
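While that's running, here's a sketch of the grid and node commands just used; the flags follow the DigitalOcean plugin's conventions of the time, and the node names are illustrative:

    # Create a new grid with the minimum initial size and make it the
    # default for all subsequent commands.
    kontena grid create --initial-size 2 demo-grid
    kontena grid use demo-grid

    # Add two nodes. Each agent dials out to the master on its own,
    # so no inbound firewall ports are needed on the nodes.
    kontena digitalocean node create \
      --token "$DO_TOKEN" --region lon1 --size 1gb node-1
    kontena digitalocean node create \
      --token "$DO_TOKEN" --region lon1 --size 1gb node-2

    # Check that both nodes have joined the grid.
    kontena node list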
So we are all good to go, and the first thing we can use now is Kontena services. A service is basically an abstract name for a container image that runs somewhere in our infrastructure. You can define a service in a simple YAML file, and that's where Kontena is very similar to Docker Compose: it simply builds on that syntax. You can define which container image you'd like to use. You can define things like volumes for file storage. You can limit resources, for example how much memory the container is supposed to use. You can define whether the service is in some way connected to another service. You can define environment variables that are going to be used by the container image. And you can define secrets; that's something Kontena adds. Registration happens automatically: as soon as you start a service, it gets registered with the central database, and other services can query it ("I'd like to connect to a service named mysql; whom do I have to ask?").

You also define the deployment strategy, which can be one of three variants: ha, daemon, or random. Random simply takes the container and spins it up on a random node. The high-availability strategy is where you can say, okay, I'd like three instances of this container at any time. And daemon will install the container image on all nodes; that's very handy for infrastructure services, if you'd like, for example, to run a monitoring application on each of your nodes. You simply tell Kontena to deploy it with the daemon strategy, and as soon as you spin up a new node, you'll have another instance of the monitoring service running on it. You can define affinities to other services, so you can say, for example, always run MySQL on the same host as memcached, or something like that. And in order to have high availability, you can even define a port to wait for. So if you spin up a Drupal container, for example, you can say, okay, starting the container isn't quite enough; I expect this container to also respond on port 80, and only if that's the case do I consider this container successfully started. And you can define health checks that will be executed periodically and can, for example, be used for load balancing.

So here's an example of a stateless service. It's very similar to a run-of-the-mill docker run command. You simply say, okay, I'd like to use the latest image for nginx, use that for a service also called nginx, and please map port 80 of the container to port 80 of its host so I can talk to the container. And Kontena will spin that up somewhere. So let's see how that works. I hope we are ready. Here's a list of our two nodes, just to make sure: there are two nodes, both running on DigitalOcean in the London data center. Now we create the service; that didn't spin up a container, it's just the service definition. And with the following command I'll actually deploy a container that will be spun up and can be talked to. That's that, and the CLI tells me it created this container on node one. If I want to see more details about this service, I can use the kontena service show nginx command, and it lists a lot of details, for example, in the lower part, on which nodes the service is running and what the public IP address is. So if you connected to 138 and so on, you'd be talking to our new nginx instance.

With the stateful option, I can spin up another container, and the only difference would be that once Kontena has chosen a node for this service, it'll keep using that node. Even if I shut down the container and spin it up again two days later, I can be sure that it will be created on the same node again, because Kontena expects the container to use a file volume on that node, and if the container were spun up on another node, it would lose all its data, since file data stored in volumes doesn't travel with instances. It's easy to scale a service up, so if I, for example, would like to grow beyond a single nginx instance, I'd simply use this command (that's the public IP address again): now I'm scaling nginx up to two and then automatically showing the details, and what we are going to see is that nginx will now be on two nodes and available on two public IP addresses.
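That nginx walkthrough boils down to a handful of CLI calls. A sketch, assuming the command set of the Kontena version shown:

    # Define the service: name nginx, official image, map container
    # port 80 to host port 80. This only stores the definition.
    kontena service create --ports 80:80 nginx nginx:latest

    # Now actually schedule a container somewhere on the grid.
    kontena service deploy nginx

    # Show details: which node it runs on, its public IP, and so on.
    kontena service show nginx

    # Scale out to two instances on two nodes.
    kontena service scale nginx 2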
Let's stop that right away so we can go on to more interesting things. Things get interesting when you start connecting services, and these groups of connected services are called Kontena stacks. They are sets of services defined in a YAML file, and this YAML file is versioned, so you can update your stack definition and work with different revisions of it. Each stack gets its own subdomain, local to the grid it's running in, which makes addressing parts of the stack quite easy. So let's take a look at, sorry, yeah. So if you have a stack defined in, say, this kontena.yml file here, you can install this stack definition and name it, for example, drupal. Let's take a closer look at that definition; the YAML file will be in the GitHub repository as well.

It starts with a preamble: I've called this stack examples/drupal and gave it version one. Then I list a number of variables that I'll be using throughout this file, and that's quite interesting, because here I'm using the Kontena Vault for secrets management. And it's amazingly simple. I'm defining, in this case, two variables, one called drupal-mysql-root and, a bit below, drupal-mysql-password. Both are of type string, and I can tell Kontena where to get the value for each variable from and where to store it. For where the value comes from, I can give Kontena two alternatives: I prefer getting the value from the Kontena Vault, under the key drupal-mysql-root, but if it's not there, for example because I'm just starting to show people at DrupalCon how this works, it will fall back to a random string with a length of 32 characters. And regardless of where I got the value from, I'll store it in the Vault under the same name. With the second variable I'm doing exactly the same. So that makes things very easy. I simply tell Kontena: I'd like to define a variable drupal-mysql-root; you should be able to find it in the Vault under the same name; if not, simply generate a value, and then make sure to store it in the Vault so we can get it next time.

The services section is where we use these variables. I'm defining a service drupal using an image; I'm using the official Drupal image in version 8.2. I define it as stateful, because I'm using a number of volumes. I expose port 80. I define an environment variable named MYSQL_USER with the value drupal. And I'm using a number of secrets, in this case the drupal-mysql-password secret, which I store in the environment variable MYSQL_PASSWORD. And then a number of volumes, so the more important parts of my application are exposed in separate volumes. Then I add the other necessary part, the mysql service with the MariaDB image, equally stateful, with a few environment variables as well; these have to be in sync with the variables I'm using in drupal, otherwise the two containers won't be able to talk to each other, plus the two secrets that are necessary to spin up the required databases.
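As a rough reconstruction of the kontena.yml being shown on screen (the shape follows Kontena's stack file format of the time; key names, volume paths, and values are illustrative):

    # kontena.yml (sketch): Vault-backed variables plus the two core services.
    stack: examples/drupal
    version: '1'
    variables:
      drupal-mysql-root:
        type: string
        from:
          vault: drupal-mysql-root       # prefer the stored secret...
          random_string: 32              # ...or generate one on first run
        to:
          vault: drupal-mysql-root       # either way, store it back
      drupal-mysql-password:
        type: string
        from:
          vault: drupal-mysql-password
          random_string: 32
        to:
          vault: drupal-mysql-password
    services:
      drupal:
        image: drupal:8.2
        stateful: true
        ports:
          - 80:80
        links:
          - mysql
        environment:
          - MYSQL_USER=drupal
        secrets:
          - secret: drupal-mysql-password
            name: MYSQL_PASSWORD         # injected as an env variable
            type: env
        volumes:
          - /var/www/html/modules
          - /var/www/html/sites
          - /var/www/html/themes
      mysql:
        image: mariadb:latest
        stateful: true
        environment:
          - MYSQL_USER=drupal
          - MYSQL_DATABASE=drupal
        secrets:
          - secret: drupal-mysql-root
            name: MYSQL_ROOT_PASSWORD
            type: env
          - secret: drupal-mysql-password
            name: MYSQL_PASSWORD
            type: env
        volumes:
          - /var/lib/mysql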
Let's see. Okay, let's hop right to load balancing. If I'd like to run more than one Drupal container and have load balancing, there are only a few minor changes I need to make to my definition. I add a new service called drupal-lb that uses the load balancer image from Kontena and uses port 80 as well. I add an instances value to drupal; in this case I'm spinning up two instances. And then I have to define a few additional environment variables that tell the load balancer container how to automatically connect to my Drupal containers, and that happens because I link these two services via the links statement. The images I'm using are official Docker images that will be pulled from the Docker Hub; they are public.

So let's see how that works. First of all, I'm deploying my stack definition, and since I'm using the deploy option of the install command, it will not only store the stack definition, it also goes right to deploying it. That's why it started to deploy the load balancer. Now it's deploying the MySQL service, and now it's deploying my Drupal containers.

That's a great question; I've actually left that out so far. The question is: if I start up a few containers, they somehow get added to the load balancer, and if at a later time I spin up another one, will it also automatically be added to the load balancing? The answer is yes. What this load balancer image does is go through all containers that link to it, look for these special environment variables that say how to behave, and then do it. Kontena is using HAProxy for that, and HAProxy is actually talking to etcd. As soon as a new container spins up, linking to the load balancer, the load balancer gets all the necessary details from etcd, adds them to its configuration, reloads the configuration, and within a fraction of a second the new container is in the load balancing. And if you have defined health checks, they will also be applied; so, for example, if a container doesn't answer for five seconds, it will automatically be removed from the load balancing. That's quite ingenious. And it's really as simple as it looks; there's nothing I have done in the background to make this work.

Now that everything is deployed, we have a Drupal cluster of, in this case, two containers behind a load balancer, and both Drupal containers automatically talk to the MySQL instance. And if you like, here are the different services: we see two drupal instances, one mysql, and two drupal-lb instances. I think I've installed the load balancer with the daemon strategy, so it's automatically spun up on every node I'll ever have; that way I can even make my load balancer redundant. And if you connect to one of these public IP addresses, which are the IP addresses of the load balancers, you'll be talking to a Drupal 8.2. Of course, it's a newly installed Drupal.
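Here's roughly what those load-balancing changes look like in the stack file; again a sketch, with the Kontena LB environment variables as documented at the time and an illustrative health check:

    # Additions to kontena.yml (sketch): a load balancer in front of
    # two Drupal instances.
    services:
      drupal-lb:
        image: kontena/lb:latest
        ports:
          - 80:80
        deploy:
          strategy: daemon               # one LB instance on every node
      drupal:
        image: drupal:8.2
        instances: 2                     # scale out to two containers
        links:
          - drupal-lb                    # registers with the LB via etcd
          - mysql
        environment:
          - KONTENA_LB_INTERNAL_PORT=80  # port the LB forwards traffic to
          - KONTENA_LB_VIRTUAL_HOSTS=www.example.com
        health_check:
          protocol: http
          port: 80
          uri: /
          interval: 30                   # seconds between checks
          timeout: 5                     # drop the container if no answer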
Could you repeat the question, please? The question was whether it's the same file system on all the nodes, so that I could spin up my Drupal containers anywhere I want and find the same files. At the moment I'm simply deploying them as stateful, but that won't help me, because files written by one node won't be accessible by the other node. So this setup isn't actually production-ready. What you can do is use a central file system, NFS or some other distributed file system, and then mount that as volumes. Kontena won't know anything about it; it'll simply use the paths you defined in the YAML file, and you'll have to manage the shared file system yourself. I'm pretty sure that sometime this year Kontena, the company, will add a shared file system or volume migration or something like that. You can use these container images to build a replication setup with MySQL, but you'll have to do that yourself; Kontena doesn't provide you with anything like that.

So, just a quick look at SSL, because that's equally easy. You can use Let's Encrypt. You simply say, okay, I'd like to register at Let's Encrypt with my email address, and I'd like to be authorized to get a certificate for www.example.com. I'll get the necessary DNS authentication details that I have to add in a DNS record, and as soon as that's propagated, I can use kontena certificate get and I'll get my certificate. That certificate will automatically be stored in the Kontena Vault, and it can then be used with the Kontena load balancer simply by adding a few additional settings: now the load balancer should also be listening on 443, and I'm using the SSL_CERTS environment variable, which is automatically picked up by the load balancer image; in this case I'm using the value behind LE_CERTIFICATE_<domain name>_BUNDLE. That's the default name that Kontena stores the certificate under, and I'm good to go. (There's a sketch of this whole sequence at the very end.) It is. That's why, so, Let's Encrypt is free; you can use it out of the box without paying anything, and you'll get a valid certificate. And I don't think Let's Encrypt supports wildcards; you'll have to create a certificate for each distinct domain.

So the projector is giving up. Looks like it. But just to summarize: Kontena really is simple. It's inexpensive: open source, and Let's Encrypt. It's full-featured, except for a shared file system. It's production-ready. It's secure. It's flexible. And most of all, it's really worth a try, because it doesn't cost you anything in terms of money, and within one or two hours you'll have a feeling whether Kontena is something for you. So if you'd like to take another look at my slides, simply go to www.freistilbox.com/drupalcon.html, and there will be links to everything.
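As promised, a sketch of that Let's Encrypt sequence. The subcommand names follow the talk's description and may differ slightly from the exact CLI of the time; example.com stands in for a real domain:

    # Register an account with Let's Encrypt (one-time).
    kontena certificate register me@example.com

    # Request authorization: this prints a DNS challenge record
    # that you add to the domain's DNS zone.
    kontena certificate authorize www.example.com

    # Once the DNS change has propagated, fetch the certificate.
    # It is stored in the Kontena Vault, by default under a key like
    # LE_CERTIFICATE_www_example_com_BUNDLE.
    kontena certificate get www.example.com

And the corresponding load balancer additions in the stack file, following the same sketch conventions as above:

    # kontena.yml additions (sketch): terminate TLS at the load balancer.
    services:
      drupal-lb:
        image: kontena/lb:latest
        ports:
          - 80:80
          - 443:443                      # now also listening for HTTPS
        secrets:
          - secret: LE_CERTIFICATE_www_example_com_BUNDLE
            name: SSL_CERTS              # env variable the LB image watches
            type: env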