All right then, let's start on time. Good morning! Docker, right? So: who of you has yet to use Docker in some way? Who's in a kind of trial phase with Docker? Who's using Docker on a regular basis? Okay. And who's not familiar with Docker at all? Okay, so I'll go a little bit into the basics, but not too much.

Docker is a way of deploying software. And if you know what a virtual machine is, you know what Docker is not, because Docker is much more like a software package, the kind of thing we have been installing on our Linux distributions for decades now, only with an additional runtime aspect to it. What some people do is deploy Docker containers with all the software that an application needs. For example, a hypothetical Drupal container would contain the PHP files of Drupal, a web server, the database, and the other things that are necessary to run it, just like we did with virtual machines all the time. But if you run Docker in that way, you're doing it wrong. These single-container setups aren't the way Docker is supposed to run: a Docker container is built for a single application. So in the Drupal scenario, you would have a web server container connected to a database container, and maybe additional containers around that, and these containers would interact and in the end deliver the Drupal website. So what Docker does is this: it's a runtime that can download and start a container image containing the software you'd like to run, and it gives you tools to interconnect containers to build your application stack.

Now, my talk will be about orchestrating Docker, about running multiple containers. Why would we need that? Well, with Docker there is never one single container. It's always things that work together: two containers, 20, 200. And as soon as you get into these multitudes, you need some kind of tool that will help you manage all these containers, and what you get from bare Docker, what you can download from docker.io, won't help you anymore. An infrastructure, especially your production infrastructure, will have containers popping up everywhere and containers going down and having to be replaced. For that you need some kind of tool that can keep an eye on your containers and say: okay, one of my three web server containers just went down for some reason, so I need to spin up another container somewhere to mitigate that. That's what orchestration software does.

Scheduling means picking the right host for each container. For example, you might want to run your database containers on a machine that has SSD drives, is very powerful, and has lots of RAM. On the other hand, maybe you'd like to run a file storage container on a machine that has lots of disk space, because the SSD machine would be too expensive for that. You want to have that automated, so you can say: okay, spin up another database container, and the orchestration software already knows that for database containers it has to pick a machine with SSD drives, or with at least five gigabytes of RAM available, or something like that.

And then you have all these dependencies that I mentioned: your web server container running your Drupal needs to be able to talk to a database container, to a memcache container or a Redis container. So these links need to be established between containers, and for that to happen, my web server container needs to find out where its database is. And since we are constantly spinning up and spinning down containers, that might change.
So my database container might have run on one host yesterday but on a different host today, and that has to be visible to my web server container. There's also the issue of shared secrets: my database container needs to provide a database for Drupal, and for that it needs a username and a password. The database needs to be created on the database container, and the user account needs to be created there too, but my web server container that runs Drupal needs to have these credentials as well.

That's all easy as long as you run Docker on a single machine, for example on your laptop. Because everything is local, it's clear that web server and database run on the same machine, and even the basic Docker tooling gives you things that you can use to link and connect these containers. You simply use environment variables to pass things like secrets to a container, and you simply use the same variable for the database container as for the web server container. So locally, that's not much of a problem. And with Docker Compose, things got even simpler, because what Docker Compose does on top of basic Docker is this: you simply write a YAML file that contains all this information. Spin me up a Drupal container, spin me up a MySQL container, and please use the same password for both.

As soon as you start running Docker on multiple machines, things get complicated. That's where Docker Compose can't help you anymore, and that's where orchestration tools like Kontena, which I'll be talking about, will help you. Kontena say of themselves: "Kontena is an open-source container platform built to maximize developer happiness. Works on any cloud, easy to set up, simple to use." And that's true. Kontena is simple, it's inexpensive, it's full-featured, it's production-ready, it's secure, and it's quite flexible. That's why I think it's the ideal tool to start running Docker on multiple machines.

There are other, well-known competitors like Kubernetes from Google or Mesos, for example, but these are huge things, and that's a bit like this: if I'd like to start, say, baking, and I'm looking up a cupcake recipe, I don't expect the recipe to start with "First, let's build an oven." Sure, that oven will enable you to bake 300 cupcakes a day; well, I only have two children. These bigger orchestration platforms are great if you are running at the scale of Google, but if you just want to start and go beyond what the basic Docker binary gives you, Kontena is exactly the thing in the middle. And the cool thing is that investing time here isn't lost, because the magic sauce is in your Docker images: how they are built, how they are enabled to interact, all these things. You'll be able to run the same images on Kubernetes later, just as you are running them on Kontena today.

Kontena is easy to install. It'll take you, say, 20 minutes to spin up a basic Kontena cluster for the first time if you're using a cloud platform, and it comes with everything that you need to start. If you are already familiar with Docker Compose, for example, you can use the same files, simply add a few Kontena-specific things, and you'll be able to run on multiple hosts.
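As an illustration of that local Compose workflow: this is a minimal sketch, not a file from the talk. The image tags and environment variable names are assumptions; use whatever your images actually expect.

```yaml
# docker-compose.yml: a local Drupal plus MySQL pair sharing one password
version: "2"
services:
  drupal:
    image: drupal:8.2
    ports:
      - "80:80"
    environment:
      MYSQL_USER: drupal
      MYSQL_PASSWORD: s3cret       # same value as below, maintained by hand
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: drupal
      MYSQL_USER: drupal
      MYSQL_PASSWORD: s3cret       # duplicated, since plain Compose has no shared secret store
      MYSQL_ROOT_PASSWORD: sup3rs3cret
```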
As open-source software, Kontena is very inexpensive, and it also supports Let's Encrypt, for example, so even securing your website with SSL will cost you nothing. It has everything that you need to get started. You'll have a private image registry, so you can store your own Docker images locally on the platform. It comes with a load balancer that couldn't be easier to use. It supports what's called service discovery, which means that as soon as I spin up, say, a memcache container somewhere, this central registry has that information; my Drupal container, for example, will get this information and could add the new service on the fly.

It also has a secrets storage where you can keep all your passwords and things like that in a central place, and simply say: okay, dear MySQL container, use this password for the database drupal. And I'll tell my Drupal container: dear Drupal container, use the exact same password. Even if that password changes, since both containers are referencing the same secret, there's never a disparity. The same mechanism that does the service discovery can also act as a key-value store, where you can store arbitrary data from one container and make it available to others; you can use that for your own purposes too, as we do.

It has user authentication and roles that let you express who has access to what. It offers you health checks beyond "is this container running or not". For example, for a web server container you can define an HTTP health check that actually checks not only whether the container is running but also whether the website is actually reachable. These health checks are used both by the scheduling part, which decides whether it needs to spin up a container, and by the load balancer, which has to decide: am I allowed to direct traffic to that container?

It supports stateful applications. For example, MySQL needs to store its data somewhere, and it doesn't make much sense to start another MySQL container somewhere else if the original one goes down, because it won't have the data. So Kontena knows: okay, this is a stateful container, and we can only spin it up where the data has been stored initially. The Drupal container, for example, won't be stateful in the best case, so we can spin it up at any place, as soon as we have shared storage somewhere.

It also does logging and gives you statistics, and it even has an audit trail, so you can see who started a container where, and what happened when something went wrong. It's also secure: containers always run in what's called a grid (I'll come to that later), and these grids are virtual networks with their own private IP addresses, connected by what are basically encrypted VPN connections. That's why it's secure. With a simple OpenVPN client you can connect to these grids to access, say, the private image registry, but without these credentials no one will be able to talk to these containers, except of course if you expose them to the public, which you would do with your Drupal container and probably won't do with your MySQL container.

Kontena is also quite flexible in terms of where you install it. My demo will use DigitalOcean, but you can spin up Kontena on Amazon, on Google, on Microsoft, and you can install it on your own machines simply using their Ubuntu packages. So you can even do hybrid things, like running half of your infrastructure on Amazon and the other half on DigitalOcean if you like, or on premise.
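To make the health check idea concrete, here is a minimal sketch of what such a check could look like in a Kontena service definition. The attribute names (protocol, port, uri, interval and so on) are my recollection of the Kontena docs, so treat them as assumptions:

```yaml
# hypothetical excerpt from a kontena.yml service definition
web:
  image: nginx:latest
  ports:
    - 80:80
  health_check:
    protocol: http      # check over HTTP instead of just "is the process up"
    port: 80
    uri: /              # path that must answer successfully
    interval: 30        # seconds between checks
    timeout: 10
    initial_delay: 20   # give the container time to boot first
```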
Just a few words about me: my name is Jochen, and I'm the founder, CTO, and CEO at freistil IT. We started in 2010, specializing in Drupal hosting; in 2013 we added WordPress, and our hosting platform is called freistilbox. Freistil is the German word for freestyle, but it's pronounced exactly the other way around, and for those who can't remember that, we've recently added another domain: you can simply enter host ng.co in your browser and you'll land on our website as well.

Now let's get into how to use Kontena. The architecture is very simple: you have a central Kontena server that manages everything, and then you have grids of Kontena nodes. Each of these nodes runs an agent software that talks to the server. That's quite convenient, because since the agent connects to the server and keeps that connection open, you don't have to open firewall ports for your nodes; you simply have to be able to connect to the server. Access to the Kontena server, the central part, is managed via OAuth. You can use either the Kontena Cloud, which is run by the company that develops Kontena (Kontena Cloud is basically only an OAuth provider that you can use to log in on the web and pass the OAuth tokens to your Kontena master server; that's all the Kontena Cloud does at the moment, and they are in the process of extending it), or, alternatively, you can use external OAuth providers that you run yourself.

Creating your Kontena server, which you only have to do once, is easy with the Kontena command-line tool. You simply tell Kontena: okay, I'd like to use the DigitalOcean plugin to spin up a master server. I name it kontena-dcl (for DrupalCamp London), use my DigitalOcean API token, choose the DigitalOcean region London 1 with my SSH key, the droplet should have the 1 GB size, and use the Kontena Cloud for authentication. It'll take about, say, five minutes for spinning up a droplet at DigitalOcean and installing the master server, and after that you have your central piece. That's all there is to this process. A first grid named test will be created automatically, and you can create as many grids as you like.

Grids are simply groups of Kontena nodes, which we now have to create. They use an overlay network, a virtual network with its own IP addresses, for the containers to communicate, and of course, if you expose certain services to the public via a well-known port, they'll use the public IP address of the host. If you need to talk directly to the nodes, you'll have access via the VPN: you can simply run the command kontena vpn config, I think, and it'll write out an OpenVPN configuration file that you can import, and you're good to go.

That's how you create another grid, and from that point on, all Kontena commands will act on this grid. So I say: okay, I'd like to create a grid; I'll start with two nodes and can add more later; and I name this grid demo-grid. Now for the nodes. They are discovered automatically: as soon as you add a new node, it'll connect to the master server, so it becomes available and the master server will start using it. And that's how you spin up a node, quite similar to the master server: you'll have to use your DigitalOcean API token again and provide your SSH key and the DigitalOcean details here.
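For reference, the whole bootstrap sequence just described boils down to a handful of CLI calls. This is a sketch from memory of the DigitalOcean plugin commands; the exact flag names and the region/size values are assumptions, so check `kontena --help` before copying anything:

```sh
# install the DigitalOcean plugin once
kontena plugin install digitalocean

# spin up the master server (takes about five minutes)
kontena digitalocean master create \
  --name kontena-dcl \
  --token "$DO_API_TOKEN" \
  --ssh-key ~/.ssh/id_rsa.pub \
  --region lon1 \
  --size 1gb

# create a grid for the demo (initial-size: how many nodes form the initial cluster)
kontena grid create --initial-size 2 demo-grid

# add a node to the currently selected grid; repeat for node 2
kontena digitalocean node create \
  --token "$DO_API_TOKEN" \
  --ssh-key ~/.ssh/id_rsa.pub \
  --region lon1 \
  --size 1gb \
  kontena-demo-node-1

# set up VPN access to the grid and write out an OpenVPN config
kontena vpn create
kontena vpn config > demo-grid.ovpn
```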
I've already done that. Let's see if we're still connected... So: there are two nodes, both running on DigitalOcean, and I've called them kontena-demo-node-1 and kontena-demo-node-2.

Now let's get cooking: Kontena services. A service is basically a container that you'd like to spin up. You have to define the image that should be used, an nginx server, say. You can define volumes, meaning storage directories that this container should expose. You can define the resources it gets, namely CPU shares and memory; if you don't want to limit those, it'll just use what's there. You need to define whether it's linked to any other container: are there containers I need to talk to? You define the environment variables that will be used inside the container, and you can provide secrets as environment variables too. As soon as a service is deployed, that is, running, it will automatically register with the master server, and everyone who wants to know will learn that this service has become available.

I just mentioned deployment: you can also define a deployment strategy for each service. Normally, Kontena simply takes the container and runs it where it likes, but you can also say: I'd like to run this container next to another one. For example, if I have two containers that share data volumes, they need to run on the same machine, and they will be deployed accordingly. You can also do what's called daemon deployment, which means the container will be run on every node of your grid. For example, if you have a monitoring service that needs to check all your containers, you'd of course like to run that on every node, so that every one of your containers coming up anywhere will be monitored; you'd probably run that in daemon mode.

You also define the number of instances: for example, in a redundant setup, you'd probably want to run your Drupal web servers in more than one instance. And what's quite nifty is that you can define a port to wait for. For a web server, for example, you can tell Kontena: please wait until port 80 is actually reachable. So if I do an image update, say I have a new Drupal image that I'd like to deploy, every container needs to be stopped and restarted with the new version of the image. Kontena will stop your first Drupal instance, spin it up with the new image, wait until its port 80 is actually available, and only then go to the next Drupal container and destroy and rebuild that one. So you'll have zero downtime.

And that's how you start such a service: you simply say, okay, I'd like to use the nginx image in the latest version to run a container named nginx, and please expose port 80 to the public. That's exactly what we're going to do; I've already prepared something. So now I've started my nginx service... no, I haven't started it yet, I have to find it first. So: I have defined the nginx service, and as you can see, there are zero instances of it at the moment. Let's change that by using the deploy command.
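In CLI terms, the define/deploy/inspect steps I just did look roughly like this. It's a sketch; the argument order and the --ports syntax are my assumptions:

```sh
# define the service: name "nginx", image nginx:latest, port 80 exposed publicly
kontena service create --ports 80:80 nginx nginx:latest

# actually schedule it onto a node in the grid
kontena service deploy nginx

# inspect the result: image, deploy strategy, instances, addresses
kontena service show nginx
```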
Now it'll actually spin up nginx on one of our nodes. It might have to download the latest nginx image; I've already run this before, but if there had been an update of the image, it would have to download it first. So let's see. I ran the command kontena service show nginx, which shows me the details of the service, and I got all the things that are defined by default, because I didn't mention them in the service definition: it'll run with only one instance (scale 1), it'll automatically choose the HA deployment strategy, and it'll expose port 80. And we have one instance of nginx running. This is the private IP; Kontena always uses the 10.81.x.x network for each grid, and since the grids are all separate, the same addresses can be used in every grid. The node that the container is running on has the public IP 46-and-so-on, so on this node I should be able to access nginx. Let's see, it should automatically reconnect. Let me bridge the time: any questions so far?

[Audience question] The number of nodes is simply defined by how many nodes you spin up in the grid, by the number of node create commands you issue.

So that's the public IP; I've just grabbed it out of the service show output. And if I use that IP, it's 46.101.61.39, I get the default page that nginx serves. You could even try that in your own web browser. So that's how easy it is to spin up a container without having to do much. That's what I mean when I say Kontena is really simple: you just have to have an image, which I've taken from the public Docker library, and things are running already.

And that's how you create a stateful service. For example, if I want to run Redis, which stores files on the machine, I'd better define it as stateful so that I don't lose all my Redis data any time the container is spun up somewhere else. You simply add the option that it's stateful. I don't expose any port here; that happens automatically if the image provides the port. And I can actually access that server right away, because each grid has its own internal domain name, and the service name is simply prepended to it. So I could, for example, talk to this container using the name redis.demo-grid.kontena.io; it's just composed of the service name, the grid name, and the top-level domain kontena.io. These names are all grid-local, so they can't interfere with each other.

And that's how you scale up a service. I've spun up the service in a single container, and I can simply issue a kontena service scale command. Now I've scaled it to two instances, so it'll have to spin up another instance in addition to the one that's already running. And if I then issue the service show command for nginx, I'll see that I have two instances running on different nodes; if you compare the public IPs, these instances really are on different nodes.

That's that. Now let's do another step. Single services are fine, but to run Drupal, for example, we need several of them: the Drupal service (the PHP part), MySQL, and maybe additional things. That's where Kontena stacks come in, which are basically combinations of services. That's what I mentioned earlier: it's what Docker Compose does locally on your machine, where you can say, okay, I'd like to spin up a Drupal container and a MySQL container, and they need to talk to each other. In the multi-node setup, Kontena does exactly the same without you having to do anything extra.
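Again as a rough CLI sketch (flag names assumed), the stateful service and the scaling step look like this:

```sh
# a stateful service: its data stays pinned to the node it first starts on
kontena service create --stateful redis redis:latest
kontena service deploy redis

# other containers in the grid can reach it via its internal DNS name,
# e.g. redis.demo-grid.kontena.io (service name + grid name + domain)

# scale the nginx service from one instance to two
kontena service scale nginx 2
```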
Stacks are defined by a YAML file, like with Docker Compose. They use their own DNS domain, which is now stack.grid.kontena.io, and the service names get prepended to that. Stacks are versioned, so you can update your stack definition, even store it in a central registry, and use specific versions that you've defined. It's also quite easy to install a stack: here, I simply install a stack named drupal from a YAML file.

Let's take a look at how that file looks. It starts with the name of the stack and the version, and then I define variables that I'll use later. For example, I need the MySQL passwords at several stages, so I simply define my passwords here. What we're doing here is defining a variable named DrupalMySqlRoot, which is a string, and for each variable I can define both where its value comes from and where the value goes. What I do here is offer two alternative sources. The first step is to take a look at the Kontena Vault, where the secrets are stored, and look for a key named DrupalMySqlRoot. If you don't find anything there, because we've never spun up this stack, then simply create a random string of 32 characters. And wherever you got that value from, store it in the Vault under the name DrupalMySqlRoot. We do the same for the database password that Drupal will be using: we'll first need the root password to create the database, and then we'll create a user, using this MySQL password, to access the database.

And then we simply define the services. First, here's the Drupal service. I define which image I'd like to run. Drupal stores files, so it has to be stateful, and it exposes port 80. I'm defining a simple environment variable, the MySQL user named drupal; that's what the container image will be using to access the database. And I take the DrupalMySqlPassword I defined before and assign it to an environment variable named MYSQL_PASSWORD, which will be used by the container. The container by default uses the hostname mysql for the database, and I don't have to do anything about that, because, as you'll soon see, my database service is named mysql, so that hostname becomes available automatically. Then I define a few volumes, because I'd like to have these directories stored outside of the container, so they don't vanish when the container is destroyed, for example.

And, as mentioned, here's the second service: mysql. It needs to be stateful too. I'm defining two environment variables here, drupal as the database name and drupal as the user, so the container image will automatically create a database named drupal and grant access to the user drupal. It'll use the secret DrupalMySqlRoot as the root password to create the database, and then it'll use the secret DrupalMySqlPassword to grant access to the database.

So let's see how that works. Let's remove the nginx service first and then deploy the Drupal stack. First, it deploys a service called drupal-lb, which is the load balancer; I'll come to that in a moment. Now it spins up the mysql service. These are all dependencies of the Drupal service, so they get deployed first. And now, finally, Drupal, which is the web server part, is deployed as well. So now we have three services interacting with each other.
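The stack file I walked through, including the drupal-lb load balancer that I'll explain next, looks roughly like this. It's a reconstruction, not the exact demo file: the image tags, the volume path, and the KONTENA_LB_* variable are assumptions based on the Kontena stack format as I remember it:

```yaml
stack: examples/drupal
version: 0.1.0
variables:
  DrupalMySqlRoot:
    type: string
    from:
      vault: DrupalMySqlRoot      # reuse the secret if it's already in the Vault...
      random_string: 32           # ...otherwise generate a random 32-character string
    to:
      vault: DrupalMySqlRoot      # either way, store the value back in the Vault
  DrupalMySqlPassword:
    type: string
    from:
      vault: DrupalMySqlPassword
      random_string: 32
    to:
      vault: DrupalMySqlPassword
services:
  drupal-lb:
    image: kontena/lb:latest      # the load balancer image provided by Kontena
    ports:
      - 80:80                     # the LB, not Drupal, owns the public port
  drupal:
    image: drupal:8.2
    stateful: true                # sites/default/files lives on the node
    instances: 2
    links:
      - drupal-lb                 # linking is all the LB needs to find its back ends
    environment:
      - MYSQL_USER=drupal
      - KONTENA_LB_INTERNAL_PORT=80
    secrets:
      - secret: DrupalMySqlPassword
        name: MYSQL_PASSWORD
        type: env
    volumes:
      - /var/www/html/sites       # keep Drupal's files outside the container
  mysql:
    image: mysql:5.7
    stateful: true                # the database files stay on one node
    environment:
      - MYSQL_DATABASE=drupal
      - MYSQL_USER=drupal
    secrets:
      - secret: DrupalMySqlRoot
        name: MYSQL_ROOT_PASSWORD
        type: env
      - secret: DrupalMySqlPassword
        name: MYSQL_PASSWORD
        type: env
```

Installing it is then a single command, something like `kontena stack install kontena.yml`, which also triggers the first deployment if I remember correctly.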
Now let's get to the load balancer part; I'll come back to this. Load balancing is normally quite a complex thing, because you have to spin up a load balancing service that needs to know all the back-end nodes, how it's going to distribute requests between these nodes, and so on. Kontena takes all of that away. This is all you have to do to add load balancing to our previous stack definition: you add a new service called drupal-lb, using the LB image provided by Kontena, and expose its port 80 (that port is not exposed by Drupal anymore). Then you say, okay, I'd like two instances of my Drupal container, and you define four additional environment variables that determine how the load balancer should behave. And simply by linking my Drupal container to the drupal-lb container, the drupal-lb image knows where its back ends are, and it configures itself automatically. If I add another Drupal instance, the load balancer will again automatically change its configuration to distribute between the three instances. It couldn't be simpler.

So let's see how that works. Our stack has been deployed and installed. If I list my services now, the nginx service is gone, and we have a drupal service running in two instances, a mysql service running in one instance, and the load balancer running in two instances as well. If I grab the service details of my drupal service, I get the two IP addresses of my nodes; there are only two nodes, so there's not much choice there. And if you access one of these IP addresses, you'll see a newly installed Drupal 8.2. Any one of you can try that IP on your own machine here; let's see. Wasn't that easy? A two-node Drupal cluster, and I didn't do anything extra that you didn't see. The only thing I actually did before the talk was spinning up the master and the two nodes, because that takes about five to seven minutes.

Adding SSL to that, which I haven't done, is easy as well. You simply need to register your email address and have that approved. You request a certificate for, say, www.example.com, and you'll receive a DNS entry that you have to enter in the DNS of your domain, in this case example.com. That authorizes Let's Encrypt to issue you a certificate for that domain. And then you simply say kontena certificate get www.example.com, and it'll get the certificate from Let's Encrypt and store it in the Kontena Vault. Then you simply add one, two, three, four, five, six lines to your load balancer configuration to use this certificate, and switch from port 80 to port 443. And that's it.
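From memory, that Let's Encrypt flow maps to roughly these commands; the subcommand names are my best recollection of the Kontena CLI, so verify them against the documentation:

```sh
# one-time registration of your account email with Let's Encrypt
kontena certificate register admin@example.com

# request a domain authorization; prints a TXT record to add to example.com's DNS
kontena certificate authorize www.example.com

# once the DNS entry is in place, fetch the certificate and store it in the Vault
kontena certificate get www.example.com
```

The load balancer then picks the certificate up from the Vault (via a secret named SSL_CERTS, if I remember the name correctly) and listens on port 443 instead of 80.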
So, in summary: Kontena really is simple. It's easy to install, and it's really the ideal next step after you've taken your first steps with Docker. It's inexpensive, so you can test it any time you like. It has all the features you'll need for the coming weeks and months. It's ready for production. It's secure. It's very flexible in terms of where you set it up. And it's really worth a try. Thanks!

[Audience question about persistent storage] Kontena actually doesn't yet offer persistent storage for containers in the sense that you can store data and that data follows the container around. That's why you need these stateful services: to make sure that once you've stored some data on one of the nodes, the container will be spun up there, always, every time. There are solutions that can do that, but they are much more complex, and there are ways you can work around it. In a way, I cheated: while I spun up two Drupal containers, if one of those containers stored something in sites/default/files, it would only be on that node, and the other node wouldn't see any of those files. I guess one of the simplest workarounds here would be to add a BitTorrent service that simply shares a volume with your Drupal container and syncs this volume in a local BitTorrent network. By spinning up a BitTorrent container alongside every Drupal container and having these BitTorrent nodes talk to each other, files written by one container would automatically be synced to the other containers. Of course, there are more complex solutions as well, like shared file storage and things like that, but I find the BitTorrent idea quite appealing, and it's fast as well, so you could actually use that.

[Audience question about the master going down] You're right: in this configuration I only have one master, and if that goes down, I can't spin up new services or new nodes and things like that, so I'd be impaired. But what the master doesn't do is, for example, the service discovery. For that, Kontena installs an etcd instance in every grid, and as long as that runs, your service discovery will work fine, and your key-value store as well. The Kontena master has quite a simple setup: it's the master software, which is a container by itself, and it uses MongoDB as its persistent storage. Kontena has a short documentation page that explains how to make that highly available: you simply have to have a MongoDB cluster and multiple instances of the master application. Further questions?

[Audience question about autoscaling] Autoscaling, based on what? Ah, like the feature in Amazon Web Services that reacts instantly to a Slashdotting; I know what you mean. Well, the simple answer would be: if you'd like to rebuild Amazon, I wouldn't use Kontena. Kontena doesn't offer anything like that out of the box. You could get creative again and think of something that simply issues kontena node or kontena service scale commands based on, say, incoming requests, the number of Apache workers, or any other metric, but there's nothing built in for that, because things get quite complex when you go that route, especially if you don't want to pre-deploy nodes. Simply issuing a scale command, like I did with nginx earlier, wouldn't be much of a problem. But if you'd like to say, okay, I only have two nodes now, but I need to scale up to five nodes so I can reasonably spin up five Drupal containers, that would need quite a bit of orchestration as well: what happens if a node doesn't come up, what happens if the service on the node doesn't come up, and all these things. Amazon invested quite a bit of engineering time into that problem. So if you have requirements in that region, maybe you should try getting started with Kontena and then move on to something like Kubernetes, for example. Anything else? Thanks for coming!
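To illustrate the "get creative" idea from that answer: a naive scaler could watch a metric and issue the same scale commands a human would. This is purely a sketch; the threshold, the monitoring endpoint, and the instance counts are all made up for illustration, and nothing like this ships with Kontena:

```sh
#!/bin/sh
# naive autoscaler sketch: scale the drupal service up when traffic is high
while true; do
  # hypothetical metric source: requests per second from your own monitoring
  rps=$(curl -s http://monitoring.internal/drupal/rps)

  if [ "${rps:-0}" -gt 100 ]; then
    kontena service scale drupal 3   # same command a human would run
  else
    kontena service scale drupal 2
  fi
  sleep 60
done
```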