Hi. Welcome, everyone. My name is Sergei Shishkin. I work for Toralytics, a data science company in Singapore and Zurich. We've been using Docker for quite a while, but mostly for small things, and recently I've started using it more for deployment. Previously we did a lot with Ansible, and we still use Ansible extensively, but more and more I find ways where I, as a developer and engineer rather than an operations person, can build something that is easy to deploy and still complex enough to run in a cluster environment, not just as a small single application. And I discovered for myself a tool beyond the single Docker executable: Docker Machine. I'm not sure how many of you are familiar with it. Did anyone try to run Docker on Windows or Mac before Docker for Mac came out? Yeah, then you probably used Docker Machine. Back then, and still today, there is no native support for Docker outside of the Linux world, so everything outside of Linux has to be virtualized somehow. Previously that meant VirtualBox on either Windows or Mac: you would have a tool that manages a VirtualBox VM, installs Docker in it, and gives you a somewhat seamless integration, but you still have to care about IP addresses and so on, and networking is a huge pain in that scenario.

Then Docker for Mac and Docker for Windows came out, and I thought: now I don't need Docker Machine anymore, I can just use the Docker executable and I'm fine. And it is, as promised, a very nice product. But then I realized: if I want to run my application locally on my Mac, test it in a Docker environment, and then deploy it somewhere, how do I do that? The most straightforward way for us, at least with Ansible deployments, was to provision our servers with Ansible, and have Ansible be responsible for running the Docker daemon, pulling the proper images, and starting the Docker applications. We don't actually use Swarm yet, or any kind of Docker cluster management, unfortunately, but I look forward to hearing about that and using it as well. And Docker Machine seemed quite interesting. So I'll demonstrate how you can use Docker Machine in your workflows, and a bit of Docker Compose as well.

Who is familiar with Docker Compose? Who is not familiar with Docker Compose? Okay, I see. Docker Compose is an additional tool in the Docker toolbox, a separate tool written in Python that automates things around Docker. It helps you build applications out of multiple services that are composed together; that's why it's called Compose. It takes care of linking containers in a virtual Docker network: it creates a network for your application and also creates volumes for your containers under the hood. And even if you don't need multiple containers, even if you have just one container that you want to run locally or remotely, it's still a great way to get rid of those huge docker command lines where you specify all the flags, by putting them into a Docker Compose file instead.

A basic Docker Compose file looks like this. You specify your services; in this case there are two, web and redis. The web service is built from a Dockerfile in the local directory, which is what the dot in the build key specifies. It maps port 5000 in the container to port 5000 on the host machine, it maps volumes, actually two of them, and it links to the redis container. The redis service uses the public Redis image, the latest version. And that's all you need to have this web application using Redis.
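For reference, the file on that slide is roughly the following; this is reconstructed from the description, so the volume paths and image tag are placeholders:

    web:
      build: .
      ports:
        - "5000:5000"
      volumes:
        - .:/code               # project source mounted into the container (placeholder path)
        - logvolume01:/var/log  # named volume for logs (placeholder name)
      links:
        - redis
    redis:
      image: redis:latest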
You start it with Docker Compose, and you have Redis and you have the web application, and they can talk to each other. Redis is not exposed outside of this specific Docker Compose network, because for each application Docker Compose creates a separate network, and unless you expose ports, nothing is reachable from outside. But every container in the composition can still talk to every other container, which is very nice.

So what we'll take is the Docker Compose example application, which is a somewhat involved example: a voting app where you can vote for dogs or cats. It has a voting service, a Python web application which accepts votes and publishes them to Redis. Then there is Redis, as a standard Redis container. Then there is a worker service, written in .NET, which listens for Redis updates and writes the results into Postgres. And then there is a Node.js application which polls the Postgres database and displays the results. It's a bit reactive, so it actually works almost in real time. But I didn't look into the source code much; that's what Docker is for.

This is what the Docker Compose file for this particular application looks like. There is the vote application, again Python, built from source, with a volume and ports. There is Redis, which exposes a port, although that's not strictly necessary; if you omitted that line, everything would still work fine. There is the worker, there is the database, and the result application is the Node.js one.

I've cloned the application here, and the normal way you use Docker Compose is by just saying docker-compose up. It will figure out everything it needs, pull the images it needs, build the ones defined from source, recreate anything that changed, run them, and you see this. If you run it with an interactive terminal, so without the -d flag, without detached mode, it attaches to the logs, and on the left you see which log line comes from which service. Sometimes the lines can get interleaved, but unless you have ASCII art like Redis prints, it doesn't really make a difference.

So here is the application running. Let's go and check how it looks. Yes, it was port 5000. So we have this application, we can vote for cats and dogs, and on port 5001 is the result. You can see that if I change my mind, it updates automatically. Very simple stuff.

Now, I'm an enterprise developer and I want this deployed in the cloud. How do I do that? Here's one way. Docker has a detached architecture: the Docker daemon exposes a REST API, which the Docker command line is actually talking to. The docker executable you run on your command line is really a REST client which sends commands to a specific server, and the server executes the commands and sends the results back. This architecture allows for very easy remote Docker management, and that's exactly what Docker Machine builds on.
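As a side note on that architecture, you can see that the CLI is just a REST client by calling the daemon's API directly; a minimal sketch, assuming the default socket location on a Linux host or Docker for Mac:

    # List running containers by calling the daemon's REST API directly,
    # bypassing the docker CLI (which does essentially the same thing)
    curl --unix-socket /var/run/docker.sock http://localhost/containers/json

The same API can also be served over TCP with TLS, which is the remote case we look at next.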
Now, I have an EC2 instance running in the Amazon cloud. There is nothing AWS-specific about this; Vincent will talk about how the AWS-specific integration is done. For my purposes, all I need is some remote server with an SSH connection, and this is the SSH connection here. Let me stop this local instance and show you what I have in docker info.

If you look carefully, you see that in this local case my operating system is Alpine Linux, which is the host operating system that the small hypervisor in Docker for Mac runs. That's what happens when I run it locally. Now this is a totally different case, because I've already created a Docker machine here. Let me show you what I have: there is a test Docker host which is running, and I created it the following way. Docker Machine provides provisioning of Docker hosts on remote machines, and it has several providers. One of them is generic; you see "driver generic" here. For that provider, all that's necessary is to specify the IP address, or in this case a DNS name, provide the SSH user, provide the SSH key, and name the machine. Then it goes and creates it. I'll spare you the time of watching it do that; I did it five minutes ago.

Now I can set up the environment I need to use this Docker instance. If you run docker-machine env with the name of the machine, it prints all the exports that are needed, and if I evaluate them, my command line is now able to talk to the remote Docker daemon. So I do that. I guess it's still running; sometimes it can be a bit flaky. But if I run docker info, you see that the operating system there is Ubuntu, which is the AWS instance I created. Now I can run the normal Docker commands here, see which images I have there, and it's really the same Docker experience you have locally, just against a remote machine.

But we're interested in Docker Compose, and in the same way I can docker-compose up this application in the cloud, or on any remote machine. I hope it works. Yeah, I think the ports should be open. Oh, sorry, yeah, exactly, I tested it at home and had it locked down for security reasons. Oh yeah, now it works. No, no, it's just the firewall on AWS, the security group. That was done by me, through the AWS console; you could also do it with the AWS command line. It's not done by Docker Machine; it's nothing Docker-specific or Docker-related. So, same stuff here: go to port 5001, and there are the results.

Beyond that, let me stop this. Docker Machine also supports regenerating certificates. When you create a machine, it generates TLS certificates for the port the Docker daemon listens on, the one the REST API goes through. The connection is secured, and the client is authenticated by a certificate as well, which is why regeneration is sometimes needed. So it's not the case that anyone who knows the address of your machine can talk to the Docker daemon on it. You have one active Docker machine per terminal session, which is fairly obvious. And when you do docker-machine create, you also see a lot of options around Swarm. I didn't test that; I hope it works, and it would be really interesting.

So now, how I see our deployment strategy evolving is that we will still use something like Ansible to provide the foundation of the infrastructure, to install the right patch level of the operating system and everything like that. Then, when a new machine comes into the fleet, it gets provisioned by something like Docker Machine and can join a Swarm, and from there on we can have automation in continuous integration or continuous deployment, where Docker itself is used to publish and update applications.
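To recap the Docker Machine workflow from the demo, the commands look roughly like this; the machine name, key path, and address are placeholders:

    # Provision an existing remote server (generic driver: any host reachable over SSH)
    docker-machine create \
      --driver generic \
      --generic-ip-address ec2-xx-xx-xx-xx.compute.amazonaws.com \
      --generic-ssh-user ubuntu \
      --generic-ssh-key ~/.ssh/aws-key.pem \
      aws-test

    # Print the exports that are needed (DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH, ...)
    docker-machine env aws-test

    # Point the local CLI at the remote daemon for this shell session
    eval $(docker-machine env aws-test)

    docker info           # now reports the remote Ubuntu host
    docker-compose up -d  # the composition runs on the remote machine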
If you're interested in Docker Compose, it's also a very powerful tool. It gives you basically the same command set as plain Docker, but without the low-level stuff: you're not dealing with things like volumes and networks anymore, because you specified them in your composition file. You're talking about services and applications, and you manage them with this command set.

All right, that's all I have. Do you have any questions? Oh, before we go there: we are doing some interesting human behaviour analytics and we are hiring, so if nothing I've just explained was new to you and you know even more, you should definitely talk to me. Yeah, questions.

How do you aggregate logs from different containers? There is the standard Docker log collection from standard out, and that gives you the container logs, but what about application logs? Each application scales differently, so in that case I need a centralized repository where I can collect application logs, not container logs.

Well, if your application logs to standard out, then your application logs simply become the container logs. And usually a container shouldn't have more than one process anyway, so no more than one application; there shouldn't be anything beyond your application running in that particular container. That's one way. If you want something fancier, you might look into something like Graylog or ELK, where you can deliver logs over UDP or TCP, and that works the same way from containers. Probably the least preferable option, I would say, is to mount a volume and log into it, because then you have to manage all those volumes and log files.

Maybe I can add something. In Docker there is the concept of logging plugins, logging drivers. For what you mentioned about Graylog, there is a driver: you run a Graylog server, and if you tell Docker that your container should use that driver, the engine wires the container up to it, so the logs automatically get sent there. Your logs then go through a log forwarder and end up centralized and tagged however you want.

One of the trade-offs I see is that if you log to standard out, all your logs become plain text: you just have lines, Docker takes those lines and forwards them, and on the Graylog side you have to somehow parse them back. If instead you use a library that speaks the Graylog data format, you can log structured events from your application. In that case I would go with the networking option, where you send them directly to the collector from within the container.
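For the logging-driver option mentioned above, a minimal sketch of what it can look like in a Compose version 2 file, assuming a Graylog server reachable at graylog.example.com with a GELF UDP input on the default port:

    version: '2'
    services:
      vote:
        build: ./vote
        logging:
          driver: gelf      # forward the container's stdout/stderr as GELF messages
          options:
            gelf-address: "udp://graylog.example.com:12201"
            tag: "vote"     # tag to distinguish this service's logs

The same thing can be done for a single container with docker run --log-driver=gelf --log-opt gelf-address=udp://... instead of Compose.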
Any other questions about Docker Machine, Docker Compose, anything you want to see?

Before Docker for Mac came out, Docker Machine was the way to use Docker in non-Linux environments. Docker Machine has different providers, machine drivers; these are all the drivers it supports, and one of them is VirtualBox. You would just run docker-machine create with the VirtualBox driver, and that still works.

After that, do you have a greater level of control than if you were using the Docker command line? No. After that, your Docker command line is pointing at the REST API hosted in the VirtualBox VM that Docker Machine provisioned. But then you run things and it works; it's actually a very nice solution. You do run into issues, though: for example, you read some guide on how to use Docker, it says run docker-compose up and go to localhost port 5000, and it's not on localhost, because it's on the IP address of your virtual machine. So you need to be careful with that, and networking is a bit more complicated in that scenario.

What else do you have to do? We saw that you need to open the port; what else, maybe sysadmin work, is needed to get a working setup? On the port side, 2376 is what the Docker daemon listens on, so you need to open that when you provision with Docker Machine; everything else Docker Machine takes care of. You can read the generic driver documentation to see which operating systems are supported; not all of them are, but most are. Docker Machine will even install Docker on your host. So really, I just created a bare-bones Ubuntu instance, and that's it, nothing more.

Normally you put one application per instance, but in this case you put several applications on one instance? I wouldn't say you necessarily put one application per instance. And again, things like Swarm and Kubernetes and all those clustering solutions are about making containers portable, so it doesn't really matter on which instance they run. If an instance has enough memory, why not put more containers, or fewer, on it? It's one service per container, but you can put as many containers as you like on one instance.

Docker Compose has a scale command, so it can run more than one instance per service. How does that work with the web app? Does it do load balancing? How does it know which instance to forward a request to, or do you have to do that manually? Yeah, that's a great question. You have to do it manually, in the sense that Docker, as far as my understanding goes, doesn't give you anything for that out of the box. What Docker Compose will do is just create more container instances; they have their own host names and expose their own ports, which is why you have to be careful about how you map ports on the host, or whether you map them at all. You can use solutions like HAProxy; I even saw one that, when it runs in Docker and you tell it which containers to monitor, can resolve them by itself.

In the legacy mode of Docker Compose, with the legacy Docker networking, there is no service discovery or anything like that. But in the newer Compose file format, version 2, when you create networks, the Docker Engine actually exposes a DNS resolver inside the container, at 127.0.0.11 if I remember correctly. It means that if you connect a container to a network and you scale the service out, all of those containers can be found via DNS round robin. So with a Docker Compose stack, even locally, you can create a full load-balancing setup with DNS round robin on Docker networks. But you need to specify Compose file version 2 and define the networks explicitly; your containers should not connect to the default Docker bridge but to those networks, and then you can use the networks to isolate the layers. And you get DNS round robin there.
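A minimal sketch of what that can look like, assuming a version 2 file with an nginx container acting as the load balancer; the service names and images here are illustrative, not from the demo:

    version: '2'

    services:
      app:
        build: .
        networks:
          - backend

      lb:
        image: nginx
        ports:
          - "80:80"
        networks:
          - backend

    networks:
      backend:    # user-defined network, so the embedded DNS resolver is available

After docker-compose up -d and docker-compose scale app=3, the name "app" resolves inside the lb container to all three app containers in round-robin order, so an nginx proxy_pass or upstream pointing at "app" spreads requests across them (nginx resolves the name when it loads its configuration, so reload it after scaling).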
For example, sometimes I want to recreate a complex situation from Amazon, where I have an ELB doing TLS termination, then my Node.js server with an nginx load balancer or reverse proxy in front of it, where I have virtual hosts. I can simulate that whole environment with Docker Compose locally, with TLS termination and upstream DNS round robin across hosts. So I'm able to test and change my nginx configuration and see whether it's going to work, for example how the proxy forwards headers. So far I've been able to reproduce very complicated setups just locally on my laptop.

Does this use Swarm, or is it something different? Because as far as I understand, Swarm is for clustering, for making things more elastic. Does it use Swarm? Well, what I showed didn't use Swarm, but Swarm is supported; it's coming in. Again, what you get with Docker Machine is provisioning of a Docker host on a remote machine and then redirecting your local environment so that your Docker command-line tools can talk to that remote Docker daemon. And then all the other tools built on top of Docker work just seamlessly, unless they make assumptions about IP addresses or local volumes or things like that.

I guess we should go to the next talk. There are more questions, and the questions are important, so maybe we can take them after the next one. Okay. Thanks. Thank you.