Okay, hello everyone. So tonight I'll talk a little bit about using Docker on AWS. First, a little bit about me and my company: you can call me Chris. I'm a software engineer at Pie (pie.co), where we make a chat application for work, and there I work primarily as the back-end DevOps guy. My contact details are there on the slide.

So Pie is a startup with a very small team, about 80 people right now, and we make a chat application for work. It's like a WhatsApp: it does multi-device sync across all your devices, and there's a web application as well. And the best thing is, we build Pie using Pie itself.

So how do we make an application like Pie? We need, well, basically all the regular things you need for a reliable, highly available chat application: failover, and also rapid product iteration with zero-downtime deployments. AWS provides all of these: rapid infrastructure provisioning, load balancing across different availability zones, auto-scaling, and a reliable, performant data store with backups and failover as well.

And this is the architecture we came up with. We make heavy use of AWS features. Our back-end database is Postgres, hosted on RDS. We make use of load balancers, both internal and external, and everything is hosted within a VPC. Our front end is served using CloudFront, backed by Amazon S3. It's a single-page application which calls an API that relies on RDS. We also have a service cluster, which contains all our back-end workers, which process tasks fed to them by a queue system. So we have background workers, a queue system, and basically everything load-balanced across different zones.

And the technology we chose is Docker. So I guess here's just a simple introduction. Docker has been described as a glorified chroot jail, but I think that doesn't really do it justice, because there's more to it.
It does image management and distribution as well. What you do is package your application into a Docker image and push it to a central registry. And this differs from virtual machines in that Docker images are very lightweight, and when you run them you're essentially just running a process which is isolated using chroot, cgroups, and so on. So you get much faster start times and much easier management.

Before, in order to manage your hosts, you needed specialized server provisioning, which is susceptible to configuration drift, and you could have conflicting dependencies, especially if you have diverse workloads across different hosts. When you use Docker, you can actually simplify your infrastructure into homogeneous hosts, with minimal provisioning using cloud-configs: you only need the Docker daemon running on your hosts. And this gives rise to immutable infrastructure. If you need to update, say, your host OS, then you just blow away your machines and spin up new ones.

Docker containers also provide dependency isolation. That means developers can now maintain the application environment themselves and then just deploy to production using Docker containers. Usually in development or staging you have one single host and multiple Docker containers; in production, as seen in the earlier architecture diagram, you have multiple hosts which have different roles. For example, the service containers at the back and the API hosts at the front.

So how do you coordinate your Docker containers across this entire infrastructure? There are several solutions. The one that we use is CoreOS. CoreOS is a specialized distribution of Linux which primarily provides tools for container orchestration. CoreOS comprises these particular tools: etcd, fleet, and systemd. So, fleet, sorry, etcd is used as the synchronization layer.
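To make the "minimal provisioning using cloud-configs" point concrete, a CoreOS host is typically bootstrapped with a cloud-config along these lines. This is an illustrative sketch, not Pie's actual config; the discovery token and the role value are placeholders:

```yaml
#cloud-config
coreos:
  etcd:
    # Each cluster gets its own discovery URL; <token> is a placeholder here.
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  fleet:
    # fleet metadata is how you assign roles to hosts (e.g. api vs. service).
    metadata: role=api
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
```

That is essentially the entire host configuration: everything else ships inside Docker images, which is what makes the hosts homogeneous and disposable.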
And fleet is used for orchestration: it issues commands to systemd to run Docker containers, manage the container lifecycle, and so on. Our approach is to have a central services cluster and worker clusters, so we don't have nearly as many hosts as shown here. The idea is we have a central fleet cluster, and attached to that we have a series of workers which run the main workload. The central cluster provides synchronization and central services such as our queue system, and both clusters can be scaled separately.

All scheduled units are written as systemd units, and these units have fleet-specific metadata. This metadata can be used to designate certain hosts with different roles, for example an API host or a service host, and they will run different kinds of service units. This is an example of the output you get when you list all the workloads in your CoreOS cluster.

We also run a CI/CD pipeline, and the use of Docker has heavily changed the way we do CI/CD. Whenever we push to the Git registry, sorry, whenever we push to GitHub, it triggers a test-and-build cycle on our CI server. We use CircleCI as the integration point, and when CircleCI finishes building the image, it pushes it to the registry. And this is an example of the output. We actually test using a Docker container, to simulate the CoreOS environment as much as possible. So whenever we run tests, we build the Docker container, run the tests inside it, and only then push it to the registry.

So this is the entire process. Whenever we want to kick off a new build, we first send a message in our own Pie chat application. This works using Hubot: we have a bot which receives commands through chat, and it then tells Docker to pull the deploy container from the registry. So everything is driven through chat.
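As a sketch of what those scheduled units look like: below is a hypothetical fleet unit file for an API container (the image name and unit names are illustrative, not Pie's actual files). The `[X-Fleet]` section carries the fleet-specific metadata that pins the unit to hosts with a given role:

```ini
[Unit]
Description=Example API container
After=docker.service
Requires=docker.service

[Service]
# The leading '-' means "ignore failure", e.g. when there is no old container to kill.
ExecStartPre=-/usr/bin/docker kill api
ExecStartPre=-/usr/bin/docker rm api
ExecStartPre=/usr/bin/docker pull registry.example.com/api:latest
ExecStart=/usr/bin/docker run --name api registry.example.com/api:latest
ExecStop=/usr/bin/docker stop api

[X-Fleet]
# Schedule this unit only on machines whose fleet metadata says role=api.
MachineMetadata=role=api
```

You would submit and start such a unit with `fleetctl start api.service`, and `fleetctl list-units` is the command that produces the cluster-wide workload listing mentioned above.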
So we have a deploy system which uses Ansible to run a series of commands. This Ansible container is pulled from the registry, and then we run the entire series of commands: we modify the fleet units, then deploy the new build, and this is synchronized across the entire cluster. The cluster will now pull the new container from the registry and switch the API over to the new version.

So I'd like to show you a bit of how we do it, because this is all just talk, so it's probably better if I show you what actually happens. This is our chat application, and this is a chat room where we interact with Hubot. So what happens when we issue the command? We can see what happens behind the scenes: it's pulling the deploy container, it's running the migration scripts, and you can see the Ansible output. So it already happened. Basically the containers have now been updated, we've switched to the new versions, it runs the post-deploy migration, and that's it. Then it sends a message to the channel to say, okay, it's done.

[Audience] How did you get Hubot into your chat application? Where is the interface to kick off the job?

Okay, our Hubot is actually running inside a Docker container, so it has a connection to the chat server.

[Audience] So it's like a member of the chat room?

Yes. And this is how we coordinate a lot of our different operations as well; we just run them through this robot. Just now you saw all the logs coming through Papertrail; that's how we do our logging setup. For logging and metrics, you have agent containers that hook into the Docker API to collect all the stats and outputs. There are a few projects that facilitate this, for example cAdvisor and Heapster by Google, or you can also use the Datadog and Scout monitoring containers.
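Pie's bot is Hubot, which connects to the chat server like any other room member. Purely as an illustration of the chat-ops pattern demonstrated above, here is a minimal Python sketch of a dispatcher that turns a chat message into a deploy command. Everything here is hypothetical: the command grammar, the registry, the image name, and the playbook naming are invented for the example.

```python
import re

# Matches messages like "deploy api v1.2" (the command grammar is illustrative).
DEPLOY_RE = re.compile(r"^deploy\s+(?P<service>\w+)\s+(?P<tag>[\w.\-]+)$")

def handle_message(text):
    """Turn a chat message into the deploy command a bot would run.

    Returns the argv for a (hypothetical) Ansible-based deploy container,
    or None if the message is not a deploy command.
    """
    m = DEPLOY_RE.match(text.strip())
    if not m:
        return None
    service, tag = m.group("service"), m.group("tag")
    # In production the bot would actually execute this against the Docker
    # daemon; here we only construct the command so the parsing logic is clear.
    return ["docker", "run", "--rm", "registry.example.com/deploy",
            "ansible-playbook", "deploy-%s.yml" % service,
            "-e", "tag=%s" % tag]
```

The appeal of the pattern is that the deploy tooling itself ships as a container, so the bot only needs a Docker socket and a chat connection, and every deploy leaves an audit trail in the chat room.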
Also, newer versions of Docker, 1.6 onwards, add logging drivers that let you redirect container output to, for example, syslog.

We also have a metrics setup. We send metrics from our different containers into an InfluxDB database, and then we point Grafana at it to create graphs. This is what our Grafana dashboard looks like. You can see the deployment from just now shown at the top; yeah, that was the data from the demo.

There are some things that Docker doesn't provide, and these are some of the questions I receive quite frequently. For example, how do you migrate data containers? If you have a container holding data and you move it to a different server, how do you ensure that your data goes there as well? These are things Docker doesn't provide by default, so you have to make use of external software, for example Flocker. There are also certain features that people want, like overlay networking: how do you address each container with its own IP? Projects such as Weave and Flannel let you do that.

And then there's the question of portability: what if we want to switch to, let's say, Amazon ECS? Amazon has a service called EC2 Container Service, so what's the mapping to it? We use CoreOS, with fleet, sorry, not fleet, but etcd, for synchronization; on ECS the synchronization is provided by the AWS API instead. You send commands to the API and it coordinates the cluster for you. The job of fleet is taken over by the ECS agent container, and instead of writing unit files, you write ECS task definitions. So that's how everything maps across.

OK, let's end it. Any questions?

[Audience] Where are your images stored? Do you have a private registry?

Okay, we actually used a private registry, but the issue was that the private registry created quite a lot of issues. Sometimes there were errors like mysterious status codes when trying to push or pull images.
So we switched to the Docker Hub, which was not an ideal solution either, because the Docker Hub is also rather unreliable. So we're actually looking into ways to solve this. Somebody came up with a solution that bypasses the registry completely and just dumps the images onto S3, along with JSON files that let you rebuild the different endpoints, simulating the endpoints of the registry.

On the ECS point: I studied it for a little bit, but I stopped for a while, because I'm still trying to solve the issue of how to do a rolling API deployment with zero downtime. What we do is a little bit unconventional in that we have nginx containers in front of our APIs, and we also have confd, which monitors the etcd datastore. So whenever we do a switch, we spin up an additional container, then switch the upstreams in nginx, and only then spin down the old container. So I'm still working out how to do that with ECS.

[Audience] So you do plan to use ECS?

Yeah. Okay. Alright, thank you very much.