Hi, I'm Ross Gardler, a Program Manager on the Azure Compute Team. You know, the portability, small size, and rapid start-up and tear-down times of containers are bringing agility and flexibility to all phases of the development and operations lifecycle. That's what I want to talk about today. After a short introduction, I'll dive straight into a demo. I'll show you how a developer might use containers locally to provide an approximation of their production environment right there on their dev machine. I'll make some edits to the application, test them in the simulated environment, and then push those changes to the development repository so that our co-developers can work with the latest code. We'll then look at how the Azure Container Service provides an easy way to build and manage a staging or production environment for our containerized application. This environment will provide scalability and availability far beyond that of our single-machine developer environment. By the time we finish discussing this service, we'll see how our developer's changes have gone live into an Azure Container Service cluster. Let's take a look at the process we're going to see here. Like all application development teams, we'll start off with one or more developer workstations. Here, our developers will build and test their code. Once happy with it, they'll push the code to version control. At this point, co-developers on the same team can retrieve those changes to ensure that their own work is aligned. Whenever new code is committed to the master branch of the repository, a continuous integration service pulls those changes and rebuilds the affected containers. Integration tests are run against this new configuration of the application, and when the tests pass, the new containers are shared via a Docker registry with other teams.
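The CI steps just described might be sketched as a short shell sequence. This is only an illustration, not the pipeline used in the demo; the image name, test script, and directory layout are all assumptions.

```shell
# Hypothetical CI steps for one team's container (all names illustrative).
# 1. Pull the latest code that triggered the build.
git pull origin master

# 2. Rebuild the affected container image.
docker build -t myteam/web-frontend:latest ./web

# 3. Run the integration tests against the freshly built image.
docker run --rm myteam/web-frontend:latest ./run-tests.sh

# 4. On success, share the image with other teams via a Docker registry.
docker push myteam/web-frontend:latest
```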
This means that our development teams are always working with the latest development versions of containers produced by other teams, whilst also working with the latest development code from within their own team. As a result, any bugs introduced are identified as early as possible and therefore resolved more quickly. When new versions of containers are made available in the Docker registry, a continuous delivery system will manage the rollout to staging and production environments. This may be a fully automated process, or it could involve control steps in which operations teams verify the deployment at key stages of the process. Now that we understand what the workflow looks like, let's take a look at it in practice. Here's our demo application. It's a really simple Hello World application, and it consists of three containers. The first container is a PHP container, and it returns this part of the page. Notice it says "Hello Connect World" from the PHP web application. It tells you the host that it's running on and a title here. And then this little bit at the bottom: the PHP app calls a Java REST API, which responds with the text that you see here. So it's a really simple application. The third container is a load balancer in front of the web application, and we'll see that in action in a moment. I want to draw your attention to the URL here. We're running on an instance of the Azure Container Service. This particular instance has three virtual machines in it, and we'll look at that in more detail very shortly. So first of all, let's go to our development machine and see what that looks like. I'm a Linux guy, so here I am on my Linux development machine. On the bottom half of the screen down here, you can see the three containers that are running. We have the load balancer I mentioned earlier on. We have the web front end; that's the PHP part of the application. And we have the Java REST backend; that's the Java part, obviously.
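A three-container application like this one would typically be described in a Compose file. The fragment below is a sketch in the single-file Compose format of that era, assuming service names, images, and build paths; the actual demo repository on GitHub may differ.

```yaml
# docker-compose-dev.yml (illustrative sketch; names and paths assumed)
lb:
  image: haproxy        # load balancer in front of the web containers
  links:
    - web
  ports:
    - "80:80"
web:
  build: ./web          # PHP front end that renders the "Hello ... World" page
  links:
    - rest
rest:
  build: ./rest         # Java REST API called by the PHP front end
```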
On the top half here, we have the PHP page that we're returning, and we're just going to make a simple edit: we're going to remove the word "Connect", so it just says "Hello World", making the application more generic. We'll save that, drop out to the console, and take a look at what it looks like. I'm going to use a text-based browser; it's just very convenient when you're working in a text environment. And I'm going to look at the version of this application that's running locally. You'll see that we have that change: the word "Connect" has disappeared from the title up here. So that looks good. Normally we'd run some tests at this point. In the interest of time I'm not going to do that here, but we do have tests available in this demo application, which is available on GitHub if anybody wants to have a look at it. One other thing that we want to do here is make sure that none of our changes have broken the application in terms of performance, so we're going to scale up the application. Using the docker-compose command with the YAML configuration file called docker-compose-dev, we're going to scale the web container up to three instances. What happens here is that we start two more containers; we already had one, so we'll start two more. This takes just a few moments, and when it's complete, we'll have three containers running. You can see that two new containers have appeared at the bottom here to show that they're running. Now we'll use the Links browser to go back again. It looks just the same and seems to be working. Let's check that we're going through the load balancer. If I refresh, keep your eye on the host name here. You'll see that it changes to the different hosts; there are three of them, rotating round robin through the load balancer. So that all works great. Let's commit that change now.
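The scaling step above can be sketched as a docker-compose invocation. The file and service names are assumptions based on what's said in the demo, and this uses the `scale` subcommand from the Compose versions of that era.

```shell
# Scale the 'web' service to three containers using the dev config file
# (file and service names assumed from the narration).
docker-compose -f docker-compose-dev.yml scale web=3

# List the running containers to confirm the two new instances appeared.
docker-compose -f docker-compose-dev.yml ps
```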
Let's make that change available to our development team. This is backed by git, so I'm going to do a git commit with the message "trigger builds for demo" and then push that to the repository. And there we go: we've now pushed our changes to the rest of our development team, who can now start working against the latest code. So Docker in a development environment is great. It allows us to build a closer-to-production application on our laptops or dev machines. However, in reality, there's only so far we can go on a single machine. What we really need is a staging environment that's even closer to production. Such an environment will span multiple hosts, possibly multiple geographic regions. We'll need redundancy and high availability, and we'll need the ability to scale both the application components and the underlying hardware on which those components are running. All of this needs to be managed, and this is where the Azure Container Service comes in. This is a forthcoming service that allows you to quickly create a cluster of Docker hosts with a preconfigured set of masters for managing the applications running on those hosts. Azure Container Service includes tools such as Docker Swarm and Apache Mesos to orchestrate your applications, while your underlying hardware is managed through the Azure Resource Manager tooling. Let's take a look at the Azure Container Service instance that we're using for our application. What we're looking at here is the Apache Mesos UI. This is running on a Linux virtual machine again; this time I have a GUI display, so I'm able to use a full browser. This is inside the Azure Container Service, and the Mesos interface here is showing us what's happening on that container service. The first thing to notice is that we have three agents available. These are each virtual machines that are available for us to deploy applications to within this cluster.
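The commit-and-push step might look like the following. The file name, branch, and remote are assumptions; only the commit message comes from the demo itself.

```shell
# Stage the edited page, commit, and push so co-developers (and the CI
# service watching master) pick up the change. File/branch names assumed.
git add index.php
git commit -m "trigger builds for demo"
git push origin master
```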
If I scroll down a little, you can see the resources that gives us: three CPUs and 2.5 GB of memory, so these are just small virtual machines. We're currently using 1.2 CPUs and 1.5 GB of memory. And what we're using them for, you can see over here in these active tasks, and you'll recognize them: the load balancer, the web front end, and the REST API that we were talking about earlier. But in this instance, they're running on the container service. Those tasks are being managed by a project called Marathon, and here you can see, again, the three applications being managed, the resources each one uses, and the number of instances of each that we have running. So if we wanted to scale up like we did on the development machine, we can click through to the web application, click Scale, tell it that we want, for example, three instances, and then click OK. In the interest of time, I won't do that now, but it takes just a few moments to create the new instances, just like it did on the dev machine. You can also do this through a REST API: all of this information, and the changes you would want to make, are available through a REST API. But what I promised was that we would see our application automatically deploy to this cluster. So let's look at how we've done that. Here we have a project called Jenkins. Jenkins is a continuous integration service; again, it's an open source project, like everything else that we use here. You can see that we have three jobs here. We have the integration tests, and this job is triggered when it sees a change in the code in the Git repository. That runs, and if those tests pass, the load test is run to make sure that we've not broken performance. And if that passes, deploy-to-staging is run, which actually pushes the new containers into our staging cluster. So if I go to this tab, we can see the application.
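The REST-based scaling just mentioned can be sketched as a call to Marathon's v2 API, which accepts a PUT on an application with a new instance count. The hostname and app id below are assumptions for illustration; Marathon listens on port 8080 by default.

```shell
# Scale the 'web' application to three instances via Marathon's REST API
# (hostname and app id are placeholders, not from the demo).
curl -X PUT http://marathon.example.com:8080/v2/apps/web \
  -H "Content-Type: application/json" \
  -d '{"instances": 3}'
```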
And this is the same version that we had running earlier. You'll note that the URL is the same here, and that we still have "Hello Connect World". But if I now refresh, we see that the word "Connect" has gone. So we've seen the developer's changes go through version control, through the Docker Hub, where they've been made available to the rest of the development environment, and then on to the staging environment, which in this case is run by the Azure Container Service. So there we have it: an end-to-end deployment using Docker and the Azure Container Service. Thanks very much, and enjoy the rest of the content.