So, good morning everybody. Good afternoon, almost. I almost didn't make it here today. Last night I spent seven hours at SFO; my flight got canceled, I switched terminals, but I made it. I got a Virgin flight and got into my hotel room at 1:30 a.m. last night. Happy to be here though. I've really been looking forward to this, so I was going to drive if I had to. My name is Bruno Terkaly. I'm a principal software engineer at Microsoft. Let me do a quick bio here. By trade I'm more of a software engineer, so let me describe what I do at Microsoft. I'm an O'Reilly author, and I've got my second class coming out; these are long classes, like eight hours. I work with ISVs who are migrating to Azure. The customers I work with would be Docker, GitHub, Red Hat, Mesosphere — you might have heard of these companies. I'm in this space at Microsoft because I'm in Silicon Valley, which is where all these companies are, and I help them bring their software aboard our platform. I also work with the leadership team at Microsoft: once in a while I get introduced to a company that we're thinking about working with, acquiring, or investing in, and I evaluate their software and see if it makes sense for our portfolio. One of the things I'm passionate about, which I'm going to talk about today, is the world of containerization, the world of distributed computing, which I think is the next big thing, in my view. One day we're going to laugh at the fact that we named our computers, that we actually gave them cute little names; it just doesn't scale. We'll talk quite a bit about that at the end. So just by show of hands, how many of you here are actually developers or software engineers yourselves? Okay, about half. How many of you are IT pros or admins? So this is the right place for you. I plan to cover both those topics appropriately. 
So this is one of the classes I've got today. I do a bunch of stuff with Java. I show you how to install Redis, Postgres, MySQL, Mongo, all these different open source packages, and then I show you how to develop software to do your CRUD operations against them. And I've got another one coming up for Java-based web services, as well as containerization. So it's a bigger view of what I'm going to show you today. Okay, excellent. So when we think about today's goals, it's going to be about: what is Azure? What about the portal experience at Azure? We'll take a tour of a data center, which I did recently in person in Dublin, Ireland. Very exciting. I'll talk a little bit about provisioning infrastructure, Linux-based infrastructure. Lots of hands-on. I can never count on the networking when I'm doing these talks, so I've recorded it and it's going to just march right through. It'll be a very visual, fast-paced hour. You will not be bored. I'm not going to just kill you with PowerPoints all day. I'm going to show you things in action here. So I'm ready to go and do that. If it's okay with you, I'm just going to jump right in. So when you think about our data centers, and I'll show you the map of it later, basically you have 24 global data centers. And if you think about cloud computing today, you look at the three big cloud providers: one of them is a retailer, one of them helps you find things on the web, and I'm going to talk to you about the Microsoft data centers here. So notice we have basic infrastructure: compute, storage, and networking. When you think about compute, you think about virtual machines. But really what we're seeing more and more of, and this is an amazingly big phenomenon, is the world of containerization. A lot of people think that Docker invented this, but did you know it's been around for a decade and a half, all the way back to the early Solaris days? And so this is kind of like the brave new world. 
Everyone is doing it, including enterprises in test and dev. We have yet to see the maturity of this in production environments; there are a few thought leaders doing that. But if I were a betting man, I would say in a year or two, at most three, the world is going to be doing containers in production. So you can see the writing on the wall. If I were preparing for the future, I would be thinking distributed workloads, and that's where I'm putting my career. That's of interest to me personally, and that's what I'm going to talk to you about towards the end of this session. You think about storage: there's a lot of ways to store your data, and we don't need to enumerate all of those. Blobs, tables, queues; you have NoSQL, you have SQL relational. That's a big deal, of course; that's why I did a class on it. Networking: Traffic Manager is one product. Let's say you have two data centers and one of them goes down. Wouldn't you want your customers routed to the other data center, where you've replicated to? Or you have a customer who wants to get to your application: what's the fastest data center? Well, Traffic Manager can help you do that. ExpressRoute: you want to hook on-premises to the cloud, but not be on the public web — kind of a direct connection with 10-gigabit throughput. That's ExpressRoute. So that's some of the basic infrastructure, but really, I think the future is more about platform as a service. For a long time, developers have loved to log into their VMs and tweak them and set them up. But the trend, of course, is a more abstracted perspective. And if you think about some of these here, let's take the web apps one there, under web and mobile. That is Apache Tomcat or IIS running as a service, and that's what my class shows. You deploy to that environment, and the Azure Fabric Controller automatically scales it, maintains it, updates it, patches it. You don't sit there and worry about a web farm and load balancers. 
You let the infrastructure do that. So I think the trend is going to be more platform as a service. And specifically, you're going to see orchestration offered as a built-in service. Microsoft now has the Azure Container Service in preview mode, and a lot of people are going that direction, obviously. So that's one of the big directions here. What else do you see here? When you think about the categories of software, obviously the Internet of Things, machine learning, data science — that's a big deal. I think some of these pillars are fairly obvious to you. What I want to do is show you what some of these might look like in the portal. So let's take a quick tour of the Azure portal. I have it up right now, but I have a little quick video I can show you of it. Here is the Azure portal. Imagine I'm going to say I want to provision something new, something Linux-based in our example. Obviously we support Windows, but I'm here to talk to you about Linux. So you see those categories, the same ones we saw before; that's how we organize things into categories so you can find them more easily. Notice all these container apps, right? So the support for containers is built right into Azure: Nginx, Redis, Postgres, the most popular containers out there today. Now, I work with partners, and part of what I do is help the Clouderas of the world get their infrastructure running in Azure. So you go here and say, I want a Cloudera cluster, or GitHub, or Hadoop, and it automatically provisions. You don't sit there and configure it by hand. That's the whole point of the Azure Marketplace. Lots of Linux-based workloads. I'm going to give you some comprehensive lists of how all that looks. Maybe you want an Ubuntu server. Maybe you want MySQL clustered with a Percona cluster. I'll talk about that as well here today. So let's take a look at, say, one more here. 
Let's say I want to do a new Ubuntu VM. Now, obviously, if you want to do things at the command line, you probably aren't going to go to the portal. We're supporting a lot of the distros, you know, FreeBSD and the like, so there's a lot of activity on onboarding Linux distributions onto Azure. Let's say I'm going to search here for Ubuntu, because that's what I'm interested in provisioning. You're going to have a few choices to make. What data center do you want that thing in? How big of a machine do you want? How big do we go? We go up to 32 cores and 448 gigabytes of RAM, a bunch of networking options; you can choose SSD storage, right? Super fast I/O. You can have, say, InfiniBand throughput on your networking side, if you like. Here you're giving the VM a name, and it's going to resolve to some DNS name, or you can give it a public IP address. I can show you how you connect up to these; it's pretty straightforward. I'm going to call this VM ScaleX, and give it a username and password. Nothing interesting there, really. So size is that next option, number two there. And in size, essentially, you are defining the size of the VM. Maybe you saw in the paper that they calculated a prime number to 22 million digits — maybe you need the power of the G5, or a set of G5s. So these are all of the machines we have available. DS series means SSD storage. But we have ones that are optimized for high CPU, ones that are optimized for high memory, and so on, depending on the workload, right? So here's the G5. That's the big screaming machine we have. It doesn't come cheap either, right? Because it's a heck of a piece of hardware. But that's the guy you might want for, say, deep analytics, machine learning, high-compute workloads. So this is essentially one way to provision your infrastructure. But all of us here are probably using Chef or Puppet or SaltStack, or the command line and Python scripts. 
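That portal flow maps pretty directly onto the cross-platform CLI. Here's a rough sketch; the resource-group name, location, and image URN are example values I've made up, and the exact quick-create arguments vary by CLI version, so treat this as illustrative rather than exact:

```shell
#!/usr/bin/env bash
# Sketch: provisioning the same Ubuntu VM from the command line instead of
# the portal. All of the names below are examples, not values from the demo.
RESOURCE_GROUP="demo-rg"
VM_NAME="ScaleX"
LOCATION="westus"
IMAGE_URN="Canonical:UbuntuServer:14.04.4-LTS:latest"  # publisher:offer:sku:version

# Assemble the command first so it can be inspected before it is run.
CREATE_CMD="azure vm quick-create $RESOURCE_GROUP $VM_NAME $LOCATION Linux $IMAGE_URN azureuser"
echo "$CREATE_CMD"

# Uncomment to actually provision (requires `azure login` first):
# $CREATE_CMD
```

The point is simply that everything you clicked through in the portal — name, location, size, image — becomes an argument you can script.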
Obviously that's where we're kind of headed as an industry. This is, you know, for you, the developer, spinning up a VM. Let's go on to the next topic here. So I recently was in Dublin, Ireland; I went on the data center tour. Has anyone here been on a data center tour? It's super interesting to go on those, incredible to watch what they've done. Let's go take one together right now. This is the global footprint of Azure today. Millions of servers everywhere. Now, when you think about where we put these, there are a few factors, like 30 of them. But the main ones would be proximity to customers, the availability of talent, and the bandwidth and networking capacity available at the time. And then the other one you might not think about is energy, right? You need energy to run these data centers. So in reality, there's a huge amount going into the cloud; it's growing very fast today. Everyone is moving to the cloud, some reluctantly, but it's clear that we're moving there. Tons of fiber. There's the Dublin data center. And it's a very automated location; the ratio of machines to people is mind-boggling, and that's why they're so cost-effective. Does someone have a radio on in here? The video is describing how they design their own large banks of batteries to ensure electricity. Here you can see the battery backup, for the event of a short-term power disruption. If you do have a radio, please shut it off. Emergency generators provide backup power for extended outages and for planned maintenance. I'll fix that right now. There we go. Apologies for that. This thing generates one and a half megawatts. These are very modular data centers. A lot of it is on the roof in Dublin, because they have the perfect temperature, between 20 and 80 degrees all year long, so they have swamp coolers; they don't even have traditional air conditioners there. The Chicago data center works off these modular containers — these shipping containers, pun intended. 
And so inside of these are thousands of VMs. When enough of them go bad, we actually replace the whole container. Talk about modular architecture! That's the way the Chicago data center is set up. These data centers are very secure; getting in is hard. There's a bunch of biometrics and surveillance cameras that record everything. Here you can see the cooling systems on the roof. They literally open the roof up to let the cool air in and the hot air out, so things migrate upwards. We actually went out on the roof; it's pretty mind-boggling. These data centers, after seven or eight years, don't get retrofitted or upgraded. They tear them down and build new ones. The technology and the science here are moving that quickly. Very secure, not so easy to get inside. Even as a Microsoft employee, I had to get a background check and show my passport. Pretty safe. For each customer that has their workload running: even if you were to make it in, you wouldn't be able to find them; not even the employees know which customer is running on which specific VMs. So obviously highly automated. And I really enjoyed that tour. Just the diesel generators were amazing to me. They're actually cooled — the coolant in these diesel generators is diesel. And they run them like once a week. Apparently those diesel generators can power the data center indefinitely, as long as they have access to diesel. There's a lot of compliance, of course. Another reason to have lots of different data centers is all the laws around compliance. If you go to Europe, they're very concerned about the privacy laws and so on. So when you think about security, here are some things that Microsoft takes very seriously. We invite you to try to hack into your VM. But if you decide to do that, you need to let us know, because otherwise we might think you're a denial-of-service attacker and shut you down. So we do invite you to go try to break in. 
And we invest heavily there; obviously, there's a lot of interest in being secure at Microsoft. Okay, deployment. Now historically, when you've been deploying your stuff, there have been two approaches: the imperative approach, where you write scripts and go through and program things out, and the more declarative approach that you see below. That is, you define a JSON file — or a YAML file in some other environments — to really say: this is what I want. The way to think about it is, do you want to define the end result, the blueprint, or do you want to actually go through all the steps? Because you know the problem with the latter: if halfway through something goes wrong, you made a mistake, how do you go back and clean things up? The other challenge is, how do you do things in parallel? There are companies now spinning up 1,000, 2,000 nodes at a time. If you use the declarative approach, the fabric controller can notice: hey, you want 1,000 VMs, let me do them in parallel. If you use the imperative approach, you can't optimize like that as easily; you typically go one at a time. So this is another big investment for Microsoft, and we'll talk about this a bit today: how do you provision your Linux infrastructure in a public cloud using this format and these tools? When you think about all the templates, all these blueprints that we make available, this is the list. I had to write a macro to animate all these; I wasn't going to go one by one and do this. So these are all the templates that we have here. And if we go search for some of this, let me see if I can find you one. If I search for azure-quickstart-templates, you'll see all these templates — hundreds of them — that define various aspects of applications you might want to provision in Azure. So let's do a little quick demo of what that might look like, to jump-start your provisioning process. It's all there on GitHub. 
You can just do a git clone locally to your machine, get it all there, and modify it to your heart's content. I'm the one who worked on the GitHub Enterprise template. So there are two main files, a deployment file and a parameters file, and you just issue them at the command line. The parameters file contains things like the username and the machine name — things that change every time you do the deployment. And the deployment file is the resources you want: the storage, the networking, the operating system, all the core stuff that makes up your deployment. We also have a Deploy to Azure button, so you can immediately go from here to the portal automatically. So let's say I want to do a MySQL cluster. What's involved in that? How do you do that? Well, because we have good connectivity here today — I'm impressed — I'm able to actually go hands-on a little bit here, which I'd rather do. So here, for example, is the MySQL deployment. Now at the command line, and I can show you how you do some of this stuff, you have the deployment file, which is all the things that make up the deployment. So if I search, for example, for virtual machines here — let's go to raw so you can get a better picture of this — you'll see the virtual machines that I'm going to provision. Notice they're passed in as variables and parameters: the network card and so on. And here are the custom extensions. For example, here I can actually execute scripts from these templates after the provisioning takes place. This is good if you want to do things like create a database or set some permissions — code to execute right after you provision the VMs. So here are some of those extensions. Now if we go back to the command line, maybe we log in over here. So here's GitHub, and the template I worked on. If I look for Azure Deploy over here, you will see my azuredeploy file. Let's go take a look at this one real quick. 
We'll go to, say, the resources section, because there are three sections here — parameters, variables, and resources — but resources is where you define your storage account. Let me put some line numbering in here. Notice on line 85 we've got the storage account, your public IP address, and so on. So you specify your networking, all this stuff that represents your deployment. And then on the command line, I have a file here, I'll just bring it up: deploy.sh. You just say go ahead and create, put it all in a resource group — that's just a name where you group everything together — and put it in the west region. And here's my deployment file that we just talked about. You pass in the deployment file, which is where you define your infrastructure, and the parameters file with the things you want to change, like the name, the location, and the hardware. So to sum this up, what I'm showing you is the declarative approach we take to provisioning. Go back to the deck. These templates. So let's take another quick review tour of this. There's the command line that I showed you, and there's the deployment file with the resources, which is what you want to build out, and you essentially execute it. So the basic structure is as follows: it's a JSON file with three sections, and the three sections are parameters, variables, and resources. This might be a TMI scenario — hey, there's too much detail here, Bruno — but again, it's useful to know that there's this declarative approach to building out your infrastructure in these three sections. And there I explain what those three sections do. Resources is where I'd actually build things out: I'd edit that resources section and add my networking, my compute, my storage, other things that I want, even scripts I want to execute automatically after provisioning. That's the Azure Resource Manager. Okay, so deploying GitHub, I kind of talked about that one. That one's pretty straightforward. 
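To make that three-section structure concrete, here's a minimal sketch of an azuredeploy.json, with a single hypothetical storage account as its only resource — the names and values are examples, not taken from the real templates:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": { "type": "string" }
  },
  "variables": {
    "storageAccountName": "mydemostorageacct"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[variables('storageAccountName')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "properties": { "accountType": "Standard_LRS" }
    }
  ]
}
```

Parameters come in at deployment time, variables are computed or fixed inside the template, and everything under resources is what the fabric controller actually builds out — in parallel where it can.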
But what I will show you is basically how you could go and remote in. That's kind of obvious to most of you who are developers or admins; remoting in is what many of us do day to day. So I'm just going to walk through it. I had the network connectivity, so I was able to show you this already in person by going to that folder; I'm going to go past this one here. Let's talk a little bit about MySQL. That was the template I showed you for a MySQL cluster. So if you think about the infrastructure we want in this case: you're going to have your application tier, and you're going to build out your web tier, whatever that might be. What this template is going to do is build out that lower infrastructure, the data tier — three load-balanced MySQL VMs with a Percona cluster. Now the template that I showed you is going to have to provision a few things, and it provisions all of that in one template. So really, in one command, I can build all that infrastructure out using this template mechanism. And I think that's really the takeaway here when you think about the Azure Resource Manager: I can start building out this infrastructure with these JSON files at the command line. And there's a tool for this called the Azure cross-platform CLI. If I type azure by itself, you'll see all the commands that I can execute. I could say, for example, azure vm list, and this will go ahead and list out my virtual machines; I can provision new machines, et cetera. And then obviously with this, I can pass in those templates as well. So I can do it manually, the imperative way, or I can pass in these big, complex templates. Am I going too fast? Is everyone OK? Are we in good shape? OK, so I wanted to show you real quickly at least the clone command here. So the way you would start working with these templates is as follows. 
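In shell form, that clone-and-deploy workflow looks roughly like this. The repo URL is the real quickstart repo; the resource-group name and location are examples, and the old cross-platform CLI's exact deployment flags varied a bit by version, so this is a sketch rather than a recipe:

```shell
#!/usr/bin/env bash
# Sketch of the clone-and-deploy workflow with the cross-platform CLI.
# Resource-group name and location are example values.
REPO="https://github.com/Azure/azure-quickstart-templates.git"
RG="mysql-demo-rg"
LOCATION="westus"

clone_templates() {
  git clone "$REPO"
}

deploy_template() {
  # Run from inside the folder of whichever template you picked
  # (it holds azuredeploy.json and azuredeploy.parameters.json).
  azure group create "$RG" "$LOCATION" \
    --template-file azuredeploy.json \
    --parameters-file azuredeploy.parameters.json
}

# Guarded so nothing hits the cloud unless you explicitly opt in:
if [ "${RUN_DEPLOY:-0}" = "1" ]; then
  clone_templates && deploy_template
fi
```

One clone gets you every template; one create command stands up the whole resource group from the chosen deployment file.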
Go to the GitHub repo, grab the endpoint here, go to the command prompt in some folder, and do a git clone of the template repo. Once you do that, all those hundreds of files are stored locally on your machine, and you can go edit them; you're ready to go at that point. So I'm going to go to the azure-quickstart-templates folder now that it's been replicated. And you can see there's everything here that you could imagine, just about everything. In our case, I was talking to you about the MySQL Percona cluster. I just go into that folder, and you can see the files that make it up. Those are the files I go edit. Now notice that azurepxc.sh shell script there — let me go back a second — that's the script that executes after provisioning on each of the VMs. That's the mechanism that allows you to go in after deployment and do some work that might be relevant for your deployment. It's called a custom script. And then, of course, there are the templates, which I talked to you about as well, and the parameters part of it: the name, the username, and various things that you want to be able to pass in during deployment. OK, great. Talked about this one already. Let's go to the next slide. So why am I showing you a picture of a truck with Gerber baby food on the back? What's the point here? Anyone know where I'm going with this? Well, this is essentially the birth of the container, back in 1956, when this was the revolutionary breakthrough in the world of shipping. And clearly, this has been a breakthrough as well in our industry. So, Docker and containers. How many of you are in the space of containerization in your role today? About half of you are dealing with this, so you're going to find that it's one of the growing trends. And the reasons are pretty simple. I'll get into the value proposition in a second. 
But the ability to break things down into running containers is changing a lot of things, like the speed of deployment. Let's get into the things that are enabled by containers, some of the value proposition. You can run these apps in isolation, and I'll give you a lower-level diagram of that in a moment. But essentially, you can start deploying app A and app B, each with its own dependencies, and they don't interfere with each other. They might be using different glibc libraries, and you can actually run them in parallel without a conflict. That's a big benefit. How fast do these containers fire up? Seconds, compared to VMs, which in general take a minute or two at best. So: running apps in isolation, abstracting the plumbing, democratizing distributed apps — because now you can start bundling them together. We'll talk about that later; I'm going to show you Docker Machine, Docker Swarm, and Docker Compose. Getting into production a lot faster — that's really the big appeal. If you read about the value of fast deployment, there are studies showing that customers are happier, employees are happier, the software has fewer bugs, and you're able to innovate more quickly. So it's more than just cost; it's actually better for your business to be getting to production more quickly. Now, take a look at this diagram right here — I'll talk about microservices architecture in a little while, and how Docker is really paving the way for it. These are some of the points I just raised. But at the end of the day, it's about microservices. This is the move away from monolithic applications, the three-tier architecture that we've all been working on. Companies now are breaking things down into microservices. That means one of my dev teams might be working on the notifications for taxis. I thought I shut my mail off — excuse me, sorry about that. 
So I might have another team doing payments, another team doing passenger management. It allows me to really break down a complex problem. This also enables other things: I can update the payments service without affecting other sections, so it enables faster deployment. This is the other giant trend happening in parallel with containerization, I think — this notion of connecting these services up with HTTP, some RESTful API, to bridge them together. Now, there are obviously downsides to some of this. You could argue that this is more complex by certain measures. But again, this is the trend we're seeing in the industry; people are moving to this type of architecture, and the points in the lower right of the slide are the main reasons. So when you think about a Dockerized app — I know you're not here to hear about Windows, but Windows Server 2016 will support containers as well, released sometime soon. It's just the way people are going with their technology today. When you think about Docker containers, we'll talk about how they can run anywhere. So this is the new architectural style I was talking to you about, the microservices. Now think about all these images you can go get from the Docker repository — and Azure has the same repository. People just download these images. When you run these images, they become containers. That's the vernacular here. These are all the available images. If I want to stand up Nginx, I just go and say docker run nginx. If I want to run MySQL, docker run mysql. We have yet to see proof that you can run MySQL and Postgres at high scale in a containerized world. This is good for dev and test, but the world has yet to see whether this is going to work in production at high scale for databases. We know for web servers it meets the need for the most part, because you can just add more containers. But these are the apps that you can go get today from the Docker repository. 
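Standing one of those images up really is a one-liner per service. A sketch, guarded so the commands only fire where Docker is installed and you've opted in; the ports, container names, and password are example values:

```shell
#!/usr/bin/env bash
# Sketch: turning images from the public registry into running containers.
run_nginx() {
  # -d runs detached; -p maps host port 8080 to the container's port 80
  docker run -d -p 8080:80 --name web nginx
}

run_mysql() {
  # The official mysql image requires a root password via an env variable
  docker run -d -e MYSQL_ROOT_PASSWORD=example --name db mysql
}

# Only fire when Docker exists and you explicitly opt in:
if [ "${RUN_DEMO:-0}" = "1" ] && command -v docker >/dev/null 2>&1; then
  run_nginx
  run_mysql
fi
```

The first run pulls the image from the registry; subsequent runs start in seconds, which is the speed difference the talk keeps coming back to.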
And you can reference these from inside a template, an Azure template like I showed you before. So containers can also be deployed through the templating mechanism. This is just another reiteration of microservices here, and in a moment we're going to take a look at how you provision these and run them at scale in a cluster. Again, it's the new architectural style we're seeing in the way people are writing their applications. It's all about being able to update your app and get it into production more quickly; that's perhaps the main motivator here. So when you think about virtualization, there have been two perspectives, right? Do you use just virtual machines, or do you use containerization? What's the difference between the two? Well, with virtual machines, if I want to start up my apps, I've got to start them up in different VMs, and I spin up a new operating system on top of my host OS. So the guest OS in VMs 1, 2, and 3 — that's three copies of the operating system that have to spin up. That's why it's slow. With containerization, all the running containers share the host OS. You're not firing up a whole operating system every time you want a new app with its own binaries; you're sharing the operating system among the containers. And that's the real value proposition here: you have one operating system with containers, but three operating systems booting with virtual machines. Your question might be: well, Bruno, I thought you run containers on virtual machines. We'll get to that topic next. You can run these containers on bare metal, in a cloud, or, say, in your own private data center. So when you think about these virtual machines running in a public cloud, you have your cloud-hosted server, your Ubuntu box. On top of that, you have your host OS — and in Azure's case, it's Windows. Yes, that's right: when you run Linux in Azure, it's running on top of a Windows hypervisor environment. 
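One quick way to convince yourself that containers share the host kernel rather than booting their own is to compare uname on the host and inside a container — they report the same kernel release. A small sketch, guarded so it only runs where Docker is available:

```shell
#!/usr/bin/env bash
# Containers share the host kernel, so `uname -r` inside a container
# reports the same kernel release as `uname -r` on the host.
compare_kernels() {
  echo "host kernel:      $(uname -r)"
  echo "container kernel: $(docker run --rm alpine uname -r)"
}

# Only run when Docker is present and you explicitly opt in:
if [ "${RUN_DEMO:-0}" = "1" ] && command -v docker >/dev/null 2>&1; then
  compare_kernels
fi
```

A VM, by contrast, would show you whatever kernel its own guest OS booted — that's the boot cost containers avoid.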
So the hypervisor steps in. There's a Docker extension that is part of that hypervisor. And then you have your guest OS, which could be Linux, could be Windows, could be both at some point. Now, on top of that guest Linux OS, you have the Docker daemon. When you install Docker, a little resident executable — a daemon, if you will — runs. The containers are managed by that daemon, and that daemon communicates with the Docker extension to make it all happen. And then app A and app B can be running in a container — although in general you run one app per container — and another container can be running other apps as well. So that's the high-level architecture of a running container in a public cloud today. You might argue that there's double virtualization happening, and there is. I think the primary reason is that there's a security issue here. The world is still trying to work out the security model for containers, and whether or not, in a multi-tenant scenario, you're comfortable with your VM running alongside someone else's VM with containers. Now if you think about what you're really getting here: notice that in the containerized world, app A and app B each have their own dependencies, their own binaries, their own libraries. They don't have to be synchronized. They could each have their own version, maybe different by some small thing that would make app A and app B incompatible if they ran on the same VM. But in the world of containers, they each bring along their own version of the binaries and still share the operating system. If I want the same thing with just virtual machines, I actually have to do what? Bring up another VM. And that's where the slowness comes into play: you have to bring up another whole VM just because there are application binaries or libraries that are not compatible. Talked about these. Yes, sir? 
Our hypervisor has an agent that manages the conversation with the Docker daemon, in terms of its visibility and orchestration in the cloud. And I'll talk about the Docker daemon getting installed in a moment. But yes, there will be a Swarm agent in that environment if you're doing Swarm, and the Docker daemon at the same time; both of those will be running on the Linux VM. I'll demonstrate that, actually; there's a pretty cool demo coming up in a minute. Thank you. So that brings up the next point here: orchestration. How do you decide where these containers go? You have 100 machines in the cloud. Do you really want to think about where those containers should go in that cloud? Now, there's a lot of technology that lets you define, say, that I want my cache with my web server, or I want my WordPress on the same server as my MySQL. You can set up those affinities; I'm not going to get into all those details. But in general, you want to just say: go deploy this. I don't want to have to worry about which machine is available, how much room I have left on it because of other containers, and so on. So that brings up these technologies today, and I would add to this list the Azure Container Service, which is in preview today. These are some of the big products that orchestrate the running of your containers in the public cloud. In fact, I just did a video with Mesosphere, to be released soon, around orchestrating Spark and Kafka in a large workload, automatically orchestrated by their software. So when you think about this space, Docker Machine is the approach I'm going to talk about today. I'm not going to get into some of the other ones, because I have some demos here that might be interesting to you: Docker Machine, Docker Swarm, Docker Compose, the Azure Container Service. So Machine is going to allow me to set up Docker on any number of hosts; it helps me provision them on a bunch of raw machines. Docker Swarm lets me have a clustered network. 
Docker Compose lets me define the way I want my applications to be bundled and deployed on that network. So the great takeaway for you today is to have a working knowledge of what these things do, because they're fairly significant in the world today: Swarm, Compose, Machine. Now you could argue some of the other products out there are competing for this. The world is still determining who's going to own orchestration. Is it going to be Mesos and Mesosphere? Is it going to be Docker? And among the other public cloud providers, there's Kubernetes, right? Amazon has their container service API. Who's going to control orchestration in the future? That is yet to be determined. So this is what I want to demonstrate to you today. Step one is, how do I provision the VMs here? How many do I have in this particular setup? Well, I have my client, right? But there are four, really, that I want to provision: the Swarm Manager, think of that as the master node, and then three slave nodes, or Swarm nodes. So my running containers are going to be put on two, three, and four. Notice I'm also going to want to install the Docker daemon and the Swarm agent here. But the one that's going to do all the work for me is the Swarm Manager, the Swarm Master. So I'm going to basically say to the Swarm Master, and I'm going to show you this, go run my containers. It's going to figure out everything for me. I don't have to think about these three nodes. To my software, it looks like one big computer to run my workloads. And so that's what we're going to look at first. The first step is to use Docker Machine to provision these four machines in Azure. So let's look at that. I'm going to build up a little shell script that's going to show you the various commands here. I'm going to need to put in my Azure subscription ID. And I'm going to create the Master and the three nodes. So what we're doing now is just writing a little shell script that I'm going to execute to create those in Azure.
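The provisioning script being described would look roughly like this. It's a sketch using the 2015-era docker-machine Azure driver; the subscription ID, certificate path, and machine names are placeholders, not values from the talk.

```shell
#!/bin/sh
# Sketch: provision four Docker hosts in Azure with docker-machine.
# SUBSCRIPTION_ID and azure-cert.pem are placeholders you supply.
SUBSCRIPTION_ID="<your-azure-subscription-id>"
CERT="azure-cert.pem"

for MACHINE in swarm-master swarm-node-1 swarm-node-2 swarm-node-3; do
  docker-machine create -d azure \
    --azure-subscription-id "$SUBSCRIPTION_ID" \
    --azure-subscription-cert "$CERT" \
    "$MACHINE"
done
```

Each `create` call spins up an Azure VM and installs the Docker daemon on it, which is why the full run took several minutes per machine in the demo.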
There's nothing really fancy here. The assumption is that I have Docker Machine installed on some VM in Azure, and this client that's going to control everything is going to set up my environment. And that's what we're doing here. We're setting up a swarm cluster. So notice the command here to do the Master and the three nodes. Once I've created this shell script, I'm going to quit out of here and execute it. And we're going to see it actually create the four VMs in Azure for me. So let's go ahead and run this thing. Now, through the miracle of video editing, I've shortened this down for your benefit. I think it took me about seven minutes to run, but you get to see it in less than 30 seconds. So what it just did is it set up those four machines with Docker. You can see in the portal that I'm part of the way through. So they're showing up in the portal now. The VMs have been created, in other words. So at this point, I've got this thing set up. But it's not yet a cluster, and I have not yet orchestrated workloads on it. I've just set up the four machines with Docker. I have a couple more steps to do to make this happen. We need to define, of these four nodes, who's the swarm master and who are the swarm slave nodes. And that's what this next demo is going to do. So we're going to run this command to create the swarm. It's going to basically say, OK, I'm creating a swarm, and you're going to get this cluster ID that you need to track. It's there in the green box. You need to keep that. It represents the ID of the cluster I'm creating right now. I haven't fully assigned everything yet; that's what this command is about. I'm going to now define the master node with the following command. It's pretty straightforward. I don't even need to rename machines. I just say, this is the swarm master. You can see that word there, using that token we just copied. That's the cluster ID from the step before.
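As a sketch of those two commands (the exact flags from the demo aren't shown on screen here, so the machine name and certificate paths are placeholders): first generate the cluster token with the standalone-Swarm image, then designate the master.

```shell
# Generate a discovery token -- the "cluster ID" in the green box.
TOKEN=$(docker run --rm swarm create)

# Designate the master. In the docker-machine workflow, the swarm role
# is assigned at create time via --swarm and --swarm-master.
docker-machine create -d azure \
  --azure-subscription-id "<your-azure-subscription-id>" \
  --azure-subscription-cert azure-cert.pem \
  --swarm --swarm-master \
  --swarm-discovery "token://$TOKEN" \
  swarm-master
```

The token is what ties the cluster together: every node created with the same `token://` discovery URL joins the same swarm.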
And I did swarm master. Now I need to do the three nodes. So let's write practically the same script for those. It's going to be very, very similar; you're not going to notice a tremendous amount of difference for the next ones. The nodes themselves, let's do those. It's going to be pretty much the same command we saw a moment ago. Obviously, I've created some certificates to be able to do this at the command line, with the OpenSSL tool, and I've uploaded the certificates to the Azure portal. So we're just creating the nodes now. OK, when this command finishes, we have our swarm cluster. What do you think is the final step, the coup de grâce over all of this? What's the end game here? To run containers. Not only that, to be able to scale those up and scale those down with just a simple command. So there you can see my nodes. Notice that the docker-machine ls command lists all the nodes in my cluster. Again, this is in some data center in Azure, who knows where. But the ultimate next step is to start running containers on this infrastructure. I'm going to define what containers I want to run here in a moment. So at this point, we're very close to actually doing something useful. We've already got our environment set up. Now it's just a question of defining which containers, or images, I want to run. Those images become containers at runtime. OK, excellent. Let's go and do that. So we've defined the nodes. Now, there is a container out there you may have heard of: Folding@home. Does anyone know what this container does? It runs a protein-folding algorithm that's trying to find cures to diseases. If you go look around the web, it's kind of one of those things. What do they call that search for extraterrestrial intelligence, SETI? It's kind of like SETI, except for gene therapy. So I'm going to run that container. Now, that file up there, docker-compose.yml, is where I define the images that I want to run as containers.
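A sketch of the node-creation loop and the listing command, with the same placeholder names and the same `$TOKEN` generated when the swarm was created:

```shell
# Join three worker nodes to the swarm using the same discovery token.
for NODE in swarm-node-1 swarm-node-2 swarm-node-3; do
  docker-machine create -d azure \
    --azure-subscription-id "<your-azure-subscription-id>" \
    --azure-subscription-cert azure-cert.pem \
    --swarm \
    --swarm-discovery "token://$TOKEN" \
    "$NODE"
done

# List every machine in the cluster: name, driver, state, URL, swarm role.
docker-machine ls
```

The only difference from the master's command is the absence of `--swarm-master`, which is why the talk says the node script is "pretty much the same command we saw a moment ago."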
All I have in there is a worker entry with the image name of my container. Now imagine that you want to run a bunch of different things, maybe MySQL with WordPress, and any number of the literally thousands of containers out there on the Docker Hub registry. I'm keeping it simple. I'm just going to run this one image as a container. But you could modify the Compose file to be much more complex than this. So this is kind of the exciting step, actually. I'm going to define this declarative syntax for running my image as a container. And the important next step is to run the containers on my worker nodes. Notice my swarm master doesn't do that. My swarm master is just keeping track of the infrastructure and executing the commands on my behalf. Normally, without a swarm cluster, what would I need to do? I'd have to issue the docker run command on each one of these myself. But instead, I'm going to say: use Compose to just run that workload. And I'll say, give me three of those, or give me one of those, or give me whatever. I'll just say, go do it. I won't have to worry about individual machines. That's the key point here. So let's do that. It's so amazingly easy, this final step. I'm going to basically say: every command I issue now, send it to the cluster. Not to the machine that I'm on right now, but every command from now on, send it to my Docker swarm. So I set that up, and every command I issue now will be issued against the entire cluster. So here I'm defining that worker, and I'm specifying the image that I want to run; how many I want will be specified at the command line. So that's it. My docker-compose file is done. Here's the magic command at the end of it all. Well, not this particular one; here I'm just showing you that I have a docker-compose file. Now I'm going to scale it. I'm actually going to scale up and down. I want three copies of that Folding@home protein-folding container to run.
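Those last steps can be sketched like this. The image name in the Compose file is hypothetical (the talk doesn't give the exact one), and this uses the Compose v1 file format and the standalone-Swarm commands current at the time:

```shell
# A one-service Compose file (v1 format). The image name is a placeholder
# standing in for whatever Folding@home image the demo actually used.
cat > docker-compose.yml <<'EOF'
worker:
  image: example/folding-at-home
EOF

# Point the Docker client at the swarm master, so every docker and
# docker-compose command from here on targets the whole cluster.
eval "$(docker-machine env --swarm swarm-master)"

# Scale to three workers, check what's running, then scale down to two.
docker-compose scale worker=3
docker-compose ps
docker-compose scale worker=2
```

The `eval $(docker-machine env --swarm ...)` line is what makes "every command go to the cluster": it just exports `DOCKER_HOST` and the TLS variables so the local client talks to the swarm master instead of the local daemon.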
So I'm going to say docker-compose scale, give me three. It's going to fire up three containers. I'm not worrying about which node is doing this; I'm letting the infrastructure manage all that. At this point, I'm going to ask, well, what's running? The docker-compose ps command. Three of them are running. What if I want two? I say docker-compose scale, worker equals two, and it takes one away. So that was the final demo. What have I shown you? I showed you the portal. I showed you how to provision with templates, and all the available packages on the Marketplace that you can choose from. And then I showed you the whole microservices, containerized world on Azure. And where I think the next level of activity in the cloud will be is running containerized workloads. Any questions? Are we good? Yes, sir. Right. So you're asking about the Docker VM agent running in the hypervisor. Let me see if I can find that slide for you, that fancy drawing we had. That's an artifact that manages the communication between the hypervisor and the guest OS with the daemon. I don't have the answer for you; I can look that up. That's just a necessary artifact of making it work. But we can look that up. It might be there to notice whether these are active or have failed; it might be looking at things like the lifecycle of the running guest OS to make sure it's still active. So it might be a mechanism for keeping track of what containers are running. Remember when I used Docker Swarm and I said ps? Maybe it's this extension that goes to the hypervisor and says, give me a list of the running containers. Good question. Any other questions? Yes, sir. I think that it is possible to scale. But if you look in the real world today, very few companies are running production databases that scale in containers. In containers, right. Yeah, apologies if it wasn't clear. Yeah.
So, running something like an Oracle or a SQL Server in a container; Microsoft doesn't actively offer SQL Server in a container today. Thanks for clarifying. Yes, I definitely would not say that. Open source databases are awesome. Adam. The version of Docker in the new Linux VM configures the Docker daemon to listen on a specific port with the given certs, and it launches the given containers with Docker on the host. Yeah, I think it's an artifact of running in Azure, to manage the communication between them. Yeah, I think that's fair. It's a good question. I'm wondering, if you install the Docker client on a VM, whether it automatically validates that there's an extension there. I'd have to dig deeper to give you a clear answer. Happy to do that. You have my email, right? I'm happy to follow up on any of your questions. Contact me. Yes, sir. What do you mean by institutional governance? Government governance. Well, we're launching the Azure Container Service, which is a very extensible way for you to layer in your own orchestration. We do give you the raw infrastructure. I work with a lot of ISVs that build out their own kind of platform-as-a-service on top of our raw infrastructure, and there's certainly nothing preventing you from doing just that: leveraging the raw infrastructure to build out your own service yourself. For example, Elasticsearch has done that. I'd have to look at the dependencies that might exist for that; I'm not sure. I'd have to take it on a case-by-case basis. But you are allowed to upload VHDs and run them in Azure. Happy to follow up. I'm easy to find; I'm all over the web. I like following up, actually, because I learn something. Great questions, keep them coming. What time is it? Yeah, we've got four minutes. Use it up. Yes, sir, in the back. Good question. You're a gaming guy, Adam. Do you know Xbox? The question is about where Xbox runs. Do you know? On Azure. So it's in Azure.
I don't know if we've published which data center. It's probably replicated worldwide for proximity to customers. And I think it's starting to leverage open source, even. I'm not 100% sure. Sir, do you have a question? What exactly are you trying to migrate over? So what operating system are they running? Which flavor of Linux? Yes, we work directly with Red Hat now. In the next month or two you'll see full support; Microsoft is in a strategic relationship with Red Hat now. So those should come right over as is. I'm sorry, what was that last point? Oh, CPAN, okay. No problem. Follow up with me; I can look into that for you. There's my email. I can find out about CPAN. I think I saw CPAN in a template, actually. So I think it ought to be supported. Yes, sir. Which one? CentOS. It is supported today. OpenLogic. I can find out. But I would say it's probably compatible, I would imagine, with your flavor of CentOS. He was asking about the community edition of CentOS. So we do support CentOS, and I'm not sure exactly how it differs from the community edition. Yeah. My understanding is that if you want the full-blown Red Hat, you get Red Hat, but if you want the open-source, non-Red Hat option, you go CentOS. Email me about that; I can find out for you exactly what that means. Happy to help you. Excellent. Yes, sir. We could use help with that. Yeah, in my opinion that is not as good as it could be. That's what I do: I grep through them, looking for similar things. Agreed. My O'Reilly class talks about it. Yes, sir. Right? At the end of the day, I think, the hypervisor. Right. I understand that the... Thanks, everybody. Performance. I can't live without having fully performant VMs. For others, I'm running on Xen, or I'm running on Hyper-V running Linux, looking at whether Linux is a first-class citizen.
I would like to see how Azure and the hypervisor will treat Linux as a first-class citizen, as far as performance and resource allocation. All of these are going to be on the Microsoft fabric. Which driver is it going to use? Because I couldn't find anything. Well, my first answer would be there's no substitute for testing it yourself and really finding out if there are gaps, and if there are, we can work with you to figure out why it's happening. But as with anything, when you talk about performance, it's all contextual: the workload, the type of hardware you've chosen, the flavor of Linux. I hate to give you kind of a wishy-washy answer, but ultimately it is about: let's test it, let's compare it, and figure out where the gaps are. I agree, it's a fair question. I'm sorry, I haven't posted them yet, but I will post them. You can email me and I can share my slides with you. You're welcome, thanks for coming today. Thank you for coming, appreciate it. Thank you for coming.