All right. Good afternoon everyone. My name is Trevor Roberts Jr. and I'm the Senior Technical Marketing Manager for OpenStack at VMware. And my esteemed colleague will introduce himself. Thank you, Trevor. Appreciate it. Good afternoon, everyone. Thanks for joining us. My name is Scott Lowe. I am an engineering architect at VMware working on the NSX team. Rest assured, though, that even though both of us work for VMware, there's really very little mention of VMware in the deck. We're focusing on OpenStack and how you, as a developer or as an operator, can help your developers consume that capacity using some well-known open source tools. Okay. So before we get started, how many of you are software developers in here? Software developers? Okay. And how many of you are cloud administrators? Okay. And how many of you are both? Yeah. Okay. Just checking. We've got the all-stars in here. And how many of you have heard of Vagrant and Terraform and Docker? Well, everybody's heard of Docker, so I shouldn't say Docker. But who's heard of Vagrant and Terraform? All right. Cool. All right. So Scott and I came up with this talk because we saw that the community has some concerns about wanting to reach out to developers and get them accustomed to using OpenStack clouds with the tools that they already know and love. I know that HashiCorp has a great following in the software development community. Docker obviously has a great following, so we decided to combine these topics and talk about them at the summit. So what we're going to talk about today: we're going to give you a brief commercial about VMware in case you don't know who we are, just to make sure you do know, and then go over what we're actually going to cover. We're going to talk about Packer for image building, Vagrant and Docker Machine for software development stand-up and testing, and then Terraform for your production-grade deployments.
So VMware is a leader in cloud and virtualization infrastructure solutions. And not only that, we are involved in OpenStack. We are fully in on OpenStack. We are DefCore compliant, making sure that we're working hand in hand with the community on what our customers and the community actually need. We contribute to projects that we don't even include in our own distribution, just to help out the community. And we're also hiring, so if you're interested in any opportunities working with OpenStack on vSphere, I know, crazy talk, but it's really happening out there, just give our site a look and find out more about it. Okay, so the workflow overview for today. First of all, we want to make sure that software developers understand that OpenStack clouds are available for their use and it's not some kind of foreign technology that isn't going to help them. OpenStack can be very helpful for their software development practices. So it can start with configuration management: how am I going to make sure that whatever I deploy on my laptop will run exactly the same way in production? And things that they can do along the way are incorporate image management, dev/test, and production deployment. And for those of you who like taking pictures, I'll give you the cues for when to take the pictures. It's a little too early right now. So for image management, we'll be using Packer. The text isn't showing up, but Packer is definitely for image management. For dev and test, you can use Docker Machine, or Vagrant if you're not running a container-based workload. And then for production, you can use Terraform. So we'll go ahead and show you a little bit about that in a few minutes, but first we want to give you a brief introduction to those tools. So with Packer, you have a way of doing automated image creation and modification. If you've used Packer in the past, then you're already familiar with it.
If not, it's a way of taking an ISO, your Ubuntu ISO, applying some instructions to it in a JSON format, and having it automatically output a build that can be imported into OpenStack. Or you can take an existing Glance image, run some kind of configuration management tool on it, or simple shell scripts, and customize it to what you want. So for example, in my demo today, I'm going to have Ansible customize my Ubuntu image and have it install Apache and Git. And then there's Vagrant. This is a development and test tool that allows you to specify deployment instructions in a Vagrantfile. So, yeah, you could do the same thing with nova boot, but software developers may not want to learn the OpenStack CLI, so this is a way for them to use an existing tool that they're already comfortable and familiar with to start consuming your OpenStack cloud. So they just specify what kind of image they want, what flavor, what name they want to give it, what networks to connect it to, and then Vagrant will go ahead and communicate with the OpenStack APIs on their behalf and give them the infrastructure that they're looking for. Then I'll pass it on to Scott to talk about Docker Machine. All right, so before I jump in, do you want to go ahead and show your demos? Okay, let's jump into demos because... Who wants to see a live demo? Well, you're doing it live, you have a video backup just in case, right? You're doing it live, right? You're doing it live, okay, who wants to see a live demo? Anybody? All right, okay. So we're going to kick off and you're going to do Packer and then Vagrant, all right? Yes, that's correct. Perfect. So in the immortal words of Bill O'Reilly, we're going to do it live. All right, the screen's already clear. So for Packer, we have this JSON file that gives us all the instructions of what we want to accomplish. So it starts at the top with the provisioners. That's going to run after your image is already deployed.
That's going to do the customization that I want, including deploying Apache and deploying Git. And then you have this builder functionality, or builder construct, I should say. All right, the laser just seems to be down. But you have this builder type openstack. I give it my credentials. If I want to put this into Git, because we're all doing software revision control, right? Nobody is changing the names of their files as a way of saving version one and version two, right? We're all doing good software development practices. Nobody is shaking their heads. Okay. So we take out all of the content that we want to make sure is secure. Like, we don't want to save our passwords into Git, right? So there's a way of substituting variables, and you'll see at the top, if I scroll a little bit, there's a section for variables. Truth be told, I shouldn't be saving my username in there, but this is a private repo, so I don't really care. But I just wanted to show you the capability that you have with Packer. So it goes through, selects an image, selects your networks, and all of that. So first, I'm going to validate my Packer instruction set to make sure it's not going to error out on me when I go ahead and run it. Okay. So let me cat my openrc file. You guys like my password? All right. We're all about security at VMware. And I have an even more secure password. It has a bang there. Okay. So the template validated successfully. So that's the first thing you do: check to make sure that your instructions are going to work. And then I will go ahead and execute the build command. So this will go ahead and start building my image with the customization that I asked for. So let me open up a web browser. And we'll go see the changes that are being made. And I promise I'll increase the font size. All right. So I have an instance here that is... Oh, wrong cloud. I was testing you guys to see if you noticed the mistakes that I make. There we go.
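For reference, a Packer template along the lines of what Trevor is describing would look roughly like this. This is a sketch, not the actual demo file: the endpoint, UUIDs, and names below are placeholders, and the builder options shown are from the Packer OpenStack builder of that era.

```json
{
  "variables": {
    "os_password": "{{env `OS_PASSWORD`}}"
  },
  "builders": [
    {
      "type": "openstack",
      "identity_endpoint": "http://keystone.example.com:5000/v2.0",
      "username": "demo",
      "password": "{{user `os_password`}}",
      "tenant_name": "demo",
      "ssh_username": "ubuntu",
      "source_image": "<ubuntu-cloud-image-uuid>",
      "flavor": "m1.small",
      "image_name": "web-image-packer",
      "networks": ["<private-network-uuid>"],
      "floating_ip_pool": "ext-net"
    }
  ],
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "webserver.yml"
    }
  ]
}
```

You would then run `packer validate template.json` followed by `packer build template.json`, which is exactly the sequence shown in the demo.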
So I have this instance that's being built by Packer called web-image-packer. And it will stay up and running for as long as Packer is doing its customization. So for example, Ansible is going ahead and running the playbooks. It's installing Apache 2. Shortly thereafter, it's going to install Git. And that way I have my image ready for use in my Vagrant setup. So this is going to take a little bit of time to import into Glance. So I will... Let's talk a little bit while I'm waiting for my tap dancing to finish. So let's just... Let's talk about this a little bit now. They're taking an existing OpenStack image. So this could be a cloud image that you got straight from Canonical or whatever, right? And then running the Apache playbook against it. Now is it going to import that into Glance automatically for us? Yes, it will. That will be the post-process. All right. So when it's all said and done, we'll end up with a new image listed here that will be a customized version of that Ubuntu image. It'll have Apache and Git pre-installed. And then we can use that image as a jumping-off point for any of the rest of the things that we do. Right. So what I'll do is actually go to the Ansible instruction set while we're waiting. All right. So it's very simple. I want to have... Nope. That's the wrong one. Oh, you need to go into the Packer directory? Yeah. I need to go into the Packer directory. Yep. Too many tools. All right. You guys don't ever have that issue, do you? All right. So here's the syntax for Ansible if you've never seen it before. First, this is called a playbook. So you tell it what hosts you want the instructions to run on. You tell it what remote user to use, because we're going to be SSH-ing into it. And then 'become' is the keyword for sudo. For some reason, Ansible decided to change the keyword. And then you have your tasks. Very simple: install Apache 2. And then you tell it, use the apt module. So that will take the apache2 package and install it.
And then also my next play is called Install Git. Very simple. Use the apt module to install Git. It will take the latest version. So let me go ahead and check on my execution. OK. So now it's in the shutoff state because it has moved into the image-uploading stage. So if I go to my images tab, I'll see that web-image-packer is importing at this moment. And fairly soon I should be able to use it for my Vagrant demo. So let me go ahead and check on that. Actually, this might be a good time to ask some questions. So if anybody has any questions on what we're doing, please use the microphones. I'm sorry to make you get up after all the partying that you've probably been doing last night. But it's a good way to stand up and exercise all that alcohol out. So I'm just trying to understand the workflow. So you took an existing image, ran Packer, and changed it on the fly? Right. So this is what is actually happening. So I took an Ubuntu 14.04 cloud image. OK. And I said, I want to make sure it has a web server on it. Make sure that it has Git. Because what I'll do is I'm going to pull down my cloud application, and I want to be able to use Git to do that. So I just took a vanilla image that I got from the community application catalog. It was already in Glance. So you can either pull down something fresh or use something that's already in Glance to do these customizations. And the customizations can take multiple forms. It could be shell scripts. It could be Puppet manifests, Chef recipes, or in my case, Ansible playbooks. It's really up to you. And if there's a provisioner that doesn't exist for Packer yet, you can go ahead and write your own, because it's all open source. The idea here is that you may have a set of applications that require a customized image.
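A playbook matching what Trevor describes — a host pattern, a remote user, `become`, and two apt tasks — can be sketched like this (the task names and file layout are illustrative, not the exact demo file):

```yaml
---
- hosts: all
  remote_user: ubuntu
  become: yes            # 'become' is Ansible's keyword for privilege escalation (sudo)
  tasks:
    - name: Install Apache 2
      apt:
        name: apache2
        state: latest

    - name: Install Git
      apt:
        name: git
        state: latest
```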
So rather than just using the vanilla Ubuntu image or vanilla CentOS or whatever it is that you prefer to deploy, and then having the configuration management tool do all the work post-deployment, we're going to take some subset of that that makes sense to you and do the customization to build a custom image, so that when you then deploy from that custom image, you're either all the way where you need to be or very, very close to where you need to be. And then the configuration management tool of your choice will just pick up whatever minor work remains. You might even be able to do it all just via cloud-init, for example. If you've already got the packages installed, you might handle the remaining customization in cloud-init, which could save you some time at deployment at the cost of some additional time up front. Does that make sense? Yes. Just a follow-up question: you also mentioned you can use an ISO image as well? Yes, but if you're using an ISO image, that would be a local execution of Packer on your desktop. And then the output would be image formats that you can then import with the standard Glance APIs. OK. Thank you. In this case, we're taking an existing OpenStack image, running it on OpenStack as an instance, customizing it, and then outputting an image. But Packer itself could also take VirtualBox and an ISO and create an OpenStack image from scratch if you need to do that, for example. OK. Thank you. Thank you very much for that timely question. You gave me enough stall time to finish my Glance import. OK. So we can see here that the image was created. I see my UUID here, so I could actually take that and put that into my Vagrant instructions. And just to verify that I'm not talking crazy, I go back to my images view, and there's my web-image-packer there, ready for me to use from Glance. So let me go ahead and open up my Vagrantfile.
And for those of you who are keen observers or software developers, you may notice that Vagrant uses a lot of Ruby syntax, because it's written in Ruby. The point is, you know, they just use standard software development best practices in the way that they have their syntax. So Vagrant allows you to configure one or more instances for whatever your needs are for your application infrastructure. And I can put in the variables that I want to use, including my SSH private key, which user I'm going to use, what endpoint for my authentication, as well as things such as my flavors. I can actually have if-then statements because, again, this is just regular Ruby that's been repurposed for this tool called Vagrant. So if I have a certain server name, I can give it a bigger flavor; otherwise everything else can default to the small flavor. And I'm going to use here my web-image-packer image that I created, and I also tell it which key pair name to use and all of that. So once you define all of your OpenStack variables, all your settings that you want for your instance, you go ahead and quit out of the Vagrantfile. Oh, one thing I did not show was that Vagrant also allows you to incorporate configuration management tools, like something as simple as a shell script file, as well as an Ansible playbook. So I'll go ahead and show that really quickly. So now that I have the Apache web server built into my image, what I'll do is I have my website in source code control. So I'll go ahead and have the Git module pull down my web application so that it automatically comes up and running when I deploy this instance. Okay, so let me go ahead and run the command vagrant up, and I'll tell it to use the provider openstack. So that's a friendly reminder to myself to source my openrc, which I did in advance. Sometimes I tend to forget. And it's going to go ahead and bring up my instance. So I'll let this run for a little bit.
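Put together, a Vagrantfile along these lines captures what Trevor just described, including the plain-Ruby if-then for flavors and the Ansible provisioner. This sketch assumes the vagrant-openstack-provider plugin; the endpoint, image, network, and file names are placeholders, not the actual demo values:

```ruby
# Vagrantfile -- assumes the vagrant-openstack-provider plugin is installed
require 'vagrant-openstack-provider'

servers = ['web01', 'db01']

Vagrant.configure('2') do |config|
  config.ssh.username         = 'ubuntu'
  config.ssh.private_key_path = '~/.ssh/demo_key'

  servers.each do |name|
    config.vm.define name do |server|
      server.vm.provider :openstack do |os|
        os.openstack_auth_url = 'http://keystone.example.com:5000/v2.0/tokens'
        os.username           = ENV['OS_USERNAME']
        os.password           = ENV['OS_PASSWORD']
        os.tenant_name        = ENV['OS_TENANT_NAME']
        os.image              = 'web-image-packer'
        # Plain Ruby: give a particular server name a bigger flavor
        os.flavor             = (name == 'db01') ? 'm1.medium' : 'm1.small'
        os.networks           = ['private-net']
        os.floating_ip_pool   = 'ext-net'
        os.keypair_name       = 'demo-keypair'
      end
      # Second-stage customization: pull the web app down from Git
      server.vm.provision 'ansible', playbook: 'deploy-webapp.yml'
    end
  end
end
```

Because the provider block is just Ruby, anything from environment variables to conditionals works exactly as it would in any Ruby program.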
It's actually going to first provision the instance, and once that instance comes up, it'll connect to it over SSH and then pull down my web application, which is just a simple static HTML file. So I'll let that run, and I'll switch over to you, and then we can come back at the end and see my web application. Sure, sure. So those of you in the back running the presentation, leave it on his laptop for a minute, because I don't want to have you switch it back and forth while we're waiting here. So what you've seen so far is Packer and Vagrant. Packer to build a custom image. We used Ansible in this case, but it could have been a different configuration management tool of your choice to customize that image, and then stick the customized image back into OpenStack for further use. And then again, what we're seeing here is Trevor using Vagrant in the same kind of Vagrant workflow, so developers are accustomed to doing a vagrant up to create the environment, which may be one or more VMs. Now they're instantiating one or more instances on an OpenStack cloud, and in this particular case he's also using Ansible again to do a second round of customization, in this case to clone down the web server content and put it on there so that everything's all great and wonderful. So what we're going to do once this demo wraps up is we'll shift gears a little bit, and I'm going to talk about using Docker Machine and then Terraform to close things out. We may want to just go ahead and let it do its thing. We'll come back to it later, maybe. All right, so let's start out with taking a look at Docker Machine. So, AV guys, if you could go ahead and switch us over, that would be great. No, it's fine. Not really sure why. It's fine. You guys can see that, okay? Yeah? Okay, all right. All right. So, Docker Machine. So how many of you guys are familiar with it? Anybody out there use Docker Machine? A few? Okay.
You're familiar, of course, with the idea of Docker, and that is the use of Linux kernel constructs like namespaces, cgroups, all that kind of stuff to create application containers. What Docker Machine does is give you a tool that will help you provision instances of the Docker Engine for testing. So what they were running into is developers wanted to use Docker containers to do this, but there was this kind of setup time to spin up a Linux instance, run a configuration management tool against it to have Docker Engine installed, or manually install Docker Engine, and then they could start doing their testing. And so the folks at Docker Inc decided they were going to write a tool to do that, and so they wrote Docker Machine. And so the idea behind Docker Machine is that it gives you a Vagrant-like experience, specifically tailored to bringing up instances of Docker. So Vagrant is far more general purpose. You can use Vagrant to spin up any kind of environment you want, and you saw in Trevor's demo that he was using Ansible to create a custom web server. Docker Machine, on the other hand, just creates Docker Engine instances, and that's really all it does. Very specialized tool, right? But like Vagrant, in addition to working with server hypervisors, you can also use it against cloud platforms like OpenStack, so you can actually have your developers use Docker Machine to spin up instances of the Docker Engine inside an OpenStack cloud. And then once it's up and running, of course, they can take advantage of all of the various tools and images that are available for Docker, so at that point they could pull down community images or images that they built and run containers and all that kind of jazz.
So what I'm going to do real quick, and I have a live demo of this as well, is I have a system here running, and I'm just going to run this tool, docker-machine ls, and what you're going to see, and I did this in advance for the sake of time because it would have just taken too long to sit here and wait, is that I've got a couple of instances already up and running that I provisioned in advance, right? And what we're actually going to do is add capacity to a Docker Swarm cluster, because we can use Docker Machine to build Docker Swarm clusters running on OpenStack. So in this case I've already spun up the Swarm master and one of the Swarm nodes, and I did that in advance because it takes about five to seven minutes. So we'll start another one here in just a second and we'll let it go while I finish talking and showing you what everything looks like. So I have a shell script, and I'll show you what it is. Wait a minute. Oh, yeah. See, I did not put in the reminder to source my credentials. Okay. And so while that's running, let me show you what that looks like. Nope, wrong place. And so the script that we're running, and I'll just show you what this is so that you get an idea of what it looks like, okay? It's an ordinary shell script. There's really nothing fancy here. We did some stuff up front just to kind of make it easier to customize over time. But what you have is some variables that we defined. You have a Swarm token, and that's how the Swarm nodes discover each other, okay? We'll show you a different way of having the nodes discover each other when we do the Terraform demo in a few minutes. And then we have the UUID for the OpenStack image that we're going to use, the network name to which we're attaching, the username that you should use to log in, and then the external network from which it's going to pull the floating IPs.
And so then you run the script, and the script just simply calls docker-machine. And one sort of drawback for Docker Machine right now is that when you use it with OpenStack, the command line is a little involved, right? So you may find it easier to write yourself some sort of wrapper script around that so that you don't have a command line that ends up filling three or four lines in your terminal to spin up instances, which is kind of the purpose of this. And by the way, all the stuff that we're showing you is in a GitHub repository that's accessible to you guys. We'll have the URL for that up at the end of the session. If there's any of this that you want to follow along with, you can take those shell scripts and those Ansible playbooks and Vagrantfiles and all that and just consume it, right? So I had three files: one to set up the master, one to set up node 1, and one to set up node 2. And so we'll come back and check, and you can see that it has created the machine, it's waiting for it to come up, waiting for it to be available via SSH; it detected the system and is now in the process of provisioning it, and this process, unfortunately, takes a little while. So we'll just let that sit there and run, and I'll come back and talk about something else. So I did want to draw your attention to this thing at the bottom. You'll note that I've linked here to a GitHub issue with Docker Machine that we uncovered. This is a bug in many of the drivers that plug into Docker Machine currently. So Docker Machine uses a pluggable architecture that allows multiple providers to plug in. With the OpenStack driver that it uses right now, if you don't give it a private key to use, it will generate a private key automatically. And if I open up a browser to my OpenStack instance, then I'll show you what that looks like. So I didn't supply a private key and it just created one for me, and you can see here once it comes up that there's this long, crazy thing there.
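The wrapper scripts Scott mentions boil down to one long docker-machine invocation per node. A sketch, with placeholder values; the flags shown are from the Docker Machine OpenStack driver of that release, and the Keystone auth details (OS_AUTH_URL and friends) come from the openrc you sourced beforehand:

```shell
#!/usr/bin/env bash
# create-node2.sh -- remember to source your openrc first!
SWARM_TOKEN="<token from 'docker run swarm create'>"
IMAGE_ID="<glance-image-uuid>"

docker-machine create -d openstack \
  --openstack-image-id "$IMAGE_ID" \
  --openstack-flavor-name "m1.small" \
  --openstack-net-name "private-net" \
  --openstack-ssh-user "ubuntu" \
  --openstack-floating-ip-pool "ext-net" \
  --swarm \
  --swarm-discovery "token://$SWARM_TOKEN" \
  swarm-node-02
```

The master's script would look the same but add the `--swarm-master` flag.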
That's the private key that it generated on the fly, and that's fine. When I go to delete this instance using Docker Machine, so I'll run a docker-machine rm and kill the instance, we'll actually find that it will remove that private key for us. So you're thinking, okay, everything's all great and wonderful, right? But if you give it a private key, for example, you've already created a private key and you've uploaded it into OpenStack, yes, you're shaking your head, yes, that's what it does. You tell Docker Machine to use that key, and then you go remove the instance with Docker Machine, and it's gonna remove the pre-existing private key. Okay, so that's linked in that URL that I just supplied down here. So this GitHub issue tracks it. Currently, with the OpenStack provider, if you supply an existing key pair and then you provision a Docker Machine instance with it and then you go remove that instance, it will remove the key pair. So just be aware of that, right? The folks at Docker Inc are working to try and resolve that, but as of right now, in the current release of Docker Machine, which is 0.7, this behavior is there. Just be aware of that; it caught me by surprise. Fortunately, I had a copy of the key pair and the associated passphrases and all that, but if you don't, you might find yourself in a bit of a situation. All right, so let's see where we are. Yeah, that takes a really long time, unfortunately. So let me just show you what this looks like real quick and I'm going to, okay, there we go. So we've got, you know, it's in the midst of provisioning. You see this third node right here? It says it doesn't really know anything about that yet, and that's because it's still provisioning, right?
And so what I'm going to do is just use Docker Machine to connect to the master, and what has happened now, it's kind of hard to see because of my colors, but I've set the Swarm master as my active Docker host, and so I can run Docker commands like docker ps and docker images, for example, and it will transparently talk to the Swarm master. And so what I want to show you is, if I run a docker info, you'll see right now the output for the current Swarm cluster. You see we've got three containers, three of them running two images, we've got two nodes, master and node one, two CPUs, a certain amount of memory, so on and so forth, right? Once this other piece is done and it ends up in running, then we'll actually see that this will increase to three. So if I go back over here, nope, still running. It takes forever. All right, so let's let that run and we'll come back to it in a moment. Any questions before we move on, either about Trevor's piece, which has finished up, we can go back and look at that if you want to, yeah, let's do that, or about Docker Machine? Questions so far? Everybody cool? All right, so can we get flipped back over to the other input, and let's let them see what's going on there. Okay, there you go. Okay, so we can see here that my web server has finished provisioning. I saw that my Ansible provisioner ran against it, and it removed any stale web content that might have been there and then went ahead and pulled down my latest web application from Git. So I can see here that my instance has already been changed, and I can go ahead and actually get the IP address, and let's go to the web server and see this wonderful web application that I spent so much time and preparation on for the OpenStack Summit.
So I go ahead and pull up my application, and look at that beautiful user interface with all these static images. But just underscoring that, again, it's very simple to use these development tools with OpenStack. It's just a matter of making sure that your developers feel comfortable with the environment and know, hey, there are plugins out there; you don't have to keep on using other infrastructures. Use the OpenStack cloud that you've been spending so much time and effort trying to get up and running. So that would be it for my section of the talk. So when they're done with that, how do they clean it up? Oh, it's very important to know. Okay, so Vagrant is not just great at bringing up infrastructure, it's also good at tearing it down. So I can actually do a vagrant status. I can go out and query the OpenStack cloud through the APIs as to what instances Vagrant actually has up and running. Yeah, this network is really doing wonders for my demo time. So it's gonna go out and say your instance is still active and running. So at this point, I can actually, oh, let's try SSH-ing into it, let's live dangerously. So I can go ahead and SSH into it, and I don't have to actually type ssh with the ubuntu username and an address, because your instance access is automatically managed by the Vagrant tools. So I'm logged into my instance. It already knows to use the ubuntu username, it knows the IP address because I have a floating IP attached to it and it's saved in the Vagrant settings. Once you bring up an instance, it saves all the metadata about that instance locally. So I can, you know, do whatever I need to do locally on the instance, but now it's time to say goodbye. So we'll go ahead and, it's very dramatic, vagrant destroy. So I'm gonna go ahead and destroy my instance. It's gonna go ahead and make sure that I have the credentials again, and then it's gonna remove the instance. And it's, you know, sad panda face, I'm losing my instance.
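So the whole dev/test lifecycle Trevor just walked through is the same handful of Vagrant subcommands developers already know, just pointed at an OpenStack cloud:

```shell
vagrant up --provider=openstack   # provision the instance(s) on the cloud
vagrant status                    # query the OpenStack APIs for instance state
vagrant ssh                       # log in; username and IP come from saved metadata
vagrant destroy                   # tear it all back down
```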
So I go back into my list of instances, and it should be deleting. So very simple: total lifecycle management of your development and test environment through whatever tool you use, and in this case, it's Vagrant. Okay. So, cool. Let's turn it back over to the other laptop, guys. Yep, let's flip back over to my laptop and we'll see how things are going here. Oh, still taking forever. Okay, that's fine, we'll just move on. We'll come back to it in a moment. All right, so the last thing we wanted to show you guys was this tool called Terraform, right? So I've got the Docker Machine demo running. It'll come back, it takes forever unfortunately, but it will come back and we'll look at that in just a moment. But while that's running, let's look at Terraform. So you're probably all familiar with the idea of OpenStack Heat and Heat templates and the ability for you to define what in OpenStack terms they call a stack, which would be a combination of resources. It might be, you know, compute instances, it might be Neutron networks, it might be Neutron routers and subnets and ports and a variety of other things, right? It's a really, really handy tool, but it is constrained naturally to running with just OpenStack, because it is an OpenStack project. What Terraform is, is a third-party tool from HashiCorp, the same company that did Vagrant and Packer, right? And it's designed to do the same sort of thing: to give you an infrastructure-as-code approach to defining what you want to create. But it's not limited to just one backend, so you could actually use it against AWS, you could use it against OpenStack, you could use it against DigitalOcean, et cetera, et cetera. There's a whole list of providers: infrastructure-as-a-service providers, and also cloud services. So if you needed to, for example, create a DNS entry in a cloud-hosted DNS service, they have providers for that, or if you needed to update your Cloudflare content distribution network, they have a provider for that.
So it's designed to give you the ability to define what they call a configuration, a very, you know, imaginative term there, and that configuration may encompass multiple providers in a single configuration. Now, we're going to show you a configuration that's just OpenStack, so to be honest, we could have used Heat for that, but what we want to do is show you how you would do it using Terraform so that you have an idea of what it looks like, right? The templates, or rather the configurations, that you create using Terraform, you write them either in JSON format or you use this, I don't want to say proprietary, but this specific configuration language called HCL, which is the HashiCorp configuration language, and that is not quite YAML, not quite TOML, not quite JSON, kind of an interesting mishmash of a bunch of them, okay? And then once you have that configuration, you use a series of commands to enforce that configuration upon the underlying infrastructure, to create what it is that you're trying to create, okay? Once you create instances, for example, then at that point we can bring in any of the other tools that we wanted to use. So when I create instances on an OpenStack cloud, for example, I can apply cloud-init or cloud-config to them, which is what I'll do in this particular case, but we could just as well have used an Ansible playbook to then run Ansible plays against these instances as well. So you're not locked into just Terraform; this is just a way to spin up the infrastructure. The nice thing about Terraform, as opposed to the other tools we've seen: both Vagrant and Docker Machine are not capable of creating networks and routers and subnets and all that kind of stuff. They only provision instances, right? So in what we showed you, we had pre-created security groups that allowed the necessary traffic. We had a Neutron network to which we were attaching. We had a router with an uplink.
All that stuff was already done. And in this case with Terraform, we can actually create all of that on the fly. So you're able to create the Neutron networks that the instances want or need. You're able to create the routers that you need. You're able to attach the interfaces. All of that done on the fly, right? So let's... Yeah, okay. So let's take a look at that. It just takes so long. Anyway, let's take a look at what that looks like, okay? So I'm going to go over here. Oh, that's right, I'm already in the directory. Okay, so what I have here is a set of files, and I'll pull these up so you can look at them. The extension indicates that this is a Terraform configuration. And I'm using HCL, the HashiCorp Configuration Language, rather than JSON; it's a little more readable than standard JSON, for me at least. But if you're more comfortable with standard JSON, you could do that as well. You'll see there are a few different files here. We have a vars file. This is where we're going to put any variables, and so in this case, this is where we put image IDs, network IDs, stuff that's specific to the particular OpenStack cloud against which I'm operating, right? And then if you needed to port that configuration to a different OpenStack cloud, you could just go into the variables file, change the specifics for the cloud you want it to run against, and then in theory everything should port over and you should be fine, right? Then we've got the main file, which is where we're actually going to go and create all the various things, right? And then we've got an output file, which will throw some stuff out at you. And then in this case, I also have a shell script that does the follow-on and sets up the Swarm cluster for us, okay? So let's take a look at the main file here. Okay, let me see if I can increase this font size for you guys a little bit. You should be able to do Command-Plus on that one. Is it Command-Plus?
Is it? Oh, okay, there we go. You can tell I don't usually change my font size when I'm in Sublime. All right, so here's what it looks like, and I'll scroll through it, but we basically have these definitions, and all this stuff is documented on the Terraform site. We're going to create a network first. We're then going to create a subnet, and we're going to reference back to the network that we created. That's the way we can refer to an object that we didn't know about in advance: I'm creating the network, and when I create the subnet, I need to tell the subnet, this is the network with which you're associated, so I just use this reference to refer back to the one I just created. We associate some information. I create a router. We pull some information from the variables file right here; you can see me pulling the ID that refers to the external gateway, the external network to which the router is attaching. Then I attach some interfaces to the router. I assign some floating IPs. All right, and then we go through and create some instances, and in this case, again, we're pulling variables out to say which image ID we're using, which flavor we're using, which key pair we're injecting into it. And then you'll note right at the end here that I'm applying a cloud-init, or user data, script to these instances to customize them and have them do what we want them to do. And there are five instances, and each instance gets its own floating IP and all that, right? So, all right, so hey, the Docker Machine demo's finally done. Woo. All right, and so what I'm going to do is go ahead and kick off this piece, and then I'll show you that the Docker Machine Swarm cluster we created has increased from two nodes to three. But what you do is you use the command terraform with a set of subcommands. And so if I just run terraform, you can see there's a whole set of subcommands underneath it. And in this case, we'll first go with terraform plan.
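Pieced together, the kind of main file just walked through looks roughly like this in HCL. This is a trimmed sketch, not our exact demo file: the resource names, CIDR, key pair name, and instance count are illustrative, and the floating IP association is omitted for brevity:

```hcl
# Create a network, then a subnet that references back to it.
resource "openstack_networking_network_v2" "swarmnet" {
  name           = "SwarmNet"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "swarmsubnet" {
  network_id = "${openstack_networking_network_v2.swarmnet.id}"
  cidr       = "10.0.50.0/24"
  ip_version = 4
}

# Router uplinked to the external network pulled from the vars file.
resource "openstack_networking_router_v2" "swarmrouter" {
  name             = "SwarmRouter"
  external_gateway = "${var.external_gateway}"
}

resource "openstack_networking_router_interface_v2" "swarmif" {
  router_id = "${openstack_networking_router_v2.swarmrouter.id}"
  subnet_id = "${openstack_networking_subnet_v2.swarmsubnet.id}"
}

# Five instances, each customized via a cloud-config user data file.
resource "openstack_compute_instance_v2" "swarm" {
  count     = 5
  name      = "swarm-${count.index}"
  image_id  = "${var.image_id}"
  flavor_id = "${var.flavor_id}"
  key_pair  = "demo-keypair"
  user_data = "${file("cloud-config.yml")}"
  network {
    uuid = "${openstack_networking_network_v2.swarmnet.id}"
  }
}
```

The `"${...}"` interpolations are how one resource refers back to another it depends on, which is also how Terraform works out the creation order.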
And what terraform plan does is say: go look at the infrastructure that you're configured to use, right? And while I'm talking, I'm going to do something that I forgot to do last time. Okay, so go look at the infrastructure and tell me what you're going to have to do to make the infrastructure look like what I've asked it to look like, right? So lay out a plan for how you're going to do that. And so when I ran terraform plan, it printed that plan out for me, and it was really very fast; I don't know if you guys managed to catch that. But basically, it goes through and finds out: are there any instances out there that I already know about, are the networks out there, whatever. It basically goes out, looks at the infrastructure, and says, okay, there are 14 things that I need to add, none that I need to change, and none that I need to destroy. And so it's telling me: when you go and run apply, this is what I'm going to do, right? So if I run terraform apply, then you're going to see it start spinning up instances, right? It's creating a router, creating the network, so on and so forth. While that's happening, we'll come back over here. You can see that my Docker Machine demo is finished. It's provisioned another node. If I run... this is just to set my Docker environment variables. If I run docker info again, the easiest thing to look at right here is that the CPU count has increased to three, whereas it was two. So now we have three nodes. We see the master, we see node one, and we see node two. So we've provisioned a Docker Swarm cluster using Docker Machine. We didn't have to know any of the commands to run the Swarm containers or any of that; it does all of that for us, right? So if we run docker ps, it doesn't show us anything, but that's okay. It shows the pieces where I joined the cluster and all that, right?
So you can see all the containers are there, and then I could use docker-machine rm to just remove the instances that we created, right? So I see that I have a set of instances here; it now shows me three, okay? You see the master and then you see the two nodes, and I could use docker-machine rm. And in that case, it would go and remove the instance and delete the private key, which in this case was an auto-generated key rather than a pre-existing key, because remember that thing I told you about. And then it cleans up after itself, and I'll show you that in just a moment. But let's look at our Terraform demo. All right, so Terraform's still moving along here. I'll just scroll back so you can see all the things that it's been creating, right? So it's been launching instances, creating floating IPs, all of that, okay? Now it says, okay, I'm done, and then I've output one of the IP addresses. The output file's actually supposed to spit out five addresses, but I'm not an expert coder and I can only get it to spit out one, so there you go. But that's okay. What we can do is go over to our OpenStack console, and if I refresh in here now... maybe I should make that just a tad smaller. You guys can still see that, okay? Not really? Okay, there we go. So you'll see now I have a set of five nodes here that did not exist before, right? Before, we just had the Docker Machine pieces, the master and nodes one and two, and now I have a whole set of CoreOS instances running, and I chose CoreOS just to save me the trouble of provisioning Docker on there. And so these have all been customized according to the cloud-init file that I supplied, and what that cloud-init file did, and which means I forgot a step and things are going to break, but that's okay, what it was supposed to do was go ahead and configure etcd2 to create a new cluster, and then it would have registered all the Swarm machines against the cluster.
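The pattern that cloud-config file followed is the one CoreOS documents: stand up etcd2 from a discovery URL, start it, and expose the Docker engine on a TCP socket. As a sketch, with the discovery token left as a placeholder (it must be freshly generated for each cluster, which is exactly the step I missed):

```yaml
#cloud-config
coreos:
  etcd2:
    # Placeholder -- generate a fresh URL from https://discovery.etcd.io/new
    discovery: "https://discovery.etcd.io/<token>"
    advertise-client-urls: "http://$private_ipv4:2379"
    initial-advertise-peer-urls: "http://$private_ipv4:2380"
    listen-client-urls: "http://0.0.0.0:2379"
    listen-peer-urls: "http://$private_ipv4:2380"
  units:
    - name: etcd2.service
      command: start
    # Make the Docker engine listen on TCP, which is not the CoreOS default.
    - name: docker-tcp.socket
      command: start
      enable: true
      content: |
        [Unit]
        Description=Docker socket for the remote API

        [Socket]
        ListenStream=2375
        Service=docker.service
        BindIPv6Only=both

        [Install]
        WantedBy=sockets.target
```

Because the same file is passed as user data to every instance, each node comes up already joined to etcd2 and reachable over Docker's TCP port.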
I forgot to change the discovery URL, so that's going to break; it'll have stale IPs and all that. But I'll show you what the cloud-config file looks like, and you guys can do this in your own environments. And so it's all done, and now I have all the stuff here. It shows me that I added 14 pieces, I didn't change any, and I didn't destroy any. And at this point, you've got the entire infrastructure created. So if I flip back over to OpenStack, I can show you where it created the network that did not exist before. So it created a network called SwarmNet with an associated subnet. It created a router called SwarmRouter and attached it to the uplink. And then you saw already that it launched all the instances, all the CoreOS instances, and associated a floating IP with each of those, right? And if I were to then right now go in and do... I'll show you what this looks like. So this is just a simple shell script that I wrote. You put in the appropriate IPs, the floating IPs that were assigned to the instances you created, and it would go through and run the necessary commands to pull down the Swarm container, run the swarm join commands, join everything up, and it would turn up an entire Swarm cluster. Now in this case, again, my mistake: I didn't change my discovery URL, so it wouldn't work. The cloud-config looks like this. This is pretty standard cloud-config stuff. CoreOS does a pretty good job of documenting what they're doing on the back end and the kind of support they've added. So we stand up an etcd2 cluster, then we tell it to start, and then we modify the Docker engine so that it listens on a TCP socket, which is not the default for CoreOS. And so that was automatically applied to all those instances as they ran, and so basically with that one command, we were able to spin up the entire piece. So I'll stop there.
Open it up for any questions that you guys may have on any of the things we've shown thus far. And I have no idea how we're doing on time. What about you, Trevor, do you know how we're doing on time? Let me see. We're running until ten after? Yeah, we're right up on the... Right up on time. Okay, well, we probably have time for a couple of quick questions if you guys have any, or feel free to catch us afterwards; we'll step out so the presenters for the next session have time to get set up. Any questions on anything we've shown you guys? You guys are absolutely spellbound. Well, if you want to try any of the code you've seen here today, there's the repo up there, and please feel free to reach out to myself and Scott. We're both on Twitter and various social media outlets. So everybody get the cameras out and take the pictures, and thank you for attending the talk. All right, thanks, guys.