Hello. Thanks for joining. So this session is on using Terraform with OpenStack. This isn't going to be an interactive session or a workshop; this is going to be me showing you what Terraform is. My goal is that by the end of the session you'll be able to leave, download Terraform, and use the docs more as a reference rather than an instruction guide. On the agenda for today: I'm not going to be covering what cloud is or what OpenStack is. I will be covering what Terraform is, how to use Terraform with OpenStack, and how the OpenStack support in Terraform works. So who am I? My name is Joe Topjian. I'm from Calgary, Alberta, which is in Canada. I work for an organization called Cybera. Cybera is an NREN, a national research and education network provider. It's similar to SINET in Japan, AARNet in Australia, Internet2 in the States, SWITCH in Switzerland, and SURFnet in the Netherlands. So if you're familiar with any of those types of organizations, Cybera is similar to those. This is just a quick map of all the other organizations that make up the Canadian NREN, and I'm the red arrow there. I wanted to point out that I don't actually work for HashiCorp; I have no relation to HashiCorp. I helped add the OpenStack support to Terraform earlier this year, and the reason I'm giving this talk is that I think Terraform is a great tool. It's allowed me to do a lot of pretty cool things, and I wanted to share that. So with that, Terraform. Terraform is a command-line tool. It's a client-side tool that allows you to declaratively create cloud resources. If you're familiar with Puppet, it's kind of like Puppet. If you're familiar with Vagrant, it's kind of like Vagrant. And if you're familiar with Heat, it's kind of like Heat. So it's a really interesting combination of those three types of tools. As a command-line tool, it's written in Go, and there are pre-compiled binaries available for Mac, Linux, Windows, FreeBSD, and OpenBSD.
And if for some reason you're running a different platform, you can compile the Go code and run it on that. Also, because it's Go, it has no dependencies or prerequisites. It comes as a single binary, and it's distributed as a zip file. So Terraform allows you to create cloud resources. You declare those resources in a source file. Terraform supports multiple cloud providers and services, and your infrastructure declaration can use those multiple cloud providers and services. So you don't have to work on one cloud exclusively; you can intermix the different cloud providers that Terraform has support for. For example, you can have an OpenStack instance that registers DNS records with Amazon Route 53. As an example of a full deployment, let's say you wanted to create a large mail infrastructure: you can register the DNS MX records with, say, Cloudflare, you can have a data analytics system on AWS for spam filtering, and then you can have the actual mail storage on OpenStack with instances attached to block storage volumes. Another quick example: if you have a web infrastructure that spans multiple OpenStack clouds, you can have DNS records with AWS. In each OpenStack cloud you can have a three-tier web stack with a front-end load balancer, an application tier, and a database tier, and Terraform can orchestrate the deployment and provisioning of each of those web stacks in each of the OpenStack clouds. I should also mention that there's going to be a lot of code in this presentation. I've tried my best to make it look presentable; I apologize that there's going to be a lot of text. So when I mentioned declarative infrastructure, as an example of that, the first two lines here are just reading a crontab file, and the output of that crontab file has a structured data form. If you're not familiar with the crontab format, you really wouldn't have any idea what that means or what the specific fields stand for. The second example is from Puppet.
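As a rough sketch of the OpenStack-plus-Route-53 idea just mentioned, a configuration might look like the following. This is a hedged illustration, not from the talk's slides: the resource names match the provider docs of that era, but the zone ID, hostnames, image, and flavor are all placeholders.

```hcl
# Illustrative only: an OpenStack instance whose address is
# published as a DNS record in Amazon Route 53.
resource "openstack_compute_instance_v2" "web" {
  name        = "web"
  image_name  = "ubuntu-14.04"   # placeholder image
  flavor_name = "m1.small"       # placeholder flavor
}

resource "aws_route53_record" "web" {
  zone_id = "EXAMPLEZONEID"      # placeholder hosted zone
  name    = "web.example.com"
  type    = "A"
  ttl     = "300"
  # Reference the instance's address; Terraform resolves this
  # after the instance is created.
  records = ["${openstack_compute_instance_v2.web.access_ip_v4}"]
}
```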
If you're familiar with Puppet's declarative language, it's kind of readable. Even if you had no idea what the top two lines did, you'd have a better idea of what the bottom example does. So that's one part of declarative infrastructure. The second part is that you don't really need to know the back-end implementation. You don't need to know how the crontab is implemented. With the top example, you have to understand that the crontab goes in the /var/spool/cron/crontabs/root file. With the second, you don't need to know that to declare it; you just declare it. Terraform takes a similar approach. The declarative language looks pretty similar to Puppet's. If this is the first time you've seen a Terraform declaration, you might not understand exactly what the syntax is, but just from reading it you can understand a little bit of what it's doing. In this case, it's creating two cloud resources: a floating IP and a block storage volume. The floating IP comes from the public pool. The block storage volume has a description of "first test volume" and a size of three, and in this case three would be three gigabytes. Compared to Puppet, Terraform resources are at the infrastructure level rather than the system level. With Puppet, you're more or less working with packages, users, files, things like that. With Terraform, you're working with whole systems, such as instances, block storage volumes, and floating IPs. When you install Puppet on a system, you can use it to inventory the system: you can see what packages have been installed, what users have been created. Terraform can't do resource discovery yet. If you were to use Terraform and supply your cloud credentials, it's unable to inventory what you had existing. It's able to report on the resources that you created with Terraform, but there's no other discovery yet.
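The two resources just described (a floating IP from the public pool, and a three-gigabyte volume with the description "first test volume") would look roughly like this in the OpenStack provider's syntax; the resource labels are my own:

```hcl
# A floating IP allocated from the "public" pool.
resource "openstack_compute_floatingip_v2" "fip" {
  pool = "public"
}

# A block storage volume; size is in gigabytes.
resource "openstack_blockstorage_volume_v1" "volume" {
  name        = "first_test_volume"
  description = "first test volume"
  size        = 3
}
```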
And it is technically possible to write custom types and providers with Puppet, if you're familiar with it, to kind of do the same thing, but for the sake of this discussion we'll ignore that use case. Compared to Vagrant, Terraform can provision more than just virtual machines. With Vagrant, you provision virtual machines in a source file, and you can specify multiple virtual machines. Terraform can specify multiple virtual machines as well, but also more than virtual machines. And even though Vagrant can specify multiple virtual machines, it can't really relate those virtual machines together. Terraform can express a relationship between not only virtual machines, but other resources too. And compared to Heat, Terraform is a command-line tool, not a service, and Terraform can operate on more than just OpenStack clouds. So what resources are available in Terraform for OpenStack? On the compute side of things, Terraform supports instances, floating IPs, key pairs, security groups, and server groups. For block storage, it supports volumes. For the networking side, Neutron, it supports networks, subnets, routers, router interfaces, and floating IPs. And if you use Nova Network, Terraform supports that as well; it's just taken care of implicitly through the instance resource. For Load Balancing as a Service, it supports monitors, pools, and virtual IPs. For Firewall as a Service, it has firewalls, policies, and rules. And finally, for object storage, it supports containers. So how do you use Terraform? As I said, it's a Go-based application, so all you really need to worry about is whether your workstation is able to read zip files. Once you have that down, you download the zip file. You can either run the binaries directly from that directory, or you can move them to somewhere on your path. Once that's done, just run the terraform command and make sure it's executing properly; that last example is just echoing what version of Terraform it is.
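One thing the talk doesn't show until the Q&A is that a configuration also needs a provider block with your cloud credentials before any OpenStack resources can be created. A minimal sketch, where the endpoint and credentials are placeholders (in practice these usually mirror the OS_* environment variables from an openrc file):

```hcl
# Placeholder credentials; substitute your own Keystone endpoint
# and tenant/user details.
provider "openstack" {
  auth_url    = "https://keystone.example.com:5000/v2.0"
  tenant_name = "my-project"
  user_name   = "demo"
  password    = "secret"
}
```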
Once you have it installed, you want to create a Terraform configuration, and that's just done through a plain text file. Terraform configurations are added to a file with an extension of .tf, and any text editor is good for that. So inside this main.tf file, you want to start declaring your resources. This is a simple example that just declares three resources. Again, if this is the first time you've seen a Terraform configuration, you might not understand the exact syntax, but you can sort of figure out what's going on. In this case, we're creating three different resources. Sorry for the different lines there, but highlighting the three resources: there's a key pair called myKey, a floating IP just called f, and a compute resource called myVM. You can see for the key pair and the instance there's also a name attribute as well, and there are different reasons why you'd want to use that, but for now we'll just stick to the name in the highlighted area. The third resource, the instance resource, references the first two resources that were made. The instance references the key pair and the floating IP using a variable notation. This is one way that you can relate resources together: you're not actually specifying their static names, you're referencing them through a variable, and Terraform will look up that information when it's ready. Because of this, Terraform creates an implicit relationship between the three resources. Before it creates the virtual machine, it knows that it first has to create the key pair and the floating IP. It'll do those first, and since there's no relationship between those two resources, it'll create them in parallel. Then, once those resources have been confirmed to be created, it'll go ahead and create the instance.
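A configuration along the lines described, with an instance referencing a key pair and a floating IP through variable notation, might look like this. The labels, image, flavor, and key path are illustrative, and the syntax follows the provider from around the time of this talk:

```hcl
resource "openstack_compute_keypair_v2" "my_key" {
  name       = "my_key"
  public_key = "${file("~/.ssh/id_rsa.pub")}"
}

resource "openstack_compute_floatingip_v2" "fip" {
  pool = "public"
}

resource "openstack_compute_instance_v2" "my_vm" {
  name        = "my_vm"
  image_name  = "ubuntu-14.04"   # placeholder; Terraform resolves the ID
  flavor_name = "m1.small"       # placeholder
  # Referencing the other two resources creates an implicit
  # dependency: the key pair and floating IP are created first.
  key_pair    = "${openstack_compute_keypair_v2.my_key.name}"
  floating_ip = "${openstack_compute_floatingip_v2.fip.address}"
}
```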
So once you have your resources declared in a source file, Terraform has a sub-command called plan. Plan will print out exactly what Terraform plans to do; this output has just been shortened to the instance. You can see that there are a lot of computed fields. Those computed fields aren't really known yet: we didn't specify a v4 address or a v6 address, and it's also referencing the floating IP, which we don't know yet either. The key pair in this case is resolved because we gave it a name of my key, and there's really not much else to know about it. We gave it a flavor name and an image name, but we don't actually know the IDs of those; Terraform will resolve those for us later on. Once your plan looks good, you can go ahead and apply; Terraform has a sub-command called apply. Here you can see that the key and the floating IP were created first, and that the instance was created after those were successful. Once everything's been created, Terraform has a show command, and it can show you the final result of what was created. Again, this is just showing the output for the instance. Highlighting those areas, you can see that the computed fields have now been resolved. We now have definitive values for those, which you can then use for reporting, for sub-commands that you run, or to relate them to other resources. When everything's done, if you want to tear everything down, Terraform has a destroy command, and it'll go ahead and clean up everything that was declared in your file. So that was creating and destroying resources. Terraform is also able to provision resources. Provisioning basically means that it will connect to the resource in some manner and provision it, when that's applicable. For example, it really doesn't make sense to provision a floating IP address, because there's not much you can do with it.
But for instances, it makes a lot of sense. As for the components that make up provisioning: there's a connection block, and this basically tells Terraform how to connect to the instance. There's a file provisioner, which will copy one or more files to the resource. Local execution will run commands locally on your workstation, and remote execution will run commands remotely on the resource itself. There's also a Chef provisioner, and that's not because HashiCorp supports Chef in any way; it's just a community-contributed provisioner. If anybody wanted to see Ansible or Puppet or anything like that in Terraform, they're more than welcome to submit the code for it. So, the connection block. Any provisioning is done inline with the resource, and you can see that the connection here is just specifying a custom username to log in with over SSH. It's specifying the location of a key file, and it's specifying an IP address that it should connect to. In this case, the IP address is in that variable notation again, so it's not going to know how to connect until the address is resolved. The file provisioner works just like rsync: you specify a source and a destination. The source could be a single file or it could be a directory. If it's a directory, it'll copy the entire directory over, and if it's a single file, it'll just copy that individual file over. Local-exec runs commands locally on your workstation. The example here is just adding an entry to your /etc/hosts file, so it's a simple example of something like that. And finally, remote-exec runs commands remotely on the resource. In this case, it's running a series of commands: apt-get update and apt-get install apache2. You're able to use multiple provisioners per resource; you can mix them, whether you want remote-exec or local-exec.
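Putting the pieces just described together, a provisioned instance might be sketched like this. This is illustrative, using the connection syntax from that era of Terraform (key_file was later renamed), and the paths and commands are placeholders:

```hcl
resource "openstack_compute_instance_v2" "web" {
  name        = "web"
  image_name  = "ubuntu-14.04"   # placeholder
  flavor_name = "m1.small"       # placeholder

  # How Terraform connects to the instance; the host isn't known
  # until the floating IP resource is resolved.
  connection {
    user     = "ubuntu"
    key_file = "~/.ssh/id_rsa"
    host     = "${openstack_compute_floatingip_v2.fip.address}"
  }

  # Copy a file or directory to the resource, rsync-style.
  provisioner "file" {
    source      = "conf/"
    destination = "/tmp/conf"
  }

  # Run a command locally on your workstation.
  provisioner "local-exec" {
    command = "echo '${self.access_ip_v4} web' >> ./hosts"
  }

  # Run commands remotely on the instance.
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y apache2",
    ]
  }
}
```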
You can also have multiple remote-exec and multiple local-exec provisioners, and multiple file blocks as well, so you're not restricted to just one, or one of each. So those are the basics of Terraform, and now I'll just go over some examples of using Terraform with OpenStack in some of the more advanced ways you can use it. This configuration here is creating a block storage volume and an instance, and it's attaching the block storage volume to the instance; you can see how that's done here. The top section is declaring the block storage volume, and the second highlighted section is referencing the ID of that created volume and attaching it. Once this is all done, if you were to run terraform show, you can see that there's some reporting done on it. You can see the device letter that it was attached as, and you can see the volume ID as well as the attachment ID, which nine times out of ten are the same thing, but they could actually be different. You can also have multiple resources of the same kind. All of the previous examples created just one instance or one block storage volume, but it is possible to create multiples of each of those resources, and you can declare variables in Terraform. This one is declaring a variable called count. You don't have to name it count, but the standard is to name it count. Here we're giving it a default value of four. When you run Terraform, it will prompt you and ask if the default value is okay or if you want to use something different. Let's say we did choose four: in this case, four block storage volumes would be created, and that's done through the count attribute. The next attribute here is using a built-in Terraform function called format, and it works very similarly to printf and other functions like that.
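The volume-attachment configuration described at the start of this section might be sketched as follows; the names and size are placeholders, and the inline volume block is the attachment syntax the provider used at the time:

```hcl
resource "openstack_blockstorage_volume_v1" "data" {
  name = "data"
  size = 10   # placeholder size in GB
}

resource "openstack_compute_instance_v2" "vm" {
  name        = "vm"
  image_name  = "ubuntu-14.04"   # placeholder
  flavor_name = "m1.small"       # placeholder

  # Attach the volume created above by referencing its ID.
  volume {
    volume_id = "${openstack_blockstorage_volume_v1.data.id}"
  }
}
```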
So what we're doing is using the count index to generate a name of glusterfs- plus the count index, so it's going to create block storage volumes named GlusterFS01, GlusterFS02, GlusterFS03, and 04. And this is why each resource has a resource name but there's also a separate name attribute, so you can do custom things like this. You can always reference the group of resources by the resource name, but if you want to reference the individual block storage volumes, you would use the name attribute. Similarly for the instances: we're creating four different instances, we're giving each a name the same as the volume's, and we're attaching each of the volumes to each of the instances. So in this case, it's going to attach the first block storage volume to the first instance, the second block storage volume to the second instance, and so on. The end result of this is that we'll have four different block storage volumes and four instances, and each of those instances will have a block storage volume attached to it. Terraform can use multiple providers, so multiple cloud providers. In this case, we're creating a resource, a virtual machine or an instance, called console, and we're going to create four of them. We're giving it a name very similar to how the past example was done: it's going to have console- and then the index. But the second resource is an Amazon AWS Route 53 record, and it's actually only going to create one of them, because we're not using the count attribute. What it is doing is collecting the IPv6 addresses of the four instances and adding them to the single DNS record. Again, it's using a built-in function called replace, which is just stripping out the brackets. If you're familiar with IPv6 addresses, sometimes they have bracket notation and sometimes not, but it's a pretty slick thing to do, and if this was actually created, you could run a DNS utility and see that all four IP addresses were in fact created.
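The count-and-format pattern from the GlusterFS example above might be sketched like this. It's illustrative rather than the talk's exact slide: the sizes and image are placeholders, and element() is used to pair each volume with its instance by index:

```hcl
variable "count" {
  default = 4
}

resource "openstack_blockstorage_volume_v1" "gluster" {
  count = "${var.count}"
  # format() works like printf: glusterfs-01, glusterfs-02, ...
  name  = "${format("glusterfs-%02d", count.index + 1)}"
  size  = 10   # placeholder
}

resource "openstack_compute_instance_v2" "gluster" {
  count       = "${var.count}"
  name        = "${format("glusterfs-%02d", count.index + 1)}"
  image_name  = "ubuntu-14.04"   # placeholder
  flavor_name = "m1.small"       # placeholder

  # Attach volume N to instance N.
  volume {
    volume_id = "${element(openstack_blockstorage_volume_v1.gluster.*.id, count.index)}"
  }
}
```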
So how does the OpenStack support work in Terraform? Once again, Terraform is a Go-based application, and since Terraform is written in Go, adding OpenStack support needed a Go-based OpenStack library. In this case, it uses Gophercloud. Gophercloud is a project that came out of Rackspace, and it's still hosted under Rackspace's namespace. Terraform includes everything that Gophercloud supports. Unfortunately, Gophercloud doesn't support every feature and every project in OpenStack; it currently only has support for a subset of those. You've heard me mention things like Cloudflare DNS or Amazon Route 53 DNS. The reason I haven't mentioned Designate is simply because Gophercloud doesn't have support for Designate. Similarly, I haven't mentioned anything about Glance, because there isn't Glance support at this time in Gophercloud. Gophercloud does support Heat, but Terraform doesn't support Heat, because that would be a little bit weird. So that's the one difference between the two. If you're interested in digging into the actual code to see how it works, or if you wanted to add further support to Terraform, it's actually really easy. Before I was working on this, I had no clue how to write anything in Go, but Go is a pretty simple language, and it's done cleanly in Terraform, which makes things much easier. If you were to look into the code base, you would dig down into the builtin/providers/openstack directory, where you would see all of the resources that are available. And if you were to open one of those resources up, you'd see a file that begins like this. Each of the resources is broken down into four basic functions: Create, Read, Update, and Delete. If you've built any type of web app or any type of REST API, you're very familiar with these four types of functions, and the fact that Terraform has broken all resources down into these four basic functions makes them very easy to work with.
In this case, this is the floating IP resource, and you really can't update a floating IP, so Update isn't applicable here, but the other functions are. Digging down even further, this is an example of the floating IP Create function, and you can see it's not very complicated at all. It's maybe 20 lines of code, and all it's doing is using Gophercloud to provision a floating IP address. The entire floating IP resource is probably 120 lines of code, so it's very simple. That's the easy example. The most complex example would be the instance resource, which is about 1,100 lines of code, so it can get a little bit complex, but for the most part it's actually pretty simple. And with that, that was fairly quick and simple. Are there any questions about this? Yes? Right, yeah, there are flags where you can limit how many are done in parallel. By default, Go is going to use however many cores are on your CPU, and unless those resources are restricted by a related resource that still needs to be created, it will use that number and do them in parallel. So there is a flag to modify the amount done in parallel, but by default it's going to use as many cores as are on your CPU. Right, so terraform.io — I just noticed I didn't actually have the URL in this presentation, but the homepage is terraform.io, and there's a docs section that has the complete reference to all the providers and their attributes. Yes? One thing I didn't show, because I wanted to keep this fairly simple, is that there is a separate block called a provider block. You can specify multiple provider blocks in a configuration, and that's where you would have the configuration for your different clouds. So it's pretty easy to do it that way. Yes? Sure, I'm sorry? Yeah, I'm sure that's really possible to do. I think Terraform would just error out before it actually tries to do something like that.
The resource graph in Terraform is actually really good, so I think it would just tell you up front that there's a circular dependency, and it wouldn't proceed further. Yes? It is, when it makes sense, and by that I mean: if you were to update your Terraform configuration with a different image ID, it's going to regenerate the instance with the other image, but if you were to just update the name, it would simply update the name. So when it makes sense, it is idempotent. Yes? So I haven't actually used the Chef provisioner, but I have used several instances of remote-exec and things like that, and also mixed it with cloud-init. cloud-init will always run first, and then, once SSH is available, the other provisioners will run. So I have played around with that, and if you have multiple provisioners, they will run in a top-down format; there's no way, as far as I know, to order them, so if you have multiple remote-exec blocks, they'll just go in a top-down fashion. Did that answer your question? Yeah. Yes? Windows is possible, and there are specific provisioners for Windows in Terraform. I think it uses WinRM, so rather than SSH it's going to use something more Windows-native. No, and that's actually a common request. You're going to have a lot of repeated code and boilerplate code, and unfortunately it's not possible at this time to break provisioners out either into their own resources or into global attributes that would then be run on each of the instances. So in that case you're going to have a lot of repeated provisioning code, unfortunately. Absolutely, yeah. Provisioning, I'm pretty sure, is going to end up being a first-class resource in Terraform. Any other questions? Yes? Yeah, right, this one, how? Yeah, Terraform has a concept called modules. You can create a standard configuration, put it in a directory — that would be a module — and then you would interface with that module using variables.
So if you wanted to have, say, a console module, and you just wanted the user to specify, say, the image ID and how many they want, then only those two variables would be exposed by the module, and then you can reference that module in your other configuration. First it would run the module, and then it would run everything else. So there is a way to sort of package configurations up and reuse them. Yep, it will do that, yeah. Since it's a client-side application, though, there's no way to auto-scale; you would manually rerun the command, but yeah, it will either increase or decrease the amount that you have running. That's a good question. I guess since they're identical it doesn't really matter, but honestly I'm not sure. That's a good guess. That is correct, yes. They would have to be two separate blocks, one for each cloud. Let me see if I can get to the example here. Right, so this would just deploy them into one cloud, and if you wanted to reference the other, you'd have to specifically mention the provider that you were using, so that it links to the certain cloud, and you would have two different blocks of resources. So the provider becomes part of each resource? Yeah, exactly. So it's one provider? One provider stanza, yes. Mm-hmm, yes. So that's an interesting question, and I'll be totally unbiased: I've submitted a lot of code to Gophercloud, and they've really helped me learn Go and reviewed my code, and I think they're a fantastic team, but unfortunately I've seen the development slow down significantly within the past six months. Gophercloud does have a lot of open pull requests at this moment, and I'm not sure exactly why it slowed down, but it has. It's still actively developed, but they haven't been accepting patches as quickly as, say, six months ago. As for the Go language, Gophercloud is basically the most full-fledged OpenStack library written in Go. So it's a Go binding to...
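The two-clouds answer above can be sketched with provider aliases. This is a hedged illustration: the endpoints and labels are placeholders, and the alias mechanism is the one documented for Terraform around this period.

```hcl
# Default OpenStack provider for cloud A (placeholder endpoint).
provider "openstack" {
  auth_url = "https://cloud-a.example.com:5000/v2.0"
}

# Second provider stanza for cloud B, distinguished by an alias.
provider "openstack" {
  alias    = "cloud_b"
  auth_url = "https://cloud-b.example.com:5000/v2.0"
}

# Uses the default provider, so it lands in cloud A.
resource "openstack_compute_instance_v2" "a" {
  name = "vm-in-cloud-a"
}

# The provider attribute on the resource points it at cloud B.
resource "openstack_compute_instance_v2" "b" {
  provider = "openstack.cloud_b"
  name     = "vm-in-cloud-b"
}
```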
Exactly, yep, yep. Any other questions? All right, well thank you guys very much. Thank you. Thank you.