Thank you for taking the time to hang out with me. If I can keep the lights out of my face so I can look at you, I hope we can have some fun today, maybe learn a thing or two. I hope you're sitting nice and comfortable. Let's pour another drink. Let's get going.

I have to admit I'm very, very sick of the cloud. Not necessarily the ideas it represents or the promises of the cloud, but the terms, because I do a lot of presentations about cloud, also to non-technical people, and then I have to go through all the fluff. I have to talk about SaaS and PaaS and IaaS, about public and private and hybrid, and all this marketing mumbo-jumbo. I'm very much sick of all the fluff that gets told.

There's this one picture I always use, and for lack of time I won't go through the entire spiel, but I usually ask people what's in that picture, and they always tell me: I see a plug and I see a socket. In fact, you see a person vacuum cleaning, and what this picture represents is a level of abstraction. Everything behind that wall is none of that person's concern. It's also a metaphor for core business: core business at that point is vacuum cleaning. Maybe you have some visitors in the evening, you're organizing a party, and you want the house to be spotless, you want it to be clean, so you do your business. But there are so many layers of technology and complexity behind the simple act of vacuum cleaning that we're not aware of. We're not aware of how the wires are connected behind the socket, how that is connected to the fuse box, how that is connected to the outside world, the grid, and what kind of power plants generate the electricity we use for the sheer act of vacuum cleaning. We don't know, and that is very much a representation of the cloud: you apply a level of abstraction. That is what you do, and what you gain from that is flexibility, and you leverage that flexibility to generate operational stability. That is the promise of the cloud, and to enforce it you need tons of automation.

This talk is an automation talk about cloud platforms. You might know there are many, many cloud platforms out there. Just to give you an idea, these are a couple of popular ones: some from external vendors, some you can run in-house, like OpenStack or VMware or what have you. Those are the big ones, and every single piece of technology has its own automation suite, its own rules and tools, and that makes it a lot more difficult to standardize. And once you pick one, you're usually going to stick with it, because the effort you've put in to make it happen is so enormous, and the budget you've assigned to it, in terms of development, R&D, setting it up, and keeping it maintainable, is quite a lot. As a consequence, you're mostly locked in to that vendor, and that's the sweet spot where most of them want to keep you.

Now, I'm going to talk about a couple of tools: primarily Terraform, also Packer, and then Ansible as a sort of utility tool to glue it all together. These tools should help us build, provision, and orchestrate vendor-agnostic cloud environments. That's the goal.
There are some other goals as well. What we want to achieve by the end of this presentation, and I hope you'll agree with me on that, is less vendor lock-in, being in a much more comfortable position to switch from provider A to provider B; faster deployments; better scaling based on dynamic capacity; and better reproducibility of our environments, by just cloning them through automation so that we have staging, test, and development environments, with little effort, that are a valid copy of our production environment. Also, by doing lots of automation we reduce human error, which is quite a factor in our industry. This is essentially the promise of the cloud. One part I do not believe in, and people who host stuff in the cloud these days will know, is lower cost. The flexibility to spend what you want is there, but lower cost is not really there. You can get lower cost if you automate more: if you have fewer errors and fewer outages, you'll have a level of operational stability, and the price you pay for that is less than doing it all manually. That I agree with.

Now, that being said: hi, my name is Thijs. That is indeed how you pronounce it. I'm a Dutch-speaking Belgian. I'm @ThijsFeryn on Twitter; please follow me there. This is an experiment I do at every talk: I always mention my Twitter handle, and every time I go through my Twitter profile after my talk I see a slight bump. Just a slight bump. So let's see if it applies. If you have questions afterwards (I'll skip Q&A entirely and use the sixty minutes for the talk) I'll be hanging around here as well, so if you want to ask some questions or get a conversation started, grab me then, or otherwise get me on Twitter. I'm a technical evangelist at a Belgian web hosting company called Combell. I also work at Sentia; both are part of the same group.
One serves the SME market, the other is enterprise. We more or less dominate the market in Belgium, the Netherlands, and Denmark. It has nothing to do with the UK, so you might never have heard of us; that's perfectly fine. I'm also the author of Getting Started with Varnish Cache, an O'Reilly book that was published last year. So I'm not just about cloud automation; I like caching and everything at the edge as well.

I'm glad to be here, and I am particularly glad to be in the UK. This is my 185th talk, the 17th I'm doing in the UK, and some of the best experiences I've had were in the UK. So I'm really grateful that you took the time out of your schedule to check me out, and I hope we can have a good time. It's my 11th time in London and my fourth time here at the Brewery, a venue that I love. So let's get going. If you like or hate this talk (could be either, could be both, depending) please give me feedback. Give me feedback about the good parts, but also about the bad parts, not just for me but for the organizers as well. I would love to do this talk at a future conference somewhere else in the world, and if you don't agree, give me some criticism; maybe tell me how I can improve. Ready? Yes? Okay, let's do it then.
We're going to do three things today: we're going to build, we're going to provision, and we're going to orchestrate. Those are all fancy terms. Basically, the building part is that we take application code and bake it into a virtual machine image. It comes with hype terms like immutable infrastructure, but I won't go that far, or be that arrogant, to call it all that. Throughout this presentation we're going to take some application code and put it in a virtual machine image, and we're going to use Packer for that; that's a technology I'll discuss today. Then we'll pour some proverbial sauce on it: that's the provisioning part. We'll pour some sauce on our little cake, the thing we're baking, and that should give us the full stack, all the software required to run the application code that is in our VM. We'll use Ansible to do that. And once we have our VM, we need to serve our dish in the clouds, and that is the orchestration part, for which we'll use Terraform. Loud and clear? Okay, next steps.

All of this, and that's the beauty of it, falls under the infrastructure-as-code umbrella. That means all the configurations we use to build the virtual machine image and put it in the cloud are readable text files, so to speak. They could be an essential part of the Git repository where our application code lives, or a separate repository; it could be either, depending on how you're organized. Our mission is to create disposable stacks and reproducible environments, so let's do just that and introduce our very first technology.
It's called Packer. Packer is an open source product, but there's an enterprise suite around it, built by the people of HashiCorp. HashiCorp are the people who also invented Vagrant. Who's used Vagrant before? Yes, I thought you would know that. Well, that's the same company, and it's actually an interesting company: they're ahead of their time, figuring out problems that can occur in development, in ops, and in the cloud, and building vendor-agnostic tools to solve them. Packer is just a single binary, written in Go, and what it does is interact with cloud providers: it can build VMs on any of those, and I think the list is growing; maybe the list on the slide isn't up to date and they have new vendors by now. I will primarily use AWS, Amazon EC2 images, and OpenStack; those are the two I use. We do a lot of VMware as well. So it's quite easy to pick one, and it's a single syntax: a configuration file, a JSON file in which you define what you need. You run the command, it builds, and you have a ready-to-use image at your cloud provider.

So let's do it with OpenStack. Who uses OpenStack in here? I don't expect to see many hands. No one? Just to prove my point, I have an AWS example as well. Who uses AWS in here? See, yeah, okay, I expected that. But just to show you, there are a lot of similarities between them. This is a JSON file in which I define my configuration. You see a building stage, which I marked in red, and a provisioning stage. In the building stage we take a source image; what you see here is the identifier of my source image in my OpenStack installation, and we use it to boot up an instance on which we do all the things we need to do. In this very simple example I'm installing nginx via an inline shell script. Then we need some network settings to be able to connect to that machine, and in the end we turn it into a new image. The image is called thijs_feryn_ followed by some variable interpolation that inserts the current timestamp, and that will be my ready-to-use image, based on a typical Debian Stretch, a Debian 9 64-bit installation, with nginx installed. That will be my machine image. It's just an example, of course. In AWS terms it's very similar, with maybe some subtle differences, like having an AMI name and a source AMI, but the rest is more or less the same. I deliberately left my credentials empty; I'd rather you didn't log on to my Amazon account. For OpenStack you need to refer to the endpoint, the tenant name, and some key settings, and you have to do the same kind of thing for Amazon. So it's very, very simple: you run your command, Packer builds off that JSON file, and you end up with an image that is already registered with your cloud vendor. It's not on your local computer; it's out there, ready to use, ready to orchestrate.

This is a schematic representation of it. What packer build does is read your JSON file and call the cloud APIs, and it actually boots up a server there. It boots up computing power on which it runs all the mumbo-jumbo, in this case a shell provisioning script, then it stops the instance and takes a snapshot.
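To give you a feel for such a template, here is a minimal sketch of the kind of JSON file described above. The source image ID, flavor, network ID, and SSH user are placeholders I made up, not the values from the slides:

```json
{
  "builders": [{
    "type": "openstack",
    "image_name": "thijs_feryn_{{timestamp}}",
    "source_image": "REPLACE-WITH-SOURCE-IMAGE-UUID",
    "flavor": "m1.small",
    "networks": ["REPLACE-WITH-NETWORK-UUID"],
    "ssh_username": "debian"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "sudo apt-get install -y nginx"
    ]
  }]
}
```

Running packer build on a file like this boots the instance, runs the inline script, snapshots the result, and registers the new image under the timestamped name.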
It registers it, and you're all done. You can then use the CLI tools of OpenStack, and if you do an image list, you'll see that image there, ready to use, readily available. Same thing for Amazon: if you run a command like this, you'll see that the VM image is ready to use.

Then we just have to use Ansible to enrich it a bit, because that stupid apt-get install -y nginx command won't cut it; you'll need a slightly more elaborate system, and Ansible is just the thing. Who uses Ansible in here? Who does not? I'd rather know that. I'd say more than 50% use Ansible here, so I'm preaching to the choir, but I'll still do it. It's a configuration management system, very similar to tools like Chef and Puppet and SaltStack, and you'll see CFEngine and other tools as well. What it primarily does is install software and configure your machine, making sure the full stack is ready, so that you have a VM image that is provisioned. And not even just a VM image: you could run it against existing infrastructure in a more traditional setup and make sure that, based on playbooks, a sort of recipe, you have a system that is completely up to spec. The thing I like about Ansible is that it's very lightweight. It's written in Python.
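As an aside, the image-listing commands mentioned a moment ago look roughly like this; a sketch, assuming you have the OpenStack and AWS command-line clients installed and configured:

```shell
# List the images registered in your OpenStack project
openstack image list

# List the AMIs owned by your own AWS account
aws ec2 describe-images --owners self
```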
There are no agents involved; it just sends SSH commands over the wire. You write YAML files, playbooks as they call them, and you run those. They contain instructions about which files should be present, what permissions they should have, which packages should be installed, which services should run, and so on and so forth. There are ways to organize all that, which I'll show in a minute, but it's basically installing software and configuring your server. Again, this is not an Ansible talk, but I'll show you a pretty basic playbook that I used. What you do is match hosts against your inventory; in this case, this playbook could be applied to every server in my inventory. I define some variables: this is my web folder, the code folder that needs to be synchronized onto my VM image in the end. That's just a variable. We set become to true, which is a sort of sudo/su approach so that you have root access, and then we execute a set of tasks. This is very descriptive, right? Just YAML: update our APT repository; install vim, curl, nginx, PHP, rsync; do some folder stuff. You get the message, right? Is this clear? Okay, let's go to the next one. Another thing I'm doing is synchronizing my application based on that variable.
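A sketch of a playbook along those lines; the folder path, package list, and source directory here are illustrative assumptions, not the exact slide content:

```yaml
- hosts: all
  become: true
  vars:
    web_folder: /var/www/html        # the code folder that ends up in the image
  tasks:
    - name: Update the APT cache
      apt:
        update_cache: yes

    - name: Install the base packages
      apt:
        name: [vim, curl, nginx, php-fpm, rsync]
        state: present

    - name: Synchronize the application code onto the machine
      synchronize:
        src: ./app/
        dest: "{{ web_folder }}"
```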
So in the end I'll have nginx, I'll have PHP, my code will be there, and I'm ready to rock. But in real production environments you won't keep it that simple; you'll organize things in a much more intricate way, and I have a tree here that shows how you could typically do that. It's a bit of an abstracted version, but it covers it. In this tree you see a thing called roles, and roles are more or less modules: encapsulated logic containing the variables, the templates, the full-blown installation for a component. That could be nginx, could be MySQL, could be PHP, could be whatever you want to use. It has lots of settings you can configure and tweak without exposing yourself to all the complexity. So you see here that you have some defaults you can set, some handlers to restart services, some meta information, tasks to execute, maybe templates to parse (a virtual host file, a php.ini file, you name it), but that's all nicely encapsulated, and you don't really need to be exposed to it. If you have logic that extends it, you can easily include custom tasks and have custom templates and custom files. Variables you want to extend, you do that either on a host basis or on a group basis. By host basis I mean you can match a host and say that host should have X amount of PHP workers, for example; those PHP workers could be defined in your PHP role and extended at the host level. If you group certain servers, let's say you have four web servers and you create a group called web, then you can define those workers at the group level. So this is more or less the best practice for organizing your Ansible setup, and then you apply it to your virtual machines. In terms of applying it, it is a lot cleaner if your playbook looks like this: you include a couple of custom tasks, and the rest is just roles; the variables are defined elsewhere.

If you ran this separately, without Packer, it would look a little bit like this: ansible-playbook, run your playbook, everything gets bootstrapped, and the end result is a setup that is completely up to spec. You can run that in test, in production, in QA, and you'll have identical environments. But we won't be doing that: we'll replace our provisioning part, where the shell execution used to be, with Ansible. We register our playbook and we're good to go, and in the background Ansible runs a very complicated command; this is the one it ran on my laptop. This is what Ansible will do for you in the end: it matches your inventory dynamically, takes the playbook, registers all kinds of variables, and you're good to go. So instead of the shell script, we're doing much more advanced stuff and running our custom Ansible playbooks.

In the end we'll add a little more magic: a post-processor of the type manifest. A manifest is nothing more than a JSON file where all the build steps are registered; you can use it in your CI/CD pipeline to see what has happened. It stores your build results, with your artifact data, right there in a readable format. This is what it looks like. I deliberately put an Amazon and an OpenStack build next to one another. You can match the last run UUID right here; that's just the last one. Then you see the artifact ID, a unique identifier that was assigned by your cloud provider. In this case, ef90-and-what-have-you is the ID of my freshly baked image on the OpenStack side. And you can use jq (you use jq as well, right? Brilliant command-line tool) to get that artifact ID out of it. And here I can prove it: if I do an openstack image list, that very ID is there. That's the one I can use to orchestrate.

Now, I recorded a video for this, because I don't want to bore you for twenty minutes waiting until it gets baked, so I used some video editing software to speed it up. We do a packer build; I know the font is rather small, I tried my best, but you don't have to read it, you just have to see the movement and the colors to get a sense of it. We're firing off an instance, associating some network settings, assigning an SSH key, and running our Ansible script. We stop the server; once it is fully stopped we create a snapshot of it; the snapshot gets registered as an image; and the end result is what you see here, ef90ad1, our image, created right there and then.

Then we have to do something with that image, and that's the next step, the rest of the presentation: using Terraform to leverage these kinds of things. We could use two terms for it. We could say we'll deploy our image, but I think deployment is way too simple a word, and it doesn't really cover the entire spectrum, because deploying an image would just imply we have a VM, we give it the image, we're done. When you look at it from an OpenStack, Azure, Amazon, or Google Cloud perspective, the term orchestration fits the needs a lot better, because it's more than just setting up a server: it's actually launching a virtual data center, with all the components that come with it. I won't go over all of them, but you probably know that you need to provide IP space and subnets and routing information and security information, all these kinds of things. That is rather complicated, and it differs vendor per vendor, but it's perfectly feasible. There are tools for it in the different cloud vendor products: CloudFormation for AWS, the Heat orchestration suite for OpenStack, Resource Manager at Azure. We won't be using those; we'll be using Terraform, and Terraform directly interacts with the APIs of those cloud vendors. It doesn't use or leverage their orchestration tools; it uses the APIs directly. It's also infrastructure as code, so you read human-readable files that contain all the definitions, and it's vendor agnostic. You can plan in advance, and it will estimate what it needs to do and show you the actions that will be undertaken; you can visualize that in graph form, and then you execute those commands. This is also a tool written in Go, a single binary, also by the lovely people of HashiCorp. You can have multiple providers: you can boot up some stuff with Amazon, some stuff with Azure, you can have some Google instances, and you could in theory make it all work together quite nicely. And again, it interacts with the APIs; it does not leverage their orchestration suites. And there are tons more providers; it's a baffling list of tools you can use. For the sake of this presentation I'll only use AWS, and based on the show of hands and what you guys are using, I think it makes much more sense to stick to that.

So it all starts with a simple .tf file. If you have a directory where you want to put all these configurations, you just put a .tf file in it. I chose main.tf, but call it blah.tf if you want; it will be picked up automatically. And here's our first scenario, code on screen. That is what you like, right? Code on the screen. We have a set of variables. We can have resources. We can have data sources.
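Pulling those pieces together, here is a sketch of what such a main.tf might look like, in the Terraform 0.11-era syntax the talk uses. The default AMI ID, the zone name, and the record name are placeholders, not the real values:

```hcl
variable "ami" {
  # Fallback: a stock Debian Stretch image (placeholder ID)
  default = "ami-REPLACE-WITH-STOCK-DEBIAN"
}

# Data source: look up the hosted zone by name to get its ID
data "aws_route53_zone" "web" {
  name = "example.com."
}

resource "aws_instance" "web" {
  ami           = "${var.ami}"
  instance_type = "t2.micro"

  tags = {
    Name = "web"
  }
}

resource "aws_route53_record" "web" {
  zone_id = "${data.aws_route53_zone.web.zone_id}"
  name    = "web.${data.aws_route53_zone.web.name}"
  type    = "A"
  ttl     = "60"
  records = ["${aws_instance.web.public_ip}"]
}

output "public_ip" {
  value = "${aws_instance.web.public_ip}"
}
```

The variable, data source, resources, and output here map one-to-one onto the concepts just listed.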
We can have outputs. There are differences between them, but you have to think of it all as a way to interact with these APIs; that's the premise, it interacts with APIs. Variables provide input that we can use at a later stage. We have a variable called ami here which, if you've followed along and haven't fallen asleep just yet, is that custom VM image we just baked. We're going to pass the ID of our image in there, and it'll get bootstrapped. It has a default value: if I don't provide a custom value myself, it will use ami-e1e8 and so on and so forth, the official Debian Stretch image. It will just launch a blank Debian server if you don't provide a custom AMI. What you can do in Terraform is either have a .tfvars file, just a variables file in which you define the variables you want to use, or use the -var command-line argument and pipe them directly into the application.

Once we have that, we bootstrap an instance. It's of the type aws_instance, so Terraform is clever enough to figure out that it should be an EC2 instance, and it's called web; we refer to it as web. These names don't really collide: you see I use web here, web there, and web there, but they're of different resource types. What we're doing here is using our ami variable, either the stock Debian Stretch or our custom image, and then assigning an instance type to it. I'm using t2.micro right now; don't use that in production, it's just the tiniest of the tiny. For the preparation of this talk I didn't want to spend a lot of money, so I used the tiny ones. I have a predefined security group, which is just the firewall settings for web: it opens up 443, it opens up 80, it gives us access over HTTP. Then we give it some tags. AWS has a tagging system, and if you use the Name tag, that will help you spot the instance in the name column of the graphical interface.

Once we've done that, we want to take the public IP address associated with the instance and link it to a DNS record. But we don't have that information; I don't know the ID of my zone right now. What I can use is a data source, and that data source will do a GET on the API based on an identifier, and that identifier is the name. So I have an aws.combell.com zone at Amazon, and I'll retrieve it, and I can use variable interpolation to get the ID from it. What I'm doing here is creating an A record, the DNS record that links an IP address to a hostname. I use variable interpolation: data.aws_route53_zone, right here it's called web, and I want its zone ID, so it fetches that state and injects it there. Then we apply a naming convention: my node should be called thijs_feryn-web plus the name of the zone, so that would be thijs_feryn-web.aws.combell.com. And we continue: it's an A record, it's valid for 60 seconds, and then the TTL gets refreshed. That's the theoretical minimum; if you want to be nice to the internet, increase it a bit, and if you don't want to be nice to the internet, leave it at 60. Then you'll eventually have an IP address linked to it, and that comes from our aws_instance, which is also called web: grab the public IP, and that will do its magic. It will create an instance, and create or change an A record, depending on whether it already exists, and in the end it will output those values: I want the public IP of my web server, and I want the hostname of my DNS record.

That's just our main.tf file. Once we have it, we run terraform init. Terraform init is clever enough to bootstrap those files and figure out which plugins it needs, and as you can see, it has figured out that it needs the AWS provider. It fetches it from its repository and says: hey, I have provider aws here, version 1.8, I'm ready to rock and roll. Then you run the planning part, and you can already imagine what is going to happen. It uses state for that, the remote state that is out there: it synchronizes with the platform, sees whether anything we've defined is already there, uses our data source to figure out what the zone is, and then shows you what it needs to do. In this case there's only a creation part, because we had nothing there: it will install a web instance and display the information it already has, the AMI, the fact that we want a single block device, a security group called web, a tag called web, and a DNS record that needs to be created. Everything else says computed, and will be filled in later, in the local state we have in our directory, based on what the API returns. Once you're comfortable with that, you run terraform apply, and it will create the record, create the server, and link the one to the other. What I would also advise you to do, if you want to be consistent in your state: when you do the plan, you can output the plan to a binary file, to blah, and it will contain your provider information, your authentication information, and the entire state it needs to execute. If you then do terraform apply blah, it uses just that. Again, I have a video for you showing how that works, heavily sped up. We apply it, it creates everything, most of it computed; the instance is created, we create a DNS record based on the IP address we got, and it has just created two resources; it didn't have to change anything, and it did not have to destroy anything. The outputs that are registered are our hostname and the IP associated with it.

All of this information is stored in the local state, even my hostname and my public IP, and if I run terraform output, it prints that stuff from state, so you can retrieve it. If you want to use it in an automation script, you can ask for JSON and it will emit it in JSON format. You can mark certain variables, outputs or input variables, as sensitive; if they're sensitive, they won't be displayed, they'll only be used internally. And if we do terraform show, we can actually see the state that was computed: you run terraform show, and all those parameters that were supposed to be computed are now readable, visible, ready to use. There are tons of them, too many to care about, but our outputs are registered as well.

We're 25 minutes into this presentation, and I've mentioned the word state quite a lot, so I need to explain how that works in Terraform, why state is so important, and why it is such a powerful feature. Terraform is the binary; it reads your files. There's a state backend, and there's a cloud API it synchronizes with. If there's nothing there, it fetches everything from the cloud API and stores it in state; if it sees changes to your .tf files, it calculates against what it already knows and synchronizes with what is out there. By default, state is just a set of files, a .tfstate file and a backup of that file, and in essence those are just JSON files. Nothing more to it: you can open them up and see all the information. Now, there's also remote state, and I would advise you to use that when you run this in a production environment, because state in local files is not synchronized. If you work in a team and two of you run the script, there's no way to update the state, and you'll probably have clashes and locking issues; things go wrong when two people from the same team, on different computers, run the same script. So you have ways to synchronize your state in the cloud: you can use Consul or etcd or GCS, a whole bunch of backends. Near the end of the presentation I'll use Consul to prove my point and show you how useful remote state is, but for now, let's stick with local state.

One of the consequences of state is that if you don't apply a high-availability plan, things will go wrong on you. This is a Terraform plan where I'm injecting another custom AMI; it's an example of how you pass variables to Terraform. I say -var to set the AMI, and that AMI is different. So when I do the planning stage, two things happen. Terraform figures out what it needs to do, and it knows which API calls are available. You see a tilde, update in place, which means there's probably a PATCH or a PUT call on the Amazon side to do it inline without recreating anything. But since we have changed our AMI, and since an AMI and the instance running it are one and the same, you can't just change the AMI to change the virtual image on a machine, because that image is basically what the machine is. Terraform is clever enough to figure out that it first needs to destroy the running instance and then recreate it in order to get to the desired state. You see the changes there: that is an entirely different machine now. Luckily, updating the DNS record doesn't require deleting it; it can be done in place, and you see the tilde there. But when you do the apply, you'll have a problem, because the instance will be gone.
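That variable injection looks roughly like this; the AMI ID is a made-up placeholder, and the plan symbols are a paraphrase of what Terraform prints, not a verbatim capture:

```shell
terraform plan -var 'ami=ami-0123456789abcdef0'

# -/+ aws_instance.web        (new resource required: changing the AMI forces replacement)
# ~   aws_route53_record.web  (update in place)
```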
It will say oh, yeah, you want me to change that Let me delete that instance for you will delete it and it will boot it back up But since this is just a stupid DNS record It'll probably be some negative caching and you'll have downtime of 10 minutes and it will be gone That's not something you want So let's make it a bit more intricate a bit more interesting and let's start adding a high availability plan And the tool will be using within Terraform is the concept of workspaces having different workspaces every workspace So you'll have a single Terraform project with multiple workspaces and every workspace has its own state So if you run something if you run a plan you change the workspace you run the plan again It won't try to delete or update anything will just fire up when you an entirely new stack of infrastructure Which is great for blue-green deployments Now in the blue-green green state of mind you'll have that entry point and the entry point could be a load balancer It could be a DNS record it could be an IP address an elastic IP address that is relinked Well, you're probably gonna start with a single workspace in which you're gonna fire up a set of infrastructure And then the next piece the next part you want to have that new deployment ready But you don't want to update this you want to stack it up next to it Do some validation of some sorts and then when you're ready You'll switch it in a sort of atomic way and you could do that in a secondary workspace and here are the commands to do so So you initialize your Terraform project you create a new workspace Let's say it's called workspace one you create a second one you select the first one you plan and apply it will boot up your Infrastructure and then you could do a secondary one change some files change some state plan and apply it again, and you're ready to go and In terms of the file system if you use local state It will create a tf state default or and have state per workspace And then you could switch 
You can then run each workspace separately and make changes independently. It would be interesting to name those workspaces according to your pipeline: test, staging, production, acceptance, and so on. The good thing is that there's also a workspace variable within the Terraform namespace that allows you to interpolate that value into, say, the name of your instance. Let's say you have this very same file and we update the Name tag and inject the workspace into it. If everything is in a single account and you have two workspaces, you otherwise won't see the difference between them: both staging and production would just be called "web", or "deployment one" and "deployment two", with no way to tell what is what. But if you use workspaces and inject the workspace name there, you'll clearly see which is which. What we've also done here is remove the DNS part, because we've pulled the DNS out. Let me go back: that's the atomic switch that needs to happen at the end. That's a secondary stage, a second Terraform project we'll execute, because first we have the existing infrastructure, then we bootstrap the second one, and that becomes a separate endpoint with a separate validation process. Once we know that works, then we do the switch, and that means we want the DNS project to be executed. What we do here is create an IP variable in a separate DNS project and inject it into the record. So once we're done, we run that secondary Terraform project, do terraform apply, pass along the right variable, like this, and get it done. Just for the fun of it, we're going to use DNS round robin.
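A minimal sketch of those two pieces, assuming AWS resources and illustrative names rather than the speaker's exact files:

```hcl
# Main project: inject the workspace name into the instance's Name tag
resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = "t2.micro"

  tags = {
    Name = "web-${terraform.workspace}"   # e.g. web-staging, web-production
  }
}

# Separate DNS project: the record's target comes in as a variable
variable "ip" {}

resource "aws_route53_record" "www" {
  zone_id = var.zone_id
  name    = "www.example.com"
  type    = "A"
  ttl     = 300
  records = [var.ip]
}
```

The atomic switch then becomes a `terraform apply -var "ip=..."` against the DNS project once the new stack has been validated.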
Don't use this as a definitive high-availability plan, but it could work. What I want to prove here is not that DNS round robin is a good idea, but that you can spin up multiple instances and link them together. So what we're doing here is implementing a count property that says: of that instance, create two, or three, or five, or ten. The default is two, and we can override that via a command-line argument or a variables file. Then we interpolate the name again, "web", and format it with count plus one, so that gives us web-1, web-2, web-3, web-4, and we also put the workspace in there. The public IP output is then no longer a single IP; it's instance web asterisk, so all the web servers, grabbing each public IP, and that becomes a list. The only thing to do in our next stage, to keep it a bit safe, is to enforce that the input is also of type list; that way you can't accidentally pass a single IP. The records attribute is also a list, so that doesn't need changing. So when you apply that, you apply a list of multiple IP addresses. This is just showboating; this is just me showing you that you can fork off and bootstrap multiple instances from a single resource definition. That was blue-green.
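The count construction just described could be sketched like this (again with illustrative names, not the talk's exact code):

```hcl
# Boot several identical instances from one resource definition
variable "instance_count" {
  default = 2          # override with -var or a .tfvars file
}

resource "aws_instance" "web" {
  count         = var.instance_count
  ami           = var.ami_id
  instance_type = "t2.micro"

  tags = {
    # web-1, web-2, ... plus the workspace name
    Name = format("web-%d-%s", count.index + 1, terraform.workspace)
  }
}

# A list of all public IPs instead of a single address
output "public_ip" {
  value = aws_instance.web[*].public_ip
}

# In the DNS project, force the input to be a list
variable "ip" {
  type = list(string)
}
```

With `records = var.ip` in the DNS record, applying the list gives you the round-robin set.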
Maybe we should focus on rolling deployments and make it a lot more complicated. Instead of switching that entry point atomically, why not, and this is very much a recap of Mike's presentation about AWS this morning, use an auto scaling group? Within that auto scaling group there's a launch configuration that defines what kind of EC2 instances, what kind of machines, we want, and it dynamically adds them. When we do a new deployment, we create a new launch configuration, and Terraform will be clever enough to notify the auto scaling group and say, "hey, relink this new launch configuration." Then gradually the new machines boot up, and the auto scaling group is clever enough to drain connections from our previous deployment and gradually use the new ones. That's a combination of a load balancer and an auto scaling group; the auto scaling group is just one of the tools I used. And I have a poor man's Visio right here; this is just me drawing really poorly designed schematics, but the goal is to show you that there's a lot going on. So we start off, let me walk this way, with a VPC, a virtual private cloud. Is that the right abbreviation, VPC? I'm not an Amazon-certified engineer, so lots of acronyms here. A VPC is just our own little private environment that has its own CIDR, its own IP space. We assign a set of networks to it: two public subnets, hosted in two different data centers, and two private subnets, also hosted in those two data centers. We attach an application load balancer to it; the load balancer has targets, and those targets are VMs it distributes traffic to. Those are thrown out by the auto scaling group, which uses a launch configuration. We have a custom AMI, and we have some CloudWatch metrics to figure out whether we need more capacity or less capacity. Just a lot of work.
I won't show all the details. Maybe I'll publish this on GitHub later and share it on Twitter, so you'll have a reason to follow me there. But I'll show you bits and pieces of the configuration, just to show you how much you can do with Terraform. So this is our VPC, and it has the CIDR 10.0.0.0/16, and we give it a name. Then we have a subnet that depends on that VPC and defines another piece of subnet information, hosted in eu-west-2a, so it's in data center 2a. This one is also in 2a, and that's a private IP space, and then we'll have a second public subnet in 2b and a second private subnet in 2b. We have some firewalling information that opens up 80 and 443 and 22, and so on and so forth. And we have our custom AMI. What's a novelty here is that we're using a data source to retrieve the AMI. In my previous example this was just injected via a variable, something I pushed in. What I'm doing now is looking for AMIs that are owned by one specific owner ID, my own account, and we apply a set of filters: it needs HVM as the virtualization type, and its name needs to start with taste-fitting followed by an underscore and an asterisk, and we take the most recent one. So every time we push a new image via Packer and run terraform plan, it will have figured out that there's a new version of the AMI and work out what needs to happen: it will create a new launch configuration, and it will update our auto scaling group. This is our launch configuration; it references the AMI data source's ID attribute, so it takes the ID of my latest image and pushes it in there. It uses these security groups; we have a load balancer that's also in that subnet, a listener that listens on port 80, and a target group that also listens on port 80 and communicates with the load balancer. Lots of information, right? Lots of stuff passing by. The goal is not to teach you how to do this.
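The AMI data source and launch configuration pieces could be sketched like this (the owner ID and image name prefix are placeholders, not the speaker's real ones):

```hcl
# Find the most recent AMI we built with Packer
data "aws_ami" "app" {
  owners      = ["123456789012"]   # placeholder account ID
  most_recent = true

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  filter {
    name   = "name"
    values = ["myapp_*"]           # placeholder name prefix
  }
}

# The launch configuration always points at the latest image,
# so a fresh Packer build changes the next terraform plan automatically
resource "aws_launch_configuration" "app" {
  image_id        = data.aws_ami.app.id
  instance_type   = "t2.micro"
  security_groups = [aws_security_group.web.id]

  lifecycle {
    create_before_destroy = true   # build the new one before dropping the old
  }
}
```

The `most_recent = true` flag is what makes the rolling update kick in: a newer matching AMI means a new launch configuration on the next plan.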
There are plenty of resources online about this; I just want to show you that there are lots of moving parts, lots of components you can orchestrate, and eventually I'll get to my point. This is the build-up: we have an auto scaling group with a minimum size of three servers, and if it needs the capacity, a maximum of ten. It uses the launch configuration I previously defined, which depended on that data source with my custom AMI in it. Once it forks off these instances and boots them up, it puts them in the target group, and there's some more information here. We're using policies to increase or decrease the capacity of our auto scaling group, and we have a CloudWatch metric, an alarm, a high watermark so to speak. It uses namespacing, the AWS/ApplicationELB namespace, so what we're doing is using metrics that Amazon collects from the different components. In this case we look at the load balancer, and at a specific metric of it, the request count per target. So picture the load balancer with all your servers behind it: if the number of requests per target, per server, is 20 or more within an interval of 60 seconds, then for my test case this is considered high load and we need to trigger more servers. So we use that, and it will add more servers. We do terraform plan, terraform apply, all that stuff happens, and it dynamically deploys our new instances. If you run it again and a new AMI was registered via our Packer build, it will figure it out: it will create a new launch configuration and update the auto scaling group, the auto scaling group will automatically deploy the new images, and the old ones will be drained and removed. And I can show this in graphical format: you can just run terraform graph, send it to your dot binary, save it as a PNG, and that's it. Simple, right?
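The scaling pieces just described might look like this as a sketch (thresholds match the talk's test case; resource names are illustrative):

```hcl
# Auto scaling group: 3 to 10 instances behind the target group
resource "aws_autoscaling_group" "app" {
  min_size             = 3
  max_size             = 10
  launch_configuration = aws_launch_configuration.app.name
  vpc_zone_identifier  = [aws_subnet.private_a.id, aws_subnet.private_b.id]
  target_group_arns    = [aws_lb_target_group.web.arn]
}

# Policy that adds one instance when triggered
resource "aws_autoscaling_policy" "scale_up" {
  name                   = "scale-up"
  autoscaling_group_name = aws_autoscaling_group.app.name
  scaling_adjustment     = 1
  adjustment_type        = "ChangeInCapacity"
  cooldown               = 60
}

# The "high watermark": 20+ requests per target within 60 seconds
resource "aws_cloudwatch_metric_alarm" "high_load" {
  alarm_name          = "high-request-count"
  namespace           = "AWS/ApplicationELB"
  metric_name         = "RequestCountPerTarget"
  statistic           = "Sum"
  period              = 60
  evaluation_periods  = 1
  threshold           = 20
  comparison_operator = "GreaterThanOrEqualToThreshold"
  alarm_actions       = [aws_autoscaling_policy.scale_up.arn]

  dimensions = {
    TargetGroup = aws_lb_target_group.web.arn_suffix
  }
}
```

The alarm fires the scaling policy, the policy grows the group, and the group pulls fresh instances from whatever launch configuration is current.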
Very simple. The point I'm trying to make is not just that Terraform is enormously powerful, but that it abstracts away a lot of the complexity: I don't have that much AWS experience, but by using Terraform it was very easy to learn how it all worked. What I would advise you, though, is to organize it all nicely, split into separate files according to role, to keep it simple and readable, because although it looks like JSON, it's not really JSON, and all that formatting will get tedious and confusing. So you store your provider information in one place. I deliberately didn't show you provider information, because I didn't put my keys, all that private information for OpenStack or AWS, in the files; I used environment variables that were already set. But you can also have a provider block in which you say what the endpoint is, where you want to connect to, and what your access key and secret key are. You can store your network information, compute information, maybe SSH key pairs and security groups, your outputs, and your variables all in separate files, and that's a good way of doing it. But there's an even better way, and that's by using modules, and this is where it gets really interesting. Instead of throwing it all into separate files like networking.tf and keypair.tf, you create folders per role. You'll have an autoscaling folder, a DNS folder, a launch folder, and within each of them a sort of naming convention: a README file, a main.tf that contains your main logic, an outputs.tf that generates outputs, and a variables.tf. Now, the clever part about these is: variables contain the inputs, and outputs are normally expected to print things on screen, but if you look at it from a modular point of view, outputs are just like function returns. They're a way to interact with your main logic, because in the end all of these modules will be bootstrapped by your main Terraform
file. So outputs essentially do what "return true" or "return blah" does, and variables are our function inputs. Let's take it apart; let's take our DNS module as an example. This is just boilerplate, right? Don't use any of this; I'm doing it to show you how it works. You know our previous construction for that DNS record. What you can do is define a variable called zone_name and preset it, so it now works for one default zone out of the box. Do you want to use it for your own zone? Then set it in your main file later on; it's an input. You have your record_name, which defaults to empty, and we have a variable records, which is a list and has no default value, so it's required to be set when we invoke the module. We have our Route 53 zone, which is our data source, then some information about the record, some interpolation, and in the end we return the DNS endpoint, the fully qualified domain name, the FQDN, of the record we created. And how do we invoke it? Pretty simple: we have a module block called dns, we mention the folder where it's located, and we set the variables we want to override. The record name's default is empty, which would mean the main zone; what we're doing here is creating a www record. It's a CNAME record, not an A record. And what's the record you want to point it at? blah.domain.com.
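A boilerplate sketch of such a DNS module and its invocation, assuming Route 53 and placeholder zone/record names:

```hcl
# modules/dns/variables.tf — the module's "function inputs"
variable "zone_name" {
  default = "example.com"   # preset; override from the caller
}

variable "record_name" {
  default = ""              # empty means the zone apex
}

variable "records" {
  type = list(string)       # no default, so the caller must supply it
}

# modules/dns/main.tf — the main logic
data "aws_route53_zone" "main" {
  name = var.zone_name
}

resource "aws_route53_record" "this" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = var.record_name == "" ? var.zone_name : "${var.record_name}.${var.zone_name}"
  type    = "CNAME"
  ttl     = 300
  records = var.records
}

# modules/dns/outputs.tf — outputs are the module's "return values"
output "dns_endpoint" {
  value = aws_route53_record.this.fqdn
}

# Main project: invoke the module like a function call
module "dns" {
  source      = "./modules/dns"
  record_name = "www"
  records     = ["blah.domain.com"]
}
```

All the record logic is hidden inside the folder; the caller only sees inputs and outputs.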
You can output it to screen, and the cool thing about the interpolation is that you don't go to the native DNS record; you say module.dns, which is that one, take its dns_endpoint, and dns_endpoint was an output I registered. So it's just like a function or a method: you put input in, you get output back, and all the complexity of what happens behind the curtains is abstracted away for you. And you can chain things that way: you'll have a networking module that exposes a VPC, and you can use that VPC information in your load balancing or security groups module, networking's security group ID, and tie them all together. All those pages of logic are now compacted into something reasonably easy to work with. But this is just one way of doing it. There's also a registry out there in the open, registry.terraform.io. This is a screenshot of it, and they have all kinds of use cases organized by provider. You'll find lots of AWS stuff, so for that entire VPC thing, maybe they have a better implementation; I'm pretty sure they have a better implementation, and you can use it, in several ways. If you use their registry, you can just use the Consul module, and you don't need those files on your disk; you can just refer to the namespace, hashicorp, and then consul/aws. You run terraform init and it's clever enough to figure it out; it will say, "I'm checking what we need." So we have an empty directory with just that file.
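Pulling the HashiCorp Consul module from the registry can look like this; the alternative source lines are illustrative examples, not real repositories:

```hcl
# main.tf — the whole file; `terraform init` downloads everything else
module "consul" {
  source = "hashicorp/consul/aws"   # namespace/name/provider on the public registry
}

# Other ways a module can be sourced:
# module "net" { source = "./modules/networking" }                 # local path
# module "dns" { source = "github.com/someuser/terraform-dns" }    # GitHub (placeholder)
# module "app" { source = "git::https://example.com/app.git" }     # generic git (placeholder)
```

`terraform init` resolves the source, fetches the module, and installs the providers it needs.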
So again, that's the only thing in your main.tf, and Terraform will be clever enough to figure out that it needs the AWS provider and that plugin; you just run it and you're done. Fairly simple, and all the complexity is abstracted away. And it's not restricted to the Terraform registry: you can have GitHub repositories, or just standard Git repos, and more. You can have local files, the Terraform registry, GitHub, Bitbucket, generic Git repos, Mercurial, plain HTTPS. So there are plenty of ways to distribute and share these modules that you create, maintain, and reuse. Right, still on board? Let's go to the final part, or one of the final parts: remote state. It's a promise I made to you that in the end we would talk about remote state. Let me get some of that water first before we continue; maybe it's a good idea to pour some water for yourself too. All right, apologies. Remote state is particularly interesting if you work in teams. I explained the difficulties already: if you have multiple people working on the same project, you're never 100% sure what will be executed when. If you use local state, on a local disk, and you execute a plan and someone else executes a plan, there's no standard way to synchronize those state files, so you'll end up with a big mess and you risk downtime. But if you have a centralized location to store your state remotely, that becomes a lot easier, especially when working in teams. There are plenty of ways to do it: S3, HTTP, and so on; I like Consul, or you can use etcd. Consul is a key-value store, an advanced key-value store in a sort of cluster format. It's also by HashiCorp, and it's a way to keep track of state and to do locking. Setting it up properly takes some work, but when you want to try it on your laptop for testing purposes, it's just one command; it's also a single Go binary.
You run it in agent mode, you indicate a location where it will store all the data files, you say you want a server, you want a graphical interface to do some browsing, and you want to run it in dev mode. In dev mode it won't require a cluster with multiple servers. The only thing you need to do then is provide a .tf file where you put a construct that says: my backend is of the type consul, this is the address, localhost port 8500 for example, and this is the path in the key-value store where you want to store the state. If multiple people try the same operation at once, you'll get a locking error, and that will save you an awful lot of trouble; it's a good way of making sure things don't go down. But there's another thing you can do: you can leverage that and create a data source that reads that state. Throughout the examples I've shown you, communication between different Terraform projects happened through variables, variable files, or command-line arguments where those variables were set. That's not always the easiest way, especially if different stages can run on different machines; there's no real way to send those IPs or those commands over the wire. So what you do instead is keep track of the state there, in Consul: you'll have a data source, and that DNS record, where the public IP used to be passed over the command line, can just read it from that data source. So you have one place where you set the state.
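Sketched out, the dev-mode Consul agent, the backend block, and the remote-state data source could look like this (addresses, paths, and names are illustrative):

```hcl
# Start a throwaway Consul on your laptop first, for example:
#   consul agent -dev -ui
# (dev mode runs a single server, no cluster required)

# backend.tf — store this project's state in Consul, with locking
terraform {
  backend "consul" {
    address = "localhost:8500"
    path    = "terraform/myproject"   # key in the KV store
  }
}

# In the other project: read that state instead of passing -var flags
data "terraform_remote_state" "web" {
  backend = "consul"

  config = {
    address = "localhost:8500"
    path    = "terraform/myproject"
  }
}

# Consume an output the first project registered
resource "aws_route53_record" "www" {
  zone_id = var.zone_id
  name    = "www.example.com"
  type    = "A"
  ttl     = 300
  records = data.terraform_remote_state.web.outputs.public_ip
}
```

The DNS project now pulls the public IPs straight out of the shared state, so no one has to carry them across machines on the command line.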
You'll have that somewhere in your code, and then in the other project, where you can even rely on local state if you want, you read that remote state, and that fetches all the information you need; in this case an output that was registered, called public_ip, and it just takes that and uses it. And you'll have communication over the wire through a sort of consensus system, basically, with Consul. So instead of using CLI variables, use remote state to communicate between the different aspects, maybe the different projects. And as a final note: does this mean you can do hybrid cloud? That's one of the questions I often get asked, because you could combine Amazon and Azure and put Google into it. In theory, yes, but we all know that these vendors are optimized only for their own toolkits. If you have to go out of their data center, over the public internet, to another data center, there will be a cost involved, and the connection speed might not be what you expect it to be. So yes, it is entirely possible. Maybe you can use an external DNS system as part of the stack and combine it with some VMware stuff that is hosted at your own company; that could work fine. But I'm not a hundred percent sure that combining Azure with Amazon and Google all together would be faster or cheaper. If there are features that are not in your cloud environment of choice, yes, that could work, but if you're doing it just for the sake of redundancy, I would go for multiple availability zones and multiple regions instead. And that was that. I would like to thank you for being here. If you want more information about what I do, about the books I write, the videos I make, the presentations I give, those are the places to look, and if you just want to follow me on social media, those are the spots. Again, join in, give me feedback, help me improve this presentation; I want to travel the world for free.
I want to do more presentations, so help me out. If you liked it, please let me know there as well. And with that being said, thanks for hanging out with me. This is the end.