Hi, everyone. I'm really excited to talk about Terraform today. So how many people had heard of Terraform before three minutes ago? How many people had heard of Terraform before they read it on the meetup group? OK, roughly the same people. Good. So my name's Seth Vargo. I'm @sethvargo on Twitter. I work at HashiCorp. I wear a couple of different hats, so you might see me responding to support tickets, you might see me working on open source projects like Terraform, for example, and I also work on some internal tools and some of our commercial products. So depending on what day of the week it is, I'm usually touching a different part of the ecosystem. If you haven't heard of Terraform, perhaps you've heard of some of the other tools that we work on. Probably the most notable and oldest tool is Vagrant. How many people have used or heard of Vagrant before? Cool. So the last talk mentioned VirtualBox for a couple of sentences. Vagrant is a command-line driver for virtual machines — VirtualBox, VMware, there's support for AWS. With pretty much any virtualized instance, you can control and create an isolated development environment, and quickly create and destroy them. Packer — how many people know about Packer? It's about the same people, this half of the room. Packer was our first Go project. So Vagrant was in Ruby; Packer's in Go. It's a way to build identical machine images. You can build VMware boxes from ISOs, you can build cloud-based images — kind of the whole stack. So the idea was you could build a Packer image, throw that into Vagrant, and then use that same Packer image to run your production infrastructure. Serf is a gossip-based protocol. It's what Consul is built on top of, which does service discovery, with a key-value store as a first-class citizen. And then our most recent product to hit the market is Terraform.
And I say product, which is a very heavy word, because all five of these tools are open source and they have many different contributors — some of whom work at HashiCorp, some of whom work at a variety of different companies, from a bunch of different backgrounds. They're all in Go except Vagrant, which is in Ruby. Obviously today we're talking about Terraform, but if you have questions about any of the other tools, feel free to hit me up afterwards — we can talk offline or whatever. So the biggest question is: why write Terraform? Why does Terraform exist? What problem is it solving? Anytime you set out to solve some software engineering problem, you have to have a problem to solve. You can't just sit down at a computer and decide that you're gonna write a new thing. What problem space are we filling? What gap are we trying to make easier for our consumers? So one of the questions that we got a lot was: how do I provision resources? We have tools like Puppet, Chef, Ansible, Salt, and they can configure, they can manage, but it's kind of a one-to-one relationship. I have this one server — maybe physical, maybe virtual, maybe a container — and I'm gonna apply Chef to it, I'm gonna apply Puppet to it. But I want something higher than that. I wanna be able to abstract that. I wanna be able to describe compute resources the same way that I describe my DNS resources, the same way that I describe my storage resources, my network resources. So it's not just what this one isolated system looks like — I wanna be able to describe my entire infrastructure topology in a shareable way. So furthermore, once I've described it, I can spin it up — let's assume that works. I have this hypothetical file that describes my entire infrastructure. How do I manage the lifecycle? What does it mean whenever an instance is down? How do I add more instances, remove instances?
How can I associate the relationships between the database and the web application, for example, or between the database and the backend RESTful API? Furthermore, how do I balance all the existing technologies I'm using? So how many people work in some type of hybrid environment? Where you have some cloud-based things, maybe you kind of work in a data center — maybe some of your stuff's on AWS, some of it's in DigitalOcean, some of it runs in what I call Closet Cloud, which is that one server in the corner of the room that you don't touch, because your boss told you not to. We work in very, very hybrid environments. How can I manage it where half my infrastructure is in DigitalOcean, half of it's in AWS, and the other half is in Closet Cloud — and that adds up to three halves; this is how computers work. Furthermore, let's talk about security. How can I enforce policy across all of these machines? So not only can I make sure that only required users, required ports, or required permissions are there, but how can I make sure that this particular machine has this set of services, this set of packages, this set of responsibilities, and how can I separate those concerns? And finally, very rarely are we in isolated, tiny little boxes. We work in teams. Software engineering is a very collaborative effort. So how can I share this, whatever this mythical file is? How can I version it? How can I store it in something like Git or SVN? How can I collaborate on it, get feedback from my peers — all while realizing that this file is really the description of our production infrastructure, which is kind of a big deal if you think about it. I'm encapsulating the thing that makes me money in one file, and a Git diff is gonna tell me whether or not it's okay. So we need something that's incredibly robust, incredibly descriptive and powerful. But how did we get here in the first place? So let's talk about the rising data center complexity.
15 to 20 years ago, we had a data center, and the data center was kind of this one server. Might have been a mainframe, might have been just this really powerful server, and it was kind of run by this one dude — and I don't say that to be sexist, I say that because they were all dudes at the time. And it was kind of a, you know, you pick up the phone and you're like, hey, we need this package installed, and then six to eight months later, you got your package installed. And I'm not gonna blame IBM, since I'm standing in their office right now, but they may or may not have been one of the companies that provided such a service. But very quickly we realized that we needed more compute resources. One server was no longer cutting it. We needed four servers. We needed our primary and secondary database servers. We needed load balancers. We needed the ability to scale and handle more and more traffic. And then we entered the last five years or so, where technology has rapidly advanced. The era of virtualization, where no longer does one application run on a single physical server — we have a physical server that may run anywhere from two to 200 virtual machines, depending on the physical capacity of the machine and the requirements that it has. And then we extrapolated that even further, and now we run containers in our VMs that run on our physical machines that run in our data center. So as you can see, the complexity increases drastically. No longer are we concerned about one physical server that has a very cute name that we can refer to when talking to our colleagues — we're talking about highly ephemeral infrastructure, service-oriented infrastructure, microservices, and high fault tolerance, where no longer do we care about whether one server is down. We care about whether 90% of the nodes are functioning. And if 10% is an acceptable loss, then our ELBs can handle that.
They can automatically route traffic. So we're no longer in an era where we can think about infrastructure in a way that is binary. And it gets even more complex when we add third-party service providers. We're adding our DNS provider, whether that's something like DNSimple or something like Dyn. Maybe you're shoving a CDN in front of it. Maybe you have a database as a service or a caching layer as a service. You're not running your own Redis; you're using Redis Cloud. All of these things add extra complexity. And to communicate with, let's say, Dyn, for example, you can't really use Puppet or Chef to do that. You're making API calls to their service with some authentication tokens. It's not really the place of a configuration management tool to be doing that. And since it's infrastructure-wide, not on a per-system basis, it doesn't make sense for 10 nodes to make API calls to Dyn just to update a DNS entry. Furthermore, as I kind of hinted at before, all of my slides so far have assumed that we have a single data center, but in reality, we have hybrid data centers. So not only do you have more than one data center — maybe you're balancing availability across the East Coast of the United States, the West Coast, and Asia — but you may have two physical data centers and a cloud-based virtual data center. So maybe you have one data center sitting in AWS, behind a VPC. You have one sitting in a proprietary data center where you rent space in a rack and bandwidth, and maybe you have another closet cloud somewhere. And it gets even more complex when we realize that these environments are not homogeneous. They're actually a mixture of everything we've talked about so far. So maybe you're an infrastructure provider and you need physical machines, because you're providing infrastructure; or maybe you're a cloud provider and you need virtual machines, because you don't really care about providing physical machines.
Nobody's running QEMU on your stuff, so you can provide virtual machines, you can allocate resources. Or maybe you're someone that's providing containers as a service, or maybe you're doing all three of them: your apps run in containers, but you still need a physical machine to perform some really big data number crunching or something that's very IO-intensive. And then maybe you have a whole other data center that's just cloud-based storage. How do you aggregate all that information, and how do you describe that this container here has access to a particular networking resource that this machine should not, and that that cloud should have those ports open with that VPC rule, and only this container here can talk to this cloud? It's very complicated. It's very complicated for me to even explain to you, let alone try to describe in some type of state file. So this was the problem. Given all of this information, how can you possibly solve the problem of "I wanna describe my entire infrastructure in a single file"? And we have all the -aaSes — the IaaS, the PaaS, the CaaS. And one thing that I didn't talk about was: holy crap, there are these things called Windows and Linux and Mac, and we have different operating systems, and this thing doesn't operate at all like this thing. And this thing here kind of operates like this thing, except this one is based on Darwin, which is actually based on BSD, which kind of works like that thing, but doesn't work like Ubuntu and other things. And it's just a total mess working in an environment where not only may you be working between containers, virtual machines, and physical machines, but you're working with different operating systems, and different versions of those operating systems, which have different packages available — all of which makes it exponentially complicated for the user to configure these things. Ultimately, it's a nightmare.
I don't think anyone in this room, or anybody who has ever worked in a large infrastructure capacity, would say they enjoyed researching different package names across different releases of different operating systems just so they could get something to deploy. Taking all of this, we took a step back and we said: what problem are we trying to solve? What is the goal here? We came up with Terraform's mission statement, which is to effectively deliver and maintain applications. So at the end of the day, your business unit as a technology company is some type of application. That may be a web app, that may be an iPhone or an Android application, that may be a service that supports some other monetary device. But at the end of the day, you're deploying some type of application, or series of applications, that are your monetary stream. So we need a way to effectively deliver and maintain the lifecycle of those applications. Whether it's the first deployment or the 10,000th deployment, it should be just as easy, and the same process. If you break down the application lifecycle, you have develop, deploy, and maintain, right? So the initial development usually happens on your personal laptop. Maybe it's a company laptop, but this is kind of your personal workstation. Maybe you're working in isolation, maybe you're pair programming. But what's important about that environment is that it's consistent — not only among your developers, but shareable among your developers, readily available. I don't wanna wait three days for my development instance to provision just so I can write one line of code. And it should be as close a parity to production as possible. If we have a production infrastructure that has 50 microservices, obviously we're not gonna run all 50 microservices on our local development machine.
But getting as close to parity with production as possible — making sure we're running the same version of Redis, the same version of MySQL, the same version of Postgres that we are in production — so that the cost of fixing a bug is drastically reduced in development, compared to once the bug hits production. The next stage is obviously deploying that, and that's more the configuration management side of things. So this is where Puppet, Chef, Ansible, Salt, or just homegrown bash scripts really shine: I need to start this thing and install this thing and then run this thing. So apt-get install, sudo service start, and service — I don't know — status to make sure it's running; fail if it doesn't return a zero. And then the maintenance is where things get a little more difficult: what does the nth deploy look like? How do data migrations look? What if I need to reconfigure something, turn on feature flags? How do I monitor the service, or how do I react to particular monitors? If disk space is high, what do I do? And finally, orchestrating really complex changes. An example of a non-complex change is adding another database server. An example of a complex change is migrating from MySQL to Postgres and changing the number of database nodes and the way you do replication — perhaps your whole sharding mechanism. So how do you orchestrate all of that? And this is kind of the big picture. The idea behind all of these open source tools is that with Vagrant, you have this awesome development environment. It works as close to production as possible. And then Packer and Terraform work together to build your production infrastructure. And Packer can output Vagrant boxes. So effectively you describe your infrastructure using Packer and Terraform to produce Vagrant boxes.
So you produce the Rails front-end Vagrant box that you can then distribute to your developers, whether it's on your internal wiki or they just download it from Dropbox, and say, hey, this is the Rails front end. And you can ship nightlies of that in your CI by simply running Packer and Terraform, generating a dev box. And every morning your dev comes in, downloads the newest box, and they're good to go. It's as simple as running vagrant up, and they're able to hack on your code. As opposed to the typical developer bootstrap process, which usually takes a couple of weeks: you have to install your Rubys and your Gos, and then — no, we don't use RVM, we use rbenv — and then you have to implode all of that, and then somehow you got stuff in your system Ruby, and it's just always a nightmare. With Vagrant, you have highly ephemeral environments. If you screwed it up, you just destroy it and start over. And it's pretty awesome. Then on the maintenance side, which is something I'm not gonna talk about too much today, we have Consul and Serf. So Consul's built on top of Serf, and it does service discovery. And we have a lot of tools on top of Consul, like consul-template, which can automatically write out load balancer configuration files based off of service health. So in the event that you deploy something with Terraform and it starts to fail, you can automatically adjust your load balancer — Nginx, for example — to stop sending traffic to that node. When it reports itself as healthy again, it can start receiving traffic. So it's very much leaning towards Skynet, and it's pretty cool. So Terraform's biggest goal was to provide a single workflow. Whether you're a developer or an operations person: a single workflow with a unified view, using infrastructure as code — something that's highly versionable and shareable, that can be iterated and changed safely, capable of complex, multi-tier applications.
And then the thing that I always add on here is: but simple enough for a single-tier application. So something that a huge organization — Netflix, for example — could use to scale out their entire production infrastructure; something so verbose, so flexible, that a company as large, with the needs of something like Netflix, could use it; but also something with an API so simple that a tiny startup with one to two web nodes and a database server could easily use it, and gain just as much productivity, just as much use, as a huge enterprise organization. So this is Terraform. I guess the short answer is: we did it. And I'm not gonna go super in-depth into the internals of Terraform today, because it's a lot of code, and there's also a lot of computer science and some graph theory that goes into it. And I wanted to showcase Terraform, not showcase graph theory. So we're just gonna look at some Terraform examples now. This is one of the few slides that actually has code. This is a DigitalOcean droplet with its DNS configured using DNSimple. This is using a configuration language that we created called HCL — the HashiCorp Configuration Language. It is JSON-compatible but human-readable. So we looked at existing things like YAML and JSON and said, you know, JSON is not really human-friendly. It's very machine-friendly, not human-friendly. YAML is really painful to write. We wanted something kind of in between. If you see this, it looks a lot like an Nginx configuration — we were inspired a lot by the Nginx configuration. It's very readable, but it also looks like computer code — not like Nagios checks or Sensu checks, which are kind of written in English and just didn't feel right. So the idea here is that the Terraform configuration is expressed in HCL. It can also be expressed in JSON. So if you have a machine that generates your Terraform files, it can just export them as JSON.
And HCL can embed JSON and output JSON as well as HCL. So the first block here is defining a resource. resource is a keyword, just like if or for or do in other languages. And it's taking two arguments here. The first is the type of resource: digitalocean_droplet is the type of resource. This could be aws_instance, for example. And the second thing is the tag, or the name, of it. So this is going to be tagged web. We're naming this particular instance terraform-web, or tf-web. And then all of these attributes are provider-specific. So digitalocean_droplet is part of the DigitalOcean provider, which exposes the droplet resource. Things like size, image, and region are all dependent on DigitalOcean — obviously there is no sfo1 in AWS; it's us-west-2. So these are dependent on whatever the provider's implementation is. The next section is the DNS entry. Basically what we're doing is saying: spin up this DigitalOcean droplet, and then once it's done — this runs in order, serially — create a DNSimple record. And we're just going to call the record hello, for example.com. And then the value is this crazy thing here: digitalocean_droplet.web.ipv4_address. So what is that actually doing? This is Terraform interpolation syntax, and this is where the true power of Terraform shines. This is essentially saying the DNSimple record depends — this is an implicit dependency — on the DigitalOcean droplet. Basically we're telling Terraform: I cannot execute this resource until the DigitalOcean resource successfully returns its data. So when Terraform creates the DigitalOcean resource in memory, it's going to get back more than name, size, image, and region. It's going to get the IPv4 address, any SSH keys — all of the data that DigitalOcean sends back from the API.
And that then becomes available on the web instance. So we named this instance web — this is the internal reference — and then we access it by saying resource type, dot, name. And from there, we can get whatever property is exposed by that particular provider. It's now safe to execute this DNSimple record. This is going to be interpolated to something like 1.2.3.4, and then this resource is going to execute, and it's going to create a record — using your DNSimple credentials and their API — pointing hello.example.com at 1.2.3.4. So all of the dependencies and the internal dependency mapping are handled by Terraform core. Whereas if you were an operator doing this manually, you would log into the DigitalOcean GUI, you would enter these values, you would click launch, you would wait for the progress bar to go across, you would copy and paste the IP address, you would log on to dnsimple.com, you would create the new record in the dropdown — all of this. That's all automated, and the dependencies are automatically set up for you. We'll look at a more complex example in a second of why that's important. So you can imagine this is a very simple application, but what if we had 50 web nodes and we needed to create 50 DNS records? Well, we don't really want to create those in serial — as these return, we want to create the records. Terraform is capable of that. It's highly parallelized, and it builds a dependency graph for you, so you don't have to worry about any of that. Just state what you need. As I said before, it's a human-friendly config, but it's JSON-compatible for non-humans, like dogs and computers. And it's in a VCS-friendly format. So I'll say this and then I'll go back to the previous slide: support for trailing commas — something JSON doesn't have. This is a very nice, diffable thing.
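The config being narrated here isn't in the transcript, but it looks roughly like this — a hypothetical reconstruction in classic Terraform HCL, where the specific image, size, and TTL values are illustrative, not from the actual slide:

```hcl
# A sketch of the slide's example: a DigitalOcean droplet plus a
# DNSimple record pointing at it. Image/size/TTL values are assumed.
resource "digitalocean_droplet" "web" {
  name   = "tf-web"
  size   = "512mb"
  image  = "ubuntu-14-04-x64"
  region = "sfo1"
}

resource "dnsimple_record" "hello" {
  domain = "example.com"
  name   = "hello"
  type   = "A"
  ttl    = 3600

  # Interpolation: this creates the implicit dependency described
  # above -- Terraform waits for the droplet, then uses its IP.
  value = "${digitalocean_droplet.web.ipv4_address}"
}
```

The interpolation in `value` is the whole trick: Terraform sees the reference and orders the two resources for you.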
If someone comes in here and changes image and then submits a patch on GitHub, for example, you're gonna very clearly see this as a red line followed by a green line. We created the language with the notion that people — humans — are gonna have to look at this and understand what's changing. But more importantly, it's your entire infrastructure specified in a single text file, which was the biggest goal: I want a reproducible way to configure my entire infrastructure using a single text file, I want to be able to share that with my team, and I want it diffable. So let's talk about the Terraform graph. The Terraform graph is an internal construct and also a command. If you run terraform graph on the example we just had, you'll get this nice output, which is basically showing you — it's like the inverse of the dependencies. So the DNSimple record depends on the DNSimple provider. It also depends on the DigitalOcean web droplet, which depends on the DigitalOcean provider. So if we take out the providers, it's basically saying the DNSimple record depends on the DigitalOcean droplet. If you have a more complex infrastructure, this graph can be huge. And you can also see which pieces are gonna be able to run in parallel versus which pieces need to happen one after the other. So let's talk about providers for a second. I talked about the DNSimple provider and the DigitalOcean provider, but providers are kind of this abstraction from core. You can think of Terraform providers as basically plugins. The same way Vagrant has plugins and Packer has plugins, Terraform has plugins — we call them providers. They provide a single integration point. They expose — quote unquote, "provide" — resources. Some of those are the AWS instance, the DNSimple record, et cetera. They follow a CRUD-based API: you can create a resource, read a resource, update, delete, et cetera. Some of the resources expose CRUD and another thing.
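For the droplet-plus-record example above, the graph command emits DOT output you can render with Graphviz. A rough sketch of what that looks like (the exact node labels vary by Terraform version):

```text
$ terraform graph | dot -Tpng > graph.png

digraph {
    "dnsimple_record.hello" -> "digitalocean_droplet.web"
    "dnsimple_record.hello" -> "provider.dnsimple"
    "digitalocean_droplet.web" -> "provider.digitalocean"
}
```

Reading the arrows: the record can't be created until the droplet and the DNSimple provider are ready, while anything without a shared edge can run in parallel.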
So if there's a restart on an AWS instance, for example — that's not create, read, update, or delete, but sometimes you wanna reboot your AWS instance. They're super pluggable for integration, so you can write your own. If you're using some type of internal cloud that isn't part of Terraform core, you can easily write your own provider. You just have to expose all the APIs and it'll just work. So it's a super-high-level abstraction. What's nice is, if you write your own internal little provider thing and then you decide you wanna move to AWS, all you do is change the resource types and you can deploy to AWS. It's kind of like a layer cake, where you have a different provider per layer, but it's all a unified configuration. So whether you're at the physical-server layer, infrastructure as a service — we talked about OpenStack already; something like Cloud Foundry could perhaps be thrown in there — the VM layer, or the container layer, Terraform has it all. So it's kind of like Juju, which we were talking about before: it can integrate with any layer of the stack. And this all happens with terraform apply. If you remember, one of the very early goals was a single command, a single workflow. So regardless of whether you're an operations person or a developer or an engineer or a manager, we wanted to provide a single way for you to deploy infrastructure, and that's terraform apply. When you run terraform apply, it's going to essentially go out, reach out to the internet, and report back any errors that happened. We'll talk about terraform plan, which is essentially a dry run: Terraform says, this is what I'm going to do, and then you can approve or reject the changes. As far as built-in providers in core, we have Consul, DigitalOcean, DNSimple, Google, Heroku, Mailgun, AWS, CloudFlare, and Atlas. There are more planned, and we're growing the coverage for Google and AWS — basically every day we add new features.
For those of you that don't know, AWS has like 50 billion possible things that you could configure, and we've only been able to configure 49 billion, so we're getting there. But they're all open source; they're all in Terraform core. So if you work on a project and you want Terraform to be able to configure and manage those resources, you can submit a pull request and we'll review it. And if it looks good, Terraform can have support for whatever particular cloud or service you're offering. So I mentioned terraform plan. This is really the power of Terraform — this is my favorite part of Terraform. How many times have you wanted to apply Puppet or Chef to a system, and you want to know what it's going to do beforehand? So you run it in dry-run mode, and it comes back, and you're like, oh, this looks good. And then you run it, and it doesn't do the thing that it told you it was going to do. It happened to me all the time, and it's because of the nature of configuration management. But we're at a level above that. So terraform plan is actually proven to work — like, mathematically proven — and the result that it provides is really, really cool. We've kind of invented this diffing-style language. You can see the plus sign at the top there, which says that we're going to add a new DigitalOcean droplet resource called web. If the resource already existed but certain values changed, it would be a little tilde. And if we were deleting it, it would be a minus sign. The values that we specify are in there. So if you remember before — this is the same Terraform config that I showed earlier — the image name, the name of the machine, the region, and the size were all specified, so those show up as hard-coded values. But you also see these computed values. Those are things that Terraform doesn't know right now. We haven't spun up the DigitalOcean instance.
So we can't possibly know what its IP for private networking is. Or if we enable private networking, we may not know what the internal versus the external IP addresses are. So these are computed. If we were changing this resource — Terraform has a state file — it would query the API and get the status, and it would know the IPv4 address, private networking, backups, et cetera. It knows all of that information, so it can intelligently diff its state file against what the API returns. If we scroll down a little bit further, you can see this interpolation thing that we had before. So the first time this record gets created, this is basically a reference to whatever the DigitalOcean droplet returns. But after this is created, this will actually be the IP address of the instance. So terraform plan is going to show you what will happen. My general rule of thumb is: you should never run terraform apply before you've run terraform plan. It's a great way to sanity-check yourself. The parallel I always use is: you would never run git add . before you ran git status. It's the same approach — show me what's going to change before I commit those changes. Any actions that Terraform is taking that it's able to explain, it will explain to you. I didn't have time to create an example for this, but let's say, for example, AWS forced your instance to shut down, as it does sometimes, or they had an outage. Terraform will actually explain to you that it needs to start that instance, because the underlying provider reported that it was stopped. So you have a more intelligent view of your infrastructure, and you can go on to the AWS status dashboard — which is always up to date and accurate — and see exactly what's going on. Before this, it was very, very difficult for operators to manage these types of things.
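The plan output being described looks roughly like this — a sketch, not the actual slide, so the attribute names and values are illustrative:

```text
$ terraform plan

+ digitalocean_droplet.web
    image:              "ubuntu-14-04-x64"
    name:               "tf-web"
    region:             "sfo1"
    size:               "512mb"
    ipv4_address:       "<computed>"
    private_networking: "<computed>"

+ dnsimple_record.hello
    domain: "example.com"
    name:   "hello"
    type:   "A"
    value:  "${digitalocean_droplet.web.ipv4_address}"
```

The `+` marks a resource to be created (`~` would mean changed in place, `-` destroyed), hard-coded values appear verbatim, and `<computed>` marks attributes Terraform can't know until the API responds.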
One thing that I haven't discussed here is that Terraform has the ability to say, instead of giving me one server or two servers, you can allocate a pool of resources. So give me a range of these servers, or for every web server give me two database servers, or for every ELB give me five front-end servers sitting behind it. These complex relationships can all be modeled in that Terraform file. So let's talk a little less technically. DevOps is cool — it's this really cool buzzword. How does Terraform fit into this DevOps cycle? Or does it divide it? Does this make it dev versus ops? Are we working backwards? Because it seems like we're taking power away from operators, and nobody really needs to tune the Linux kernel anymore. It actually doesn't, so let me explain why. Developers have certain responsibilities that they care about. These obviously aren't the only things developers care about, but these are the primary things: they wanna deploy their application to production, and they want their development environment as close to production as possible. Operators, on the other hand, want to define core infrastructure once — they don't wanna do the same thing over and over and over again — and they want it to be secure and scalable. So at a high level, whether you're a tech company or a product company, that's what developers and operations people care about. Those are their high-level concerns: can I write code, and can I run code? And when you break that down, the developers can treat Terraform configurations as a black box. They don't need to understand the production infrastructure. They just need to understand that there's a web node, and they don't need to concern themselves with what version of what package or what software it's running. They don't need to be concerned with how much memory it has or how much disk space it has, because all of that's defined in the Terraform configuration file.
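The pooled-resources idea mentioned at the top of this section maps to Terraform's count parameter. A hypothetical sketch, not from the talk's slides — the AMI ID and the 5-node sizing are made up for illustration:

```hcl
# Hypothetical: a pool of five identical web servers.
resource "aws_instance" "web" {
  count         = 5
  ami           = "ami-abc123"   # illustrative AMI ID
  instance_type = "t2.micro"
}

# One DNS record per web server: web-0, web-1, ... web-4,
# each pointing at the matching instance in the pool.
resource "dnsimple_record" "web" {
  count  = 5
  domain = "example.com"
  name   = "web-${count.index}"
  type   = "A"
  value  = "${element(aws_instance.web.*.public_ip, count.index)}"
}
```

Because the dependency graph is per-resource-instance, Terraform can create the 50-web-nodes/50-DNS-records case from earlier in parallel, creating each record as soon as its instance returns.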
If they care, they can go look. If they don't care, they can either use Terraform to spin up a cloud instance and use that as their development environment, which is at exact parity with production, or they could use Terraform with Packer and spin up a Vagrant box and run that locally on their machine. For operators, all they have to do is modify Terraform configurations. So no longer do you have to SSH into instances and manually install packages, et cetera. Obviously, when things go boo-boo, you still have to SSH into an instance, which is why sysadmins are super, super important, but Terraform is abstracting a lot of that, to the point where you could easily move from one cloud provider to another without any of your developers ever knowing, because it's all abstracted into that Terraform configuration. So in a way, the operators become self-servers. They provide Terraform configs, and the developers become self-servees. They consume those configs. So far less responsibility is placed on the developer to understand production. They're just given production effectively, or they're given a copy of production in the form of a development environment. And the operators can work independently from the developers, knowing that the developers are writing code that's targeted to their production systems. So the probability of code not working in production is significantly lower in this scenario. If we look at the same picture from before, the important piece to remember here is that the developers are probably working at maybe this level, but the operators might be working at this level. In this example, I said dev is working from VM to container, ops is working from physical to OS, but your developers might actually have credentials on your OpenStack cluster. Who knows? These arrows can kind of fluctuate, but what's important is the ability to decompose, delegate, and deploy. 
So Terraform modules are a little bit like charms, which we heard about in the last talk, which are these isolated components that you can then share. Terraform doesn't have a cool charm store like Juju does, because modules are ultimately just text files that you can pass around. So this, for example, is a Terraform module that installs Consul. It's obviously called consul. And then it's saying, pull the module from hashicorp/consul/terraform, and this particular one is gonna use AWS. Give me five servers and run version 0.4.0, which is the latest as of now. And when I shove this in my Terraform config, it's gonna give me Consul. So application authors have the ability to write Terraform configurations, and I can pull them in just the same way you can pull in a Chef community cookbook or a Puppet module from Puppet Forge, et cetera. And then in my DNS record, so this is DNSimple, for example, I can say for Consul, set my domain to whatever the module's IP address is, which is something that the module exposes. I'm not gonna go into how to write a module because it's kind of an advanced Terraform topic, but all of the docs are online. This is just an example of using it. So this is gonna give you five AWS nodes, each running a Consul service, that work together in a cluster, and then set the IP address at DNSimple for those particular records. So modules provide an easy way for component abstraction. 
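The module usage just described looks roughly like this. A hedged sketch: the source path and variable names follow the talk, but the exact output name the Consul module exposes is an assumption and may differ in current releases:

```hcl
# Pull in the community Consul module and ask for a five-node cluster.
module "consul" {
  source  = "github.com/hashicorp/consul/terraform/aws"
  servers = 5
}

# Point a DNSimple record at whatever address the module exposes
# as an output ("server_address" here is a hypothetical output name).
resource "dnsimple_record" "consul" {
  domain = "example.com"
  name   = "consul"
  type   = "A"
  value  = "${module.consul.server_address}"
}
```

The consumer of this config never sees the module's internals: instance sizes, security groups, and bootstrap scripts all live inside the module.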
So whether that's something that you wanna do internally to your organization, because you're a big organization that has a bunch of microservices and you wanna have a finer level of control, with this team working on this one and that team working on that one, or whether it's something you wanna share with the community, because you have a popular open source project and you want people to be able to deploy it with Terraform, you can share these individual components called Terraform modules that are kind of plug and play. Actually, they're less than plug and play. They're copy and paste, which I think is a lot easier than downloading some tarball or installing some software. It's literally copying text into a file, and it works. So abstracting all of that out makes it a lot easier to test as well, for unit testing purposes. It provides higher-level reasoning: instead of having 20 lines of configuration that say I need Consul, and it needs this, this, this, and this, you can extract that into a Terraform module, and then your higher-level abstraction just says, give me Consul with five servers. And the person writing that Terraform configuration doesn't need to know all of the implementation details of how Consul is deployed or what ports it's listening on. Those are all configurable, but they have sane defaults that just work. They're highly reusable across your entire infrastructure, and shareable is another point of that. And I already touched on maintenance delegation. If you work in a large organization, or if you have a particular group of people who specialize in some software, say the MongoDB people who are really good at MongoDB, they can write their MongoDB Terraform files, write the API on top of it, and just give that to people. 
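Those "sane defaults that just work" are implemented as module variables with defaults. A minimal sketch of what the inside of a hypothetical module might declare (the variable names and port are assumptions for illustration):

```hcl
# Inside the module: callers only override what they care about.
variable "servers" {
  description = "Number of Consul servers in the cluster"
  default     = 5
}

variable "http_port" {
  description = "Port the Consul HTTP API listens on"
  default     = 8500
}
```

A consumer who is happy with the defaults writes nothing; a consumer who needs a bigger cluster just sets `servers = 9` in their module block.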
So the whole organization, whether that's the whole operations organization or the whole development organization, doesn't need to know all of the internals of MongoDB. They can just look at this API, realize this is how I set up MongoDB, this is the company standard, and run with it. So it's much easier to hit the ground running. It's much easier to deploy not only an entire infrastructure, but even a simple little application, with really just one command: `terraform apply`.