Hi everybody. My name is Kyle Fox. Thank you for joining the first-ever Fed Dev and OpenGovCon event. This is sort of an experiment. What we're trying to do here is connect the contributors and users of open-source software with technology leaders in the public and private sectors, so that we're building better digital infrastructure together. Today I'm excited to have one of my good friends, Dan, here to provide a really rich set of content on how we build software that is safe, secure and effective in highly regulated environments, and how to do it in a way that's repeatable and follows commercial best practices like GitOps patterns, so that teams don't have to reinvent the wheel every single time. So with that quick intro, I'll go ahead and hand it over to Dan.

Hello. Thank you so much, Kyle. My name is Dan Fedek. I'm a solutions engineer here at HashiCorp, and welcome to Fed Dev. If you want to find me on Twitter, I'm at Dan Fedek. I'm also on LinkedIn and other places. I don't really post much, but you can find me there if you'd like. I work at a company called HashiCorp, which is the maker of Terraform and Vault and a bunch of other platform tools that I'll review today. We're going to talk about HashiCorp's multi-cloud operating model and zero trust, and how all of our tools can give you the ability to build, provision, secure, connect and run any application across any environment, in the cloud or on-prem. Before we get started, there's a URL here, the bit.ly3pp. That URL is how you get into the virtual room if you want to ask questions. It will get you to a Slack channel.
Sign in to the Slack channel; in that room you'll have access to the lab and to this PowerPoint deck, so you can take home all the resources linked in it. After today you'll be able to use that link to do the labs at home. We're going to be reviewing Terraform, Consul, Vault and Boundary in lab format. So after today, once you've had all your questions answered, you should be able to go home, review this, and rerun the lab as many times as you want over the next two weeks. The links are in the general channel in that Slack room.

Before we get started, I want to talk a little bit about the company, because a lot of people know some of the applications we make, like Terraform or Vault, but don't really know who HashiCorp is. HashiCorp is a commercial company; we're publicly traded as HCP. We started off as an open source company, and we now offer enterprise options for most of our tools. We have a product line of eight applications, with another one coming that's more around development and UI/UX, but for right now these are the eight platform tools that we have. We divide them based on where they fall in what we call the cloud operating model. In our infrastructure tier we have Packer and Terraform: images as code and infrastructure as code. Consul is our service networking and service registry tool. Boundary is our zero trust network access tool. Vault is our secrets and credential management application and identity broker. And then at the application layer we have Nomad, our orchestration tool for running applications. Those applications could be anything from a container, like you'd run in a Kubernetes environment.
But we can also run VMs, Python scripts and Java JARs. We can run basically any workload you have, orchestrated in a very large-scale environment. One of our interesting talking points around Nomad is that our largest Nomad cluster is 100,000 nodes and 40 million containers, all in one cluster, which is pretty impressive. And then we have Waypoint and Vagrant. Vagrant was actually our first application; it became kind of the developer standard for standing up development environments quickly. If I wanted to spin up a dev environment for my developers, I could codify those steps, do a vagrant up, and now I have a development environment to work in. Waypoint is a newer tool that lets us deploy applications across multiple different targets: Kubernetes, an EC2 instance with a load balancer in front of it, some bare metal infrastructure. Waypoint allows us to codify those different deployment methods and then do a waypoint deploy. It's one command. The idea is that we want to be able to provision, secure, connect and run any application across any environment, and deploy with a common workflow. That's the biggest thing here, and that's where the multi-cloud operating model comes from.

So when we first started out... I don't know how long you all have been doing this, but I've been doing it since around '96. For some people that's the young guy in the room; for some people that's really old. In '96 I started working in the military on some interesting physical servers that did some cool stuff. After that I left the military and did a lot of commercial work, racking and stacking servers in data centers all over the place, from AOL to marketing analytics platforms to Oracle. Over that time I learned a lot about server, storage and network.
But then there was this point where I helped start a startup called ZEUS. We were a marketing analytics platform, and we had to make a decision because we were right on that precipice: are we going to adopt the cloud as a first-class citizen in our environment, or are we going to stick with physical servers in our physical data center? We ended up doing a hybrid approach, of course, because we were all used to the traditional data center, but we started to move certain workloads out into the cloud. The biggest thing we wanted was to make sure we could scale our application. We had a Super Bowl ad we wanted to support, and we went from a modest 200 users a second all the way to something like 50,000 users a second, all in a period of about an hour, which was an insane amount of bursting. We couldn't do that with our normal physical resources; we had to scale out into the cloud. So when you move into the cloud, whether that's your private cloud, AWS, Azure, GCP, OCI, whatever your cloud environment is, you get a new dynamic environment to work in, and with that dynamic environment comes a change in thinking. The cloud introduces new principles, and from a platform perspective we break them into four main layers. The first layer, down at the bottom, is the infrastructure layer. I alluded to it in an earlier slide when I was talking about the company itself: we went from a world of a traditional data center, with static servers we were racking and stacking, to using infrastructure as code.
We went from a security model where the IP address was the unit of security, where we'd grant permissions from a firewall or a VPN to a specific IP address. As we move into the public cloud, we can't do that anymore, because IP addresses are brittle; they're not dependable because they go away all the time. So we're moving to a new concept of security based on identity. Moving up the stack, networking went from host-based IP addresses to service-based; we're moving up the OSI model to the application layer as far as connecting applications together. And at the application layer of the multi-cloud operating model, we're thinking about monolith versus microservices. We're taking the monolith and breaking its functions up into services that we can scale independently based on what each one does. With all of that come new principles, but also different endpoints. If I'm in VMware and AWS, or on-prem, I now have new ways to do infrastructure as code. I have new ways to do security. I have my identity providers: LDAP, Active Directory, AWS IAM, Azure Active Directory, GCP IAM. For networking I have VMware NSX, AWS Cloud Map, Open Service Mesh, or Istio in general on Kubernetes. And at the application layer I've got a million different orchestration tools: Docker Swarm, ECS, EKS. So we have different ways to do all these things, and everything is very dynamic now. It's no longer just load an application onto a physical server, turn it on, point the firewall at the IP address, and we're done.
Things are moving around a lot more. Where we come in is really infrastructure, security and networking, and even the application layer. Nomad is our first-class citizen from a HashiCorp perspective, but we also work perfectly with Docker Swarm, with ECS, with EKS, with any of the Rancher tools. Terraform can provision that infrastructure, Vault can secure it, and Consul can connect the applications together, just like an Istio would. We're going to go over some of these tools today in the labs.

As we've moved into this model, shifting away from the traditional data center, there are really three stages. The first stage is: okay, we're going to start turning on servers in the cloud, so what do we do? The first thing we do is figure out what the cloud is. We log into the console and figure out how to create a server. Well, that server needs to go in a VNet or a VPC, and it needs security groups and route tables and all kinds of other things. We learn how to do that by clicking through the console. Now, the second time somebody asks me to do that, I'm going to get annoyed at repeating the same work. And it's not only an annoyance for the platform team: you also end up creating resources that aren't the same. You have these disparate one-off servers that aren't tagged correctly, that have different IP address schemes, and all kinds of other drift you have to account for. So as we move into stage two, we start standardizing. We're no longer clicking through the console. We're building a platform team that can manage these tools. We're building infrastructure as code. We're creating modules we can share across different organizations. We're creating policy as code.
We're creating images as code so we can create common patterns and share them across organizations. I was talking earlier about Spotify: they have this concept of a guild, or a community of practice, within an organization. Normally you'd have multiple teams working together, all using Terraform. Well, why don't we share the modules between those teams as an organization? Then I can take those modules and just change the variables based on the application I need in the future. And as we scale even further to the right, we can take everything we've learned in the public cloud and move it back into our private data center. We can say, hey, let's create a private cloud that uses the same tools we use in the public cloud, and bring some of those learnings back into our private estate.

The three big things we talk about a lot are cost, risk and speed, CRS, which is funny because I can never remember them. On cost: increasing efficiency and managing cloud spend. When I was running my platform team at ZEUS, I always had to know what was coming in and going out from a FinOps, financial operations, perspective. I needed to be able to tag all my resources; I needed to know what each part of the stack cost us and how much value it was delivering. On risk: this is something that matters a lot to the government. We face an increased risk of attacks; our critical infrastructure is always under pressure. We have weapon systems and DOD organizations that are all under pressure as far as risk and security are concerned.
So how do we reduce that risk and centralize policy enforcement? We'll talk about some of the features we have with Terraform and our other products, where we can add policy as code into the workflows of these applications. And then speed. The first time I deployed an application by clicking through that application stack, it took me a good while to do it all. By the tenth time I'd done the same thing over and over, because I'm a slow learner, I was able to just change a couple of variables and redeploy. Our time to market, or time to mission, was a lot faster: I could get that application out the door in a day or an hour, versus a week of clicking around.

So again, HashiCorp's products map onto the cloud operating model by layer, starting with provisioning infrastructure with Terraform. And I want to reiterate what Packer does for you. Typically, once you've provisioned your network resources, you have to build a server, and that server needs to start from some sort of golden image. How do you create that golden image? You take a Windows or Linux base, turn it on, run a bunch of scripts or commands on it, then clone the result and save the clone somewhere. Packer allows you to codify that whole process, do a packer build, and push the resulting artifacts out to whatever regions you need them in. You also get the benefit of CI/CD workflows and build jobs, so you can put images through the same governance and approval mechanism you'd have as a developer getting your code approved.
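To make the "codify the golden image" idea concrete, here is a minimal sketch of what a Packer template might look like. This is an illustration only: the base AMI, region, image name and hardening script are all assumptions, not something from the talk.

```hcl
# Hypothetical Packer template (HCL2): build an AWS "golden image" from an
# Ubuntu base, run a hardening script, and save the result as a new AMI.
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.0.0"
    }
  }
}

source "amazon-ebs" "golden" {
  region        = "us-east-1"
  instance_type = "t3.micro"
  source_ami    = "ami-0123456789abcdef0" # placeholder base image
  ssh_username  = "ubuntu"
  ami_name      = "golden-ubuntu-{{timestamp}}"
}

build {
  sources = ["source.amazon-ebs.golden"]

  provisioner "shell" {
    script = "./harden.sh" # your baseline/hardening steps, codified
  }
}
```

Running packer build on a file like this replaces the manual "clone, script, re-clone" loop with one repeatable, reviewable step that can sit in a CI/CD pipeline.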
Moving up the stack: security with Boundary and Vault, networking with Consul, and Waypoint and Nomad for the application stack. This is where we play: squarely in the app dev and platform team space. Anybody working in this stack is usually supporting the app dev team, usually as a platform team. The platform team concept has evolved over time, we've gone back and forth between DevOps, DevSecOps and platform team, but this is squarely where the platform team lives: everything from provisioning the infrastructure to deploying the applications on top of it.

Our first product, at the base layer of the cloud operating model, is Terraform. After we go through some of Terraform's features, we're going to do an Instruqt lab on it, where we'll walk through writing Terraform code and show you what it looks like. The biggest problem with multi-cloud: we went to a dynamic environment, and we're going multi-cloud. When I say multi-cloud, it doesn't necessarily mean we're using Google and Azure and Oracle. It could just be that we have an on-prem presence plus one cloud; that's already multi-cloud, right? Private cloud and public cloud. This can be very slow, because we have new dynamic endpoints to hit and new things to learn. So how do we make it faster? Through infrastructure as code. What you see here is a picture of the practitioner, the platform engineer, writing infrastructure as code. He can start with modules from our public registry; there are thousands of modules there to use. In some government organizations, it makes sense to put restrictions on the types of modules you borrow.
We should put scanners around any public module. The public registry is just like Docker Hub or GitHub: if you're going to borrow somebody's code, you have to go through the same code scanning and software lifecycle mechanisms you'd apply to anything you bring in from open source. The practitioner borrows from the community, or writes their own modules, and then runs a plan and an apply. The plan and the apply are basically: hey, I'm about to build some infrastructure; what is it going to look like? A plan shows you the code you wrote and the resources you're about to build, and the apply actually goes out and creates those resources in the world. As I said, we have over 2,900 providers. With all these providers you can do everything from provisioning your Kubernetes stack, to provisioning VMs, to provisioning resources in your Cisco, Palo Alto or Prisma Cloud gear, all the way to provisioning a pizza if you wanted. There's actually a Papa John's provider you could borrow. Maybe we should do that today; ordering a pizza in here would be kind of cool. And because the providers are open source, we can borrow from the community, which is very important, especially for an organization like this where we can use other people's work as part of the open source community. What you see here is an example of two resources being built: one is an instance in GCP, and one is a DNS record that points to it. Everything in Terraform is written in what we call HCL. If I were to put HCL on the political spectrum, it would be a moderate, right?
YAML would be this beautiful, elegant language on one side that only cares about human readability, and JSON is this very structured but machine-fast format on the other, with lots of commas that the machines rip through. HCL is a little bit of both, because we not only have to read the code, we have to write it as well. Writing it is very simple; there are far fewer commas involved. This is an example of those two resources, and you can see there's a dependency between them: the second resource depends on the first, because its value points to a value output by the first resource. It's a declarative language, and you get versioning by using Git connectors. As you can see here, we support Azure DevOps, AWS, Bitbucket, GitHub and GitLab as the Git repository mechanism. So we can write the code, put it in our VCS repository, do branching and forking just like a developer, and put changes through our normal CI/CD workflows, including approvals, which is very important when we're trying to operate with governance. We want to make sure all of our changes are approved before they get deployed. Oh, also: I said we write in HCL, and that was the first language, but we also have the CDKTF, the Cloud Development Kit for Terraform. If you're already an application developer and you want to write in TypeScript, or you're used to Java (I don't know why you'd want to write Java to deploy infrastructure, but hey), we've also got Python, Java, C# and Go for provisioning that infrastructure. So if you're already familiar with a programming language, you can use it with the CDKTF, which is really nice.
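The two resources described, a GCP instance and a DNS record that depends on it, might be sketched in HCL roughly like this. All names, the zone, machine type and image are illustrative assumptions, not the slide's exact code.

```hcl
# Hypothetical sketch: a GCP compute instance plus a DNS A record whose
# value is the instance's public IP, creating an implicit dependency.
resource "google_compute_instance" "app" {
  name         = "app-server"
  machine_type = "e2-medium"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    network       = "default"
    access_config {} # gives the instance an ephemeral public IP
  }
}

resource "google_dns_record_set" "app" {
  managed_zone = "example-zone"
  name         = "app.example.com."
  type         = "A"
  ttl          = 300

  # The dependency: this value comes from the instance above, so Terraform
  # knows to create the instance first.
  rrdatas = [google_compute_instance.app.network_interface[0].access_config[0].nat_ip]
}
```

Because the second resource references an attribute of the first, Terraform orders the creation automatically; no explicit "create this first" step is needed.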
Again, we reduce risk, reduce cost and increase agility by adopting infrastructure as code and version control. This is a picture from our enterprise and cloud offerings showing policy as code and version control on a plan and an apply. When we're using Terraform Cloud or Terraform Enterprise, we can watch for a change in the VCS repository. That change kicks off an automated build process, which can run a plan or an apply depending on how you set it up. You can say things like: if it's on this branch, do a plan; if it's on that branch, do an apply. You can put policy checks in. It lets you act like a developer, and this basically is development, except you're developing infrastructure. With Terraform Cloud and Terraform Enterprise we also have GitHub App authentication, so the Terraform product can talk to GitHub using the app's own authentication instead of just a token.

Now, a little bit about Terraform and how we manage state. We saw the resources written in HCL; that's declarative code in a file. If I'm in the directory with that main.tf, a Terraform file, and I run a plan, Terraform reads through it and asks: is there a state file here, or wherever we've told Terraform the state lives? State can live in the cloud, in an S3 bucket, or locally in a JSON file. We tell Terraform where we want that state to live, and then it says: okay, you're about to build this; what's out there now? If nothing exists, it creates a state file. A state file is literally just a JSON blob describing the infrastructure that's already there.
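As a concrete illustration of "state can live in an S3 bucket", the location is typically declared with a backend block like the following. This is a minimal sketch; the bucket, key and region names are made up.

```hcl
# Hypothetical remote-state configuration: keep terraform.tfstate in an
# existing S3 bucket instead of a local JSON file.
terraform {
  backend "s3" {
    bucket = "my-terraform-state"     # pre-existing bucket for state
    key    = "app/terraform.tfstate"  # path to this workspace's state blob
    region = "us-east-1"
  }
}
```

With a remote backend, every plan and apply reads and writes the same shared state, so a whole team sees one consistent view of what already exists.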
Then it does a diff between what's about to happen and what's already there. What it shows you here are all the runs that have happened, and each run ties back to a Git commit; you can see the commit attached to each of these runs. And then, this is what I want to show you (I actually just found out I could do this): when we do a plan, you can see the resources being created. All these green pluses mean we're going to build new infrastructure. A change to existing infrastructure shows a yellow tilde; a deletion of existing infrastructure shows a red minus. So before any change you can see exactly what's going to happen. At the bottom you see the summary: the apply completed, we added one resource, changed three resources and deleted one. Terraform tells you beforehand what is about to happen and afterwards what happened, which is really nice. Terraform Cloud and Terraform Enterprise show this in a UI, but it can all be done from the command line with the terraform command. One thing I didn't mention about all the tools we saw up on the board: those are all open source tools. We do have enterprise offerings, but you can download all of our tools today from our website at hashicorp.com. Terraform, Consul, Vault, everything we're going to do today can be done with the open source tools. And one of the last things: Terraform runs. You can go back and see all the runs that have happened, which shows you your state history. And then modules. This is really important.
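The plus, tilde and minus markers described above appear in plan output roughly like this. This is an illustrative mock-up, not output from the lab, and the resource names are invented.

```text
  # green plus: resource will be created
  + resource "aws_instance" "web" { ... }

  # yellow tilde: resource will be changed in place
  ~ resource "aws_security_group" "web" { ... }

  # red minus: resource will be destroyed
  - resource "aws_s3_bucket" "old_logs" { ... }

Plan: 1 to add, 1 to change, 1 to destroy.
```

Reading that one summary line before approving an apply is the cheapest guardrail there is: it tells you whether you're about to build, modify or delete something.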
Again, just like a developer who creates a library when they're doing the same thing over and over (the DRY principle: don't repeat yourself), in the Terraform world you create modules. I can store those modules in the public registry, or in a private registry using Terraform Enterprise. Say your organization has a set of library modules or security patterns you want to share internally: put those in your private registry, and everybody else in your organization can use them. This makes things a lot faster. I can be a developer who just wants a new server with a load balancer in front of it and my Route 53 or DNS record, and I can grab the module that was already created for that, because it's something I do over and over, change some input variables, and deploy my infrastructure out into the world. Again, the three big things: cost, risk and speed. The biggest use of modules for me: I started walking through the SRE handbook from Google, and one of the things it talks about is examining your toil. What is the toil I do every day? I'm getting the same tickets: writing DNS records, adding users, updating Vault credentials. Those were the three things I felt like I was doing all the time. How could I put each of those into a module and tie it to something like a ServiceNow or Jira ticket? Then, once the ticket was approved, I could automatically apply it with Terraform and build it out in the infrastructure.
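Consuming a shared module looks roughly like this: grab the module, set a few input variables, deploy. The module source path, version and variable names below are assumptions for illustration, not a real module.

```hcl
# Hypothetical sketch of reusing a module from a private registry.
module "web_service" {
  source  = "app.terraform.io/my-org/web-service/aws" # private registry path
  version = "~> 1.2"

  # The only things a consumer touches are the inputs:
  instance_count = 2
  instance_type  = "t3.small"
  dns_name       = "team-app.example.com"
}

# Assumed module output, surfaced for the consumer's own use.
output "load_balancer_dns" {
  value = module.web_service.lb_dns_name
}
```

The consumer never edits the module's internals; the platform team changes them once, bumps the version, and every caller picks it up on the next plan.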
That's a great way to increase speed and reduce risk, because with modules we do things the same way every single time.

Policy as code. I touched on this a little already. All of our enterprise tools include Sentinel, our policy as code framework. With Terraform Cloud, we can check runs against a policy library; for example, checking whether an S3 bucket is public. When Amazon first came out with S3, creating a bucket would make it public by default, so a lot of people were creating buckets without realizing they were exposed to everybody. We can write a policy that says: no, don't do that. We can write policies that put a constraint around how much money you're spending, say $500 or $1,000 a month, so if you try to spend over that, somebody gets notified. These are guardrails you want to add in. And especially today, if you're in the government, you deal with a lot of policy and governance. We'll go through some of the actual rules and regulations, but all of these can be codified. If you have some dusty handbook or PDF somewhere listing all the rules you need to abide by to be operational and meet an ATO, we can codify those with policy as code. Here's an example of some policy as code: we're basically saying that if you're going to deploy, you have to use a specific VM size, from a list of all the VM types you're allowed to use.
So if it's outside that list of VMs, like spinning up an i3.metal or some XL instance that costs you $50,000 a month when you accidentally spin up five of them for a cluster, the policy triggers. There's a lot you can do here from a cost, policy and governance perspective. And you have different enforcement levels for these policies. Advisory: the plan runs and warns you, hey, you're about to spend over this amount of money; you probably shouldn't, or at least talk to your boss. Soft mandatory: an explicit override is required. Hard mandatory: provisioning is blocked outright. If you try to provision a public-facing S3 bucket and that's something you're not allowed to do, the job just fails. Think about this from an ATO perspective: I'm trying to get an authority to operate, and I might have a list of 300 items from a STIG that I have to check my environment against. What if I could codify all of those checks into a policy framework? Then, before I deploy my infrastructure, I check against those policies, so I never deploy infrastructure that doesn't meet the requirements. This is almost pre-ATO work. When somebody comes in, you can say: here are the policy libraries we used; everything that was deployed went through this system; now you can do your ATO on that basis. We also have published policy libraries. If you go out to the public Terraform registry, registry.terraform.io, you can look up policy libraries; we have GCP, Azure and AWS best-practice policies you can go out and see.
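An "allowed VM sizes" check like the one described might look roughly like this in Sentinel. This is a hedged sketch: the allowed list, the aws_instance resource type and the attribute paths are illustrative assumptions.

```sentinel
# Hypothetical Sentinel policy: fail any Terraform run that creates an
# EC2 instance whose type is not on the approved list.
import "tfplan/v2" as tfplan

allowed_types = ["t3.micro", "t3.small", "t3.medium"]

# Collect every EC2 instance this plan would create.
instances = filter tfplan.resource_changes as _, rc {
	rc.type is "aws_instance" and
	rc.change.actions contains "create"
}

# The main rule passes only if every new instance uses an approved type.
main = rule {
	all instances as _, rc {
		rc.change.after.instance_type in allowed_types
	}
}
```

Attached to a workspace as advisory, soft-mandatory or hard-mandatory, this same rule becomes a warning, an override gate, or a hard block on provisioning.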
We're actually talking with a lot of government organizations about coming up with our own policies, or having somebody publish policies to the public registry, for existing CISA or NIST requirements that we can apply against our infrastructure. Some examples of NIST policies: security and privacy controls for information systems; the zero trust architecture framework; engineering trustworthy, secure systems. These are all policies we can apply before we deploy infrastructure into the cloud, whether that's public or on-prem. And we have the CISA zero trust architecture with its pillars. We're going to go through some of those pillars today: identity, networks, application workloads, data, governance. These are all things we help touch. We don't touch every single pillar, but there aren't many we don't: identity, devices, network, application workloads, data, visibility and analytics. We have metrics on everything we do, so you have the ability to make sure it's always monitored, and we can do audit logging. Automation and orchestration, that's Terraform; governance, that's policy as code.

And with that, we're going to go through Terraform in the lab and actually work on building out some infrastructure. I'm going to go back to my laptop here and start that lab, and I'll walk you all through it. While we're doing that, let me just check the Slack channel to make sure nobody's asking any questions. Okay, cool. I'm going to go to the general channel, and then to the lab link in here.
So at the following hands-on link, I click through and I should land on a page that has a few pieces of content for us. Today we're going to go through the introduction to Terraform on AWS first. Then we'll talk a little more about zero trust and how HashiCorp works with zero trust, and we'll go through building an application, securing the credentials in that application, and creating a PKI CA between our applications — so all of our NPE, non-person-entity, application-to-application communication can happen securely with least privilege. And if we have time, we'll do an introduction to Boundary, which is our tool for getting network access to those systems. So we're going to go through the introduction to Terraform and hit start on that. It takes a little while because it has to spin up an environment, but this is infrastructure as code on AWS — we also have one for Azure if you're interested in that. Anything you'd like to see from a HashiCorp perspective, we probably have a lab on it. You can hit me up on Twitter, or daniel.fetic at HashiCorp.com, and we can give you access to all kinds of resources just to get people hands-on. So this takes a couple of minutes. One thing to note about the Terraform command-line tool: all of our tools are basically written in Go, and for the most part you can download any of them from the releases page on the HashiCorp site. As long as Go compiles for that platform, you can run it there — Mac, FreeBSD, OpenBSD, Windows, Solaris, Linux — and this goes for all of our other tools as well. In this lab, we're going to run Terraform on a Linux machine. The Terraform language is designed to be both human- and machine-readable; we talked about that.
Yeah, so if you use VS Code, you can use HCL syntax highlighting. Also, if you're in a Terraform directory, you can always type terraform validate to check that your Terraform is clean, and terraform fmt to format it — it aligns all the equals signs; it's basically a linter. I think we'll go over that a little today. So again, Terraform open source is literally just a binary; it can run on anything. You write the code and put it into a directory, and when the terraform command runs there, it looks for .tf files. So I can break out my Terraform code any way I want: I can put it all in one file, or I can break it out for readability into things like variables files, outputs files, locals — all kinds of stuff. So we're going to hit the start button here. I just want to make sure we can see everything. We're good? Okay. So up top in all these Instruqt labs, we have separate tabs. What you're seeing here is just a shell; we also have a code editor, so we can look at the code from a VS Code-type application. This is the syntax highlighting I was talking about. One thing in general: when you make a change to one of these files through the code editor — say I hit enter here — you'll see a little floppy disk icon in the tab. To keep a change, you actually have to hit the save button, the floppy disk, and that commits it to disk. I think it's just one of the features of the code editor they're using.
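To make the fmt behavior concrete, here's a tiny illustrative resource — the arguments are placeholders — showing what `terraform fmt` does to alignment:

```hcl
# Before fmt (valid HCL, just misaligned):
#   cidr_block = "10.0.0.0/16"
#   enable_dns_hostnames = true
#
# After `terraform fmt` -- the equals signs line up:
resource "aws_vpc" "example" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}
```

`terraform validate` checks that the configuration parses and is internally consistent; `terraform fmt` only rewrites whitespace and never changes behavior.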
So you can see here we've got our main.tf, outputs.tf, terraform.tfvars, and our variables.tf, and we have the licenses all in the same directory. And then we have our shell, where we can run the terraform commands — terraform show and so on; it's all available here. So we're just going to hit the check button. At the end of each one of these — so when we entered this, we entered a track, and each of these events is a challenge within the track. We just went through the first challenge, which is basically just looking at how the Instruqt lab is set up. Terraform open source is a command-line application you can download and run from your laptop or a virtual workstation — again, written in Go, so you can run it wherever you want. Okay, so: getting to know Terraform. I'm in the directory, I've got some Terraform code here. The first thing I want to do is see what version of Terraform I'm using. Another thing about the Instruqt platform: if I don't want to type, I can always just click on the code block and it copies into my buffer. It makes this easier if you don't feel like typing. Then you can paste it: terraform version. Okay, so we're using 1.1.4. There are newer versions of Terraform; we're not going to update Terraform right now, but it's as simple as going out to the web and downloading the latest version. Now, your Terraform code is versioned against the version of the Terraform binary you're using. So as Terraform upgrades, as we make changes to the Terraform binary, there are some language features that change — certain things about the code blocks that can change.
So you just have to know that, and that's why we actually have a block within Terraform that says: we're using this version of Terraform to apply this code. So, terraform version, and of course terraform help — you can see all the different commands there: init, validate, plan, apply, destroy. And again, you saw the code editor. And yes, we do know that HCL stands for HashiCorp Configuration Language, because we already talked about that. Okay. So the first step is: hey, I want to create some AWS infrastructure. In order to authenticate to AWS and build those resources, Terraform requires a provider, configured with credentials appropriate for an IAM policy. So let me just show you real quick. Say I wanted to build some AWS resources. The first thing I'd do is search for the Terraform AWS provider — and there's a provider for every major CSP, cloud service provider. In this case, we're going to use AWS. If I click on the Terraform AWS provider, I'm using the standard, HashiCorp-built AWS provider, and the first thing I look at is "use provider." In there, you can see they give you a provider block. In this provider block are things like: we want to use the AWS provider, and we want to pin a specific provider version. Those versions change based on new features AWS comes out with — if AWS releases a new service and the provider wants to add support for it, they bump their version and add the code for that new AWS resource. And then the configuration options are things like your AWS credentials, the region you're going to work in, that kind of stuff. So this is the first step I always do: I look up the provider and actually go look at the example. The AWS provider page shows one example, but they'll show you all of the different inputs.
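That version-pinning block and the provider block look roughly like this — the version constraints here are placeholders, not the lab's exact pins:

```hcl
# Pin the Terraform binary version and the provider version
# this code was written against.
terraform {
  required_version = ">= 1.1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # allow 4.x patch/minor updates only
    }
  }
}

# Provider configuration: region here; credentials typically come
# from environment variables or an AWS profile rather than code.
provider "aws" {
  region = "us-east-1"
}
```

Pinning matters because, as mentioned, language features and provider behavior can change between versions, and you want applies to be reproducible.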
And they'll show you examples of how to use the provider below. So, back to our Instruqt lab. For this training environment, we're going to use the AWS access key and secret access key given to us by the lab. With Instruqt, we can say "turn on AWS" and it actually provides root-level credentials for an AWS account. No spinning up Bitcoin mining while we're in here — the environment turns off after a few hours of use. But you can use these AWS access keys in here, and once the lab expires, the keys go away; the whole account goes away. So you can see here: echo — you can just type these commands in — and you can see those are our creds. You should see valid AWS keys. I could do something like aws sts get-caller-identity... oh, I guess the AWS CLI isn't installed. I thought it was. Never mind — you could do that if you had it, but you do have the keys. Terraform will still work with just the keys, because we're using the provider instead of the AWS binary here. All right. As we move into the next one, it talks a little bit about the different types of files. Terraform will read anything — I talked about this a little — ending in .tf or .tfvars. By convention, Terraform workspaces will contain a main.tf; this is a pretty strong convention. A lot of people also use variables and outputs files; I usually break out providers, variables, and outputs, plus a .tfvars file. And sometimes I'll do what they suggest here, a loadbalancer.tf: if I have a huge amount of code and I want to break it up so I can quickly get to the resource I want, maybe I'll put the load balancer in a separate file so I can get to it quickly. Okay, what does Terraform code look like? Again, here are my three files: main, outputs, and variables.tf.
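The conventional layout described above might look like this (the workspace name is just a placeholder):

```text
my-workspace/
├── main.tf           # resources; the strong convention
├── variables.tf      # input variable declarations
├── outputs.tf        # output values for use elsewhere
└── terraform.tfvars  # values that override variable defaults
```

Terraform reads every .tf file in the directory regardless of name, so this split is purely for human readability.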
So what do they do, though? There are two types of variables: input variables and output variables. That's the first thing. An input variable is declared in your variables.tf. So if I have my variables.tf file, I have all the variables I want to declare — and when I say declare, that means I can use them in different ways. A lot of times what we'll do is set sane defaults. You can see here in this variables file we have a default equals something; for the admin username, we have default equals hashicorp. So if I'm in a production environment, I might have a set of declared, pre-provisioned variables with defaults as sane as possible. And then if I want to make a change, I can still write an override: I can set that variable through a .tfvars file and say, no, I want the username to be fedek or whatever, or the height to be 600, I don't know. So those are input variables — values we take in. For example, if I was writing a module and I wanted to give people the ability to change the instance type, I'd have an input variable called instance type. If I want to just create a small micro instance, I could say instance type equals the type I want to use. And then we have output variables. An output variable is a value I want to be able to take somewhere else. I've created all these resources; I can go find the resources I created, access their attributes, and use them somewhere else in a different set of code — I can read them via remote state from the state file. So I can say: hey, you've created an instance, or a VPC. Almost everything I do is going to live in a VPC, so I'm probably going to need to know the VPC ID.
So I declare an output variable for the VPC ID, and then I can pull that into some other Terraform code — my input becomes something like data.vpc.vpc_id — instead of having to change the input variable every single time that VPC changes, or the subnets change, or the route tables change. Those can automatically be populated downstream to other resources. You'll see here that we have — I don't know if we're going to go through this, but I'll talk about it — an order of precedence for input variables. The .tfvars file holds the values we want to override the defaults in variables.tf with. We also have locals: a local variable is something you'd use if you want to compute a value on the fly at runtime, right before you apply, and then use it in your code. So that's it for these three files: our main.tf; our variables.tf, which is our input variables; and our outputs, which can be used in other resources. Okay, like I said before, we can go out and browse the providers at the registry URL, and — like I said — you can do anything from provisioning AWS or other cloud service provider resources to ordering yourself a pizza. You can do whatever you want, as long as there's a provider for it. So you can go out and browse the different providers. This is very important if you're running other platforms. A couple of other ways I use providers: if you're using something like CloudFormation — CloudFormation is very good, I like CloudFormation — it works on AWS resources only. So say I wanted to provision my Snowflake database, which is a data-warehousing SaaS platform, or my GitHub repos: I want to make sure every GitHub repo has a specific set of tags, is part of a team.
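The input-variable side of that can be sketched like this — the names and defaults are illustrative, loosely following the lab's examples:

```hcl
# variables.tf -- declarations with sane defaults
variable "admin_username" {
  type        = string
  description = "Default admin user for provisioned machines"
  default     = "hashicorp"
}

variable "instance_type" {
  type        = string
  description = "EC2 instance type a module consumer can override"
  default     = "t2.micro"
}
```

A consumer then overrides a default without touching variables.tf, e.g. `admin_username = "fedek"` in terraform.tfvars, or `-var 'instance_type=t3.small'` on the command line.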
There are specific things I want to apply to that GitHub repo, and I can do that through Terraform: I can use the GitHub provider and provision all my GitHub repositories the exact same way. So you can do things other than just AWS resources or GCP resources — you can use your SaaS providers, and you can even write your own provider. We have an open-source provider framework, so you can actually write your own provider if you want to do things for yourself — useful if you're in the app-dev world, which a lot of people here are. Okay. So I've built my Terraform code. The first thing I want to do is init that Terraform code. The init process goes out and says: okay, you're using this provider, I'm going to download that provider into the local directory. That's what you'll see when we run this terraform init command: it creates a providers directory, and it also sets up the state, which can be empty if we haven't done anything yet. So we're going to do terraform init first. Okay, it's successfully initialized, and now we have a .terraform directory. Look inside and you can see there's a providers directory. All right, so we'll do a quick check and move on to the next challenge. It's quiz time. If you'd like, we'll ask questions and you can answer in the Slack channel. Providers and modules: where does Terraform store its modules and providers? Anyone locally want to answer? I'm going to check the Slack channel in case anybody's actually answering. Looks pretty clean in there. The answer is: in your .terraform directory. And I hope I answered this correctly so I don't get in trouble here. There you go, that's right. Okay, I actually did talk about this as well: Terraform has a built-in syntax checker. You can do a terraform validate — if I'm in that directory, I run it and it checks that all of your Terraform code is written with the proper syntax.
The validate step actually runs whether you call it or not: when you run a plan, the first thing Terraform does is a validate before it checks the resources. So let's just run that real quick. All right, the configuration is valid. So another thing I'll show: there was something off in my main.tf file — actually, it's probably the space I entered in the code editor a while ago. You can see that main.tf has the extra space there now. But basically, if the alignment is off, let's do this: save it, and then we'll run terraform fmt. Hey, it found something in main.tf, and you can see it fixed the formatting there. That's just a linter; it's nice to be able to read your Terraform code. All right, we'll check that. Okay, we talked about terraform plan earlier. The plan is going to look at the code you've written — the variables, the outputs, all of your main.tf and your tfvars files — and then say: okay, I have the state file you've declared; what's the difference between what's in that state and what you're asking me to do? I'm going to create, delete, or update some infrastructure, and I'm going to show you the diff — what's about to happen. In the case of what we're doing now, it's a dry run, just the terraform plan command. So if I do terraform plan, it's asking me to fill something in. Okay — there was a variable that was declared but not defined. In situations where you've declared a variable but haven't defined it in the config, Terraform will prompt you for it. In this case it wants a prefix — it says a short string of lowercase letters or numbers — so we're going to call this feddev. And we're going to hit check. So now it's — whoops, that's not what I meant to do.
Darn it. Let me go back to that. Apparently I can't go back in this one; either way, it would have done the plan. Let me see if I can actually do a plan again — terraform plan. There's a lock file. Let's see. All right, we're going to just do a plan with -lock=false. I think it's because I went through too fast and hit the next button. There we go — that's what I was trying to show you earlier. Okay. So I added the prefix, and then it said: hey, there's no existing infrastructure in the state file, so now I'm going to go create some. And this is all the infrastructure, and the attributes of that infrastructure, that I'm about to deploy. In this case I'm creating a VPC, and we're going to call the VPC hashicat — I think the end goal is that we're going to build a cat application where we can see different cats on a website. In this case, we're passing in the CIDR block, and this is actually using the AWS VPC resource. If you go to the registry, you should be able to just search for "aws vpc." So if you're ever trying to figure out how to deploy something, you can look for either a module — like the Terraform VPC module — or the Terraform resource itself. A lot of times I'll just Google "aws vpc resource terraform," and that usually gets me there faster than looking anywhere else. At the bottom of that page, you can see all of the input arguments I can set. If I was going through the AWS console, I'd still be asked a lot of these same questions; in this case, I'm able to pass them in as attributes in code. So you can see here: we passed in the CIDR block, the ID, some tags — it actually added the feddev VPC tag and tagged the resources in the state file as feddev-vpc-us-east-1. We're going to do a check.
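The VPC being planned here looks roughly like the registry's aws_vpc example — this is a sketch, with the CIDR and tag values as assumptions mirroring the lab's naming:

```hcl
# Minimal aws_vpc resource, following the registry docs.
resource "aws_vpc" "hashicat" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "${var.prefix}-vpc-us-east-1"
  }
}
```

This is the whole pattern: resource type ("aws_vpc"), a local name ("hashicat"), and the arguments the registry page documents.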
Okay, so I have to read: open the terraform.tfvars file and set your prefix variable by deleting the... oh, okay. So they want me to do this in the terraform.tfvars file, and I'm going to take this out. It was actually checking to see whether I'd done that — and I had not. So I'm going to save terraform.tfvars. Again, prefix is actually declared in my variables.tf — it's the first declared variable — and because I've declared it there without a sane default, Terraform is going to prompt me for it. I can override that with my terraform.tfvars file, and that's what this is. So I uncommented the line, I'm going to save it, and run the command one more time. Ctrl-C. Okay, you can see I did the same thing. Now I'll check it one more time. Quick check — phew, I think I'm back on track here. Again, just like we said before, you can override the variables in variables.tf with anything in terraform.tfvars. There's an order of precedence: command-line flags first, then terraform.tfvars, then the defaults in variables.tf. And if you run terraform plan on the command line, you can pass in -var, which overrides anything in the terraform.tfvars file as well. Okay: in the previous challenge, we set our prefix variable in terraform.tfvars. Let's set another variable that determines where your AWS infrastructure will be deployed — the region. First run another plan, so you'll be able to compare what happens after you change the location. So we copy that and run terraform plan -lock=false. All right: add a region variable to your terraform.tfvars. So I'm going to go back into my terraform.tfvars file and add a new one called region — and I like us-east-2, because it's cheaper than us-east-1 and way cheaper than us-west-1. So I'll save that and run the same command.
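For reference, the override file at this point in the lab would look something like this (values mirror the walkthrough; treat them as illustrative):

```hcl
# terraform.tfvars -- overrides the defaults declared in variables.tf
prefix = "feddev"
region = "us-east-2"
```

And a command-line flag sits above even this file in precedence, e.g. `terraform plan -var 'region=us-west-2'` wins over both terraform.tfvars and the variables.tf default.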
Once you've set your region variable, try running terraform plan again. We did that, and we should now see the difference in our tags: instead of us-east-1, it now says us-east-2. us-east-1 was the default value you saw in the variables.tf file. So we're going to quick check. All right, one more test here. Okay: where are Terraform variables usually declared — on the command line, as environment variables, in variables.tf, or in terraform.tfvars? Anybody know? What do you think? Yeah — in the variables.tf file, that's right. variables.tf is where the variable is declared, and then we'll just check that. All right, we did good here. And we can use all of those other options to override what's already declared in the variables.tf file. Terraform graph can provide a visual representation of all of your infrastructure. This is handy for finding dependency issues, or resources that will be affected by a change. So if you're like me and you're a visual kind of geek, there are some tools — we're going to go through this — that we can use with terraform graph to get a visual representation. I've actually heard of people using things like Mermaid to take terraform graph output and render it into an actual PNG file, create an image, which is really cool. I think that's pretty neat. Terraform graph: so we're going to run this graph command, and you can see the resources. It generates output that can be used to create a visual map; the graph data is in the DOT graph description language format. In this case, we're going to use a tool called Blast Radius, which is a free tool and can be found at this GitHub repo. So first we have to start up that Blast Radius server. All right — now we switch to the Terraform graph tab. Hopefully this works.
It looks like it started up, and then we explore the Terraform graph. So we should be able to go over here and see a graph of all of our resources — it basically shows each variable, each resource, and the relationships between them. If we were using things like functions, you could see those as well. As you start to grow the size of this Terraform state, you'll have a lot more resources, so you could have giant resource graphs. The good thing is you can start seeing the dependencies. Terraform itself uses this graph to do a plan or apply: it first creates the graph, which captures all the dependencies. This is done internally within the Terraform binary, and it knows which order to apply the infrastructure in based on this graph. So for example, the AWS VPC is dependent on the AWS provider, so the provider has to come first; these different variables feed into the resource. This graph is actually the order in which Terraform will go through and provision the infrastructure: if you have dependencies, it will always handle the upstream dependency first, before the downstream resources or variables. Pretty cool tool. Again, these can be done in Mermaid now — I've seen a bunch of other tools — and the nice thing about Mermaid is that you can render Mermaid graphs right in your Git repo. So if you're writing Markdown, you can make this part of your CI/CD workflow: generate an image, put it in your Markdown automatically, and any time you go to that repo, you can see exactly what's deployed. Kind of a cool feature. Okay: by default, the terraform apply command runs a terraform plan to show you what changes it wants to make. So apply actually runs the graph, the validate, and the plan.
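The raw `terraform graph` output is DOT text, which any Graphviz-compatible tool can render. The node names below are simplified illustrations, not exact Terraform output:

```dot
digraph {
  // edges point from a resource to what it depends on
  "aws_vpc.hashicat" -> "provider[aws]"
  "aws_vpc.hashicat" -> "var.prefix"
  "provider[aws]"    -> "var.region"
}
```

With Graphviz installed you could render it directly, along the lines of `terraform graph | dot -Tpng > graph.png`, which is essentially what tools like Blast Radius automate for you.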
And then it says: hey, this is what you're about to do — are you sure you want to do this? And then it actually applies it out into the world. So again, we'll see our terraform plan — probably with that same lock error; I'm not troubleshooting on the fly here, I'm just going to keep going. terraform plan shows all the infrastructure we're about to provision; terraform apply does the same thing and then builds it. terraform apply — and we'll throw that -lock=false flag on there. All right. So first, it built the graph and figured out the order it's going to create things in. It shows us the resources that will be generated: this plan is going to add one resource, change nothing, and destroy nothing. So do I want to do this? Yes, I absolutely do. And now it goes and builds it. The cool part is you get to watch while it's happening — if we actually had a tab with the AWS console, I could go to my VPC tab and see it being created. When it's done, like it is now, I can see what resources were created: apply complete, one added, zero changed, zero destroyed. I can also type terraform show, and you can see all the different attributes. Output variables can dig into this resource and grab attributes of the resources that were created. Right now we actually don't have any outputs — I have to do a terraform show to get at those values. But if I wanted to take one of these attributes and use it in a different workspace or a different state file, I'd reference this state remotely through an output defined here. I'll explain that later, but basically I can create an outputs file — I think we'll go through this — with an output variable, and maybe I'd want to expose the VPC ID.
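That output-plus-remote-state pattern can be sketched like this; the backend bucket, key, and subnet resource are hypothetical stand-ins, not part of the lab:

```hcl
# In this workspace (outputs.tf): expose the VPC ID
output "vpc_id" {
  value = aws_vpc.hashicat.id
}

# In a downstream workspace: read that state remotely.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-state-bucket" # placeholder backend config
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# Consume the upstream output instead of hard-coding the ID
resource "aws_subnet" "app" {
  vpc_id     = data.terraform_remote_state.network.outputs.vpc_id
  cidr_block = "10.0.1.0/24"
}
```

When the upstream VPC is replaced, the downstream workspace picks up the new ID automatically on its next plan.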
That's almost always what I do in AWS when I'm creating a VPC: I always set that output so that later I can go grab it, especially if I'm doing things like creating an instance, or a Kubernetes cluster, where I need access to that VPC ID. A little quick check. Okay. Terraform is an idempotent application, meaning I can hit apply, and apply again, and apply again, and nothing changes as long as I don't change the config. So I can run plan and apply as many times as I want. The good thing about this is drift detection. What drift detection means: I make a change in Terraform code, and then Yash, my friend who I work with, goes into the console and changes that same thing — say, he edits a tag. The next time I run my plan or my apply, there's a difference, and I see it in my plan and go: hey, I didn't do this in code, this isn't in my Git repo — what happened? I used to run a nightly job that would go check that Terraform plan, and it would page me if there was drift. Terraform Cloud and Terraform Enterprise now have drift detection built into the enterprise products, so you can actually see that drift and have it alert you if things have changed — if somebody's gone in through the console — so it doesn't linger. The problem with a lot of infrastructure is: you make a change in the console on Tuesday and don't realize you did anything, and then a month later you go to touch that infrastructure again and you don't know why there's a difference. Drift detection tells you right after it's been done in the console: hey, there's some drift — what happened here?
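The nightly drift job described above can be sketched with plan's `-detailed-exitcode` flag, where exit code 0 means no changes, 1 means an error, and 2 means drift. This is a hypothetical script, not the speaker's actual job; the paging step is just an echo placeholder.

```shell
# Hypothetical nightly drift check (sketch).
status=0
terraform plan -detailed-exitcode -no-color >/dev/null 2>&1 || status=$?

case "$status" in
  0) msg="no drift detected" ;;
  2) msg="drift detected -- page the on-call" ;;
  *) msg="plan failed (error, or terraform not installed)" ;;
esac

echo "$msg"
```

In a real scheduler you'd replace the echo with your alerting hook and run this from a checkout of the repo that owns the state.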
And then you can address it as soon as it happens, versus a month later when you need to go touch that infrastructure again. Okay, so: test and repair. Try running terraform plan again and see what happens. Of course, since your VPC has already been built, Terraform will report that there are no changes. Now try running another apply — you should see the same thing: terraform apply, no changes. So I can run that apply as many times as I want without making a change. This is the nice thing about using drift detection: you can run this command over and over — every night at midnight you run the plan, and if there's no change, you don't do anything; the job's successful. This is something for nightly testing — we used to have nightly jobs, and this was one of the big ones. If I'm writing code, I'm always going to write tests against that code, so that when I deploy my code I know it's good, and I can test against it later on. I don't have to have QA developers everywhere; the QA developers write the tests, and those tests run nightly, hourly, whatever, or right before I deploy into production. The same thing goes for Terraform. Okay, so Terraform can create, destroy, update in place, or recreate your infrastructure. One of the nice things about using Terraform is that your backups can look different. I don't have to keep giant backups that store the full state of my infrastructure; instead, I can just back up my JSON state file and recreate my infrastructure from scratch. It used to be that we'd take a backup, put it on a disk somewhere, and store that disk in a file cabinet in case we needed it. Instead, we can just say: here are the steps we took to get to that endpoint.
And now we can just do a terraform apply, because we've codified those steps from beginning to end — and that's a 50K file versus a 300-gig disk we're storing. So Terraform can create, destroy, update in place, and it always tries to match the current infrastructure to what's been defined in code. Okay, so we're going to do the same thing: edit the tfvars, change the prefix, add something new to the end of it. So we'll take our terraform.tfvars file and, instead of feddev, we're going to call it fedek — that's my last name; I think that's appropriate. Save the tfvars file, and then we're going to do a terraform apply. Okay, now we should see there's a change about to happen, and this change is most likely just a tag change. This shows the diff nicely: you can see here it's going from tag name feddev to tag name fedek — again, my last name. And you can see that instead of adding something, we're just going to change it in place. Now, if I was actually going to destroy the VPC — say I took that entire resource out of the main.tf file — it would go out and destroy the entire VPC. So we're going to say yes, we want to apply. It's going to find the existing VPC and modify the tags, and I should be able to do a terraform show and see that the tags have changed. You can see down here they're no longer feddev. Okay — it says it's a non-destructive action, so we could just modify something instead of destroying it. So now we're going to add a new tag to that VPC: instead of just changing one that exists, we're going to create a new one. Read the Terraform documentation for aws_vpc, and we're going to go in there. So just so you know, again: you can look up "terraform registry resource" plus the resource name — say you're going to create an S3 bucket.
I would always Google "Terraform S3 bucket resource," and it brings you to the page for building that resource. The first part of the page is your resource block. So let me just zoom in here. We're declaring that we're about to build a resource; the resource type is aws_vpc, and then the name we're going to give it, in this case main. In this little example we're only passing in one argument, which is the cidr_block. And down at the bottom you can see all the input arguments you can set: starting with cidr_block, instance_tenancy, all kinds of things you can add in here. A lot more than you'd actually get from the console. The nice thing is, the console gives you kind of a best-practices view, the five or six things you probably want to change, but there are a ton of extra resource attributes you can change, and those are all listed in the Terraform documentation. So you can actually get more out of Terraform than you could going through the console UI. All right, so: add a tag to your VPC resource in main.tf. In our main.tf we're going to add an additional key-equals-value pair. "Read the examples carefully; unlike other resource arguments you've seen, the value of the tags argument must be a map." Talking and reading at the same time is hard for me. Okay, let's find an example in the documentation. All right, so Name is equal to main. And we're going to add a new one in this VPC: conference is equal to opengov.com. And then we're going to save this and run a terraform apply. All right, whoops, with our fancy -lock=false. This is not normal, having to type that in.
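Putting the registry example together with the tags-must-be-a-map requirement, the resource block looks roughly like this. The CIDR and tag values are assumptions based on the demo, not the lab's exact code:

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  # The tags argument must be a map of key/value pairs, not a list.
  tags = {
    Name       = "main"
    conference = "opengov.com"
  }
}
```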
There's something going on because I skipped a step earlier that I don't feel like troubleshooting live. Okay, so you can see here in resource aws_vpc, we went from just having the Name tag to also having conference as an additional tag. Again, tags are super important. Why do you need tags? If I'm provisioning infrastructure for a team, maybe I want a team tag, or an organization ID, a role, an environment. You can take all these different attributes and actually apply billing based on them. So if this was a project I was working on, and I wanted everything in that resource block to have these tags applied so that later on I could figure out how much I'm spending on these resources, I can go to AWS billing and run queries on the different tags. I can build out nice billing statements, so if I wanted to charge back to my customer, I could do that through AWS billing. So we're going to hit yes here and apply. This was built at opengov.com, so we got a new conference tag; if I ever needed to know who to charge, I'd charge opengov.com here at the Linux Foundation. Yeah, that's good. All right, so we got that changed. We'll check that real quick. Incorrect. Did it actually ask me for environment? I didn't type it in. Okay: key, value, and after that, environment. Oh, environment: production. Man, I should probably try to read. Let's see. Save that and run it one more time. My apologies. Okay, now we should see the change one more time. Do you want to do this? I sure do. You can also pass in a flag to always say yes and force it. Okay. So we've changed it, and now I should be able to check, because there's an environment tag now. And hopefully it works... but it didn't. Okay. Perfect. Maybe because I have the other one. We're going to skip this. Is that why? Okay.
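The lab sets tags per resource, but for the chargeback scenario just described, the AWS provider's default_tags block is a common pattern: every taggable resource the provider creates inherits these tags automatically. This isn't part of the lab; the region and tag values here are illustrative:

```hcl
provider "aws" {
  region = "us-east-1"

  # Tags applied to every taggable resource this provider creates,
  # so billing queries and chargeback work without per-resource edits.
  default_tags {
    tags = {
      team        = "network"
      environment = "production"
      conference  = "opengov.com"
    }
  }
}
```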
I want to do it now that you said it. Saved, two attempts. Do you think we can do it this time? It's like a scorecard at the bottom here. Okay. Phew. Well, at least this worked, but ah, is it lowercase environment? Anyways, we'll just skip that; it looks good to me. It's probably because I have another tag in there that the check wasn't looking for. Okay: Terraform code can be built incrementally, one or two resources at a time. This is really nice. When I first started using Terraform, that's how I went about it. Okay, I know I need a VPC, so let's just build a VPC. I'd build the VPC, and then: okay, what else do I need? I need a server. So how do I do that? Let's get the Terraform aws_instance resource. So I built that. And then that was dependent on a load balancer, so I build the load balancer. I start stacking all these different resources in one configuration file and applying them one at a time. And then I could destroy the whole thing and reapply it: destroy all of the infrastructure and then recreate it. Okay, I'm going to actually read this one. Open the main.tf file again and uncomment the next resource block in the file. Okay, so going to the main.tf, they created an aws_subnet here that we need to uncomment. I think I can do this. Nope. Okay, so now I have another resource. Uncomment the code by removing the comment markers. Okay, cool. And I save it. So now we have the VPC, and Terraform says: hey, wait a minute, you've got a new resource in here. What do you want to create? It's going to create this new subnet, in a CIDR block carved from this VPC; we're tying it to the VPC, and you can see the VPC ID here, it's already created. So I'm going to hit yes, hit return.
If I had the console, I could bring it up and show you that now we've created some subnets and we've created a VPC. So we've changed it there, and that should check out perfectly. Okay, this is what I was telling you before, about the "are you sure" prompt that keeps asking: you can type in -auto-approve and it won't ask you that question, it'll just do it automatically. In the Terraform Cloud or Terraform Enterprise versions, you can set it up so that every time you commit to a Git repo, it'll either automatically approve it or wait and ask you in the UI, so you have to go in and manually approve it. So we'll do our terraform plan. All right. It looks like the lab uncommented something in the resource block, so we'll take a look at that. Yep, now we've got more stuff in here; they uncommented everything else that was in that file, so we're about to build all kinds of fun stuff. That was just a plan, so now we're going to say terraform apply -auto-approve, and I'll just add our -lock=false in here. Now it'll go out and build it. It can take up to five minutes for the application to finish. At the end of this, we should be able to bring up a web application in a new browser tab by clicking the URL in the cat_app_url output. So this apparently creates a public URL we can hit at the end that has a cat in it somehow. This is going to be a surprise for everybody. So let's go back into the code editor and go through what we're doing here. All right, so we've got our provider block. A lot of times what you'll see is this top part, the terraform block and then the configuration for that provider. A lot of times I'll take that and put it into a providers.tf file that just has this information, so I'll know right away that, hey, if I need to make a change to the provider, I need to upgrade it.
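That providers.tf split can look like the following. The provider versions and region here are illustrative, not the lab's:

```hcl
# providers.tf -- provider requirements and configuration kept in one
# place, so provider upgrades are a one-file change
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
```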
I'll do that in my providers.tf file. Then I'll just list all my resources: in this case the VPC, the subnet, the security groups. These are all the different ports I want to open. So it looks like they're going to build this hashicat application, and we're going to be able to SSH to it and get to it on port 80 or port 443. We're creating an aws_internet_gateway and a route table for it. You can see what version of Ubuntu they're using, 18.04 still; they should probably upgrade that. We're creating a null_resource. What a null_resource does for you is it allows you to do things like SSH to the server, right? In this case, it's almost like we're taking over the role of a configuration management tool, something like Ansible or Chef or Puppet: I create a provisioner block where I just SCP a file over to the new server that I built. I can do that with the null_resource. There are file provisioners, and there are remote-exec provisioners so I can run commands instead of just pushing a file over. A lot of times what I'll end up doing is pushing all the files over and then running a remote-exec that runs the commands against those files, or executes the file that I pushed over. That's the remote-exec provisioner. And you can see that's exactly what they did: they pushed the file over using SSH, then they ran apt-get update, they install Apache on here, and it looks like they've got some variables that get passed into their hashicat application, also over SSH. We have our private key, and it looks like we have an aws_key_pair; that's another resource we're getting out of AWS. So let's go back to the shell. Okay, so we created all this. You can see that we have a Terraform output. This is the URL that we need to hit.
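The provisioning pattern just described, a null_resource with an SSH connection, a file provisioner to copy a script over, and a remote-exec provisioner to run commands on the new instance, can be sketched like this. The resource names, paths, and commands are assumptions modeled on the hashicat demo, not its exact code:

```hcl
resource "null_resource" "configure_web" {
  # Re-run the provisioners if the instance is ever replaced.
  triggers = {
    instance_id = aws_instance.hashicat.id
  }

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = tls_private_key.hashicat.private_key_pem
    host        = aws_eip.hashicat.public_ip
  }

  # Push a deploy script to the server...
  provisioner "file" {
    source      = "files/deploy_app.sh"
    destination = "/home/ubuntu/deploy_app.sh"
  }

  # ...then run commands over SSH to install and start the app.
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get -y install apache2",
      "chmod +x /home/ubuntu/deploy_app.sh",
      "/home/ubuntu/deploy_app.sh",
    ]
  }
}
```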
If I wanted to see what else was created, I could do a terraform show and see the list of everything that's been created. So here's the thing. I was doing some hiring at my last company, and one of the things I asked everybody to do before the interview was: hey, go write some infrastructure code. Create a web server that tells me what your favorite coffee shop is, and do it in the cloud. You could do it in CloudFormation, in Terraform, in whatever tool you know: Python, whatever you wanted to build it in. One of the candidates came in and did it in Terraform, and another person wrote it in Python, which is great; it was his tool of choice. The problem is that when he left that conversation, he left me the code, but I couldn't do anything with it, because it was only a generator. It only generated the infrastructure; it didn't destroy it. With Terraform, I've created all this infrastructure, and I could go in here right now, do a terraform destroy, get rid of all the infrastructure, and then bring it back up within another minute. So I have create, update in place, and delete-all from the same command. This is one of the differences between Terraform and, say, Ansible. With Ansible I can write the code to provision the infrastructure, but because there's no state file managing it, I then have to go write the code to delete that infrastructure. So what we usually say when we're talking about configuration management tools like Chef, Puppet, and Ansible is that with Terraform we build the house, and then the configuration management tools put the dressing on the house: the siding and the window blinds and all that kind of stuff. So there is a place for both; we have a very good better-together story with most configuration management tools. All right, let's do a quick check.
We should have gone to this URL. Good, we have a cat. "Welcome to the fedek app." Replace this placeholder with text of your own. Hello world. It's pretty good. Pretty good app. All right, let's see. We're going to do a terraform graph again to see what's changed; I'm sure there are a lot more resources and variables out here. So we're going to start that Blast Radius server. Again, I've not used Blast Radius much, but it actually seems kind of cool. Start up Blast Radius with this. Oh, he spelled it wrong. Maybe. Is that what it says? Already in use. Yeah, reading while talking is hard. All right, so we did that already, and now we want to go see... it already did it for me. So this is the new Terraform graph, right? Here are all the dependencies. It looks like a crazy dependency graph, which it is. I don't find this to be very useful in general once it gets too big, but it's kind of cool once you start seeing clusters, and you know what the dependencies are. The reason they call it blast radius is: hey, if I'm going to destroy a resource, what other dependencies are there? What are the things that's going to affect? So you could use this to pinpoint the instance you care about and figure out all the different things that are dependent on that AWS instance, right? That elastic IP has to be deleted; then the association of the elastic IP; you keep going up the stack here, and you can see all the different things that depend on that instance. Quick check; it's quiz time. All right, what happens when you run terraform apply without specifying a plan file? Terraform runs without a plan; Terraform reads the previous plan and then applies it; Terraform runs a new plan right before the apply; or none of the above. Anybody want to answer? Terraform runs a new plan right before the apply, that's what I'm going to guess. Hate to be wrong. Okay: Terraform provisioners run once, at creation time.
They do not run on subsequent applies, because that first apply (or the init) actually sets up the provisioner, and everything after that is tracked in the state file. So they do not run on subsequent applies, except in special circumstances like this training lab, because in between the last challenge and this challenge they probably destroyed it and recreated it. So yeah, it looks like we have a new set of infrastructure. We have cowsay. I don't know if you've seen this before; it's kind of funny. Oops. We're going to apt-get install cowsay. Okay, cowsay Moo. There you go. So we've got a little cow ASCII art that says Moo. That's really fun. Okay, so terraform apply, we're going to approve that. What am I doing here? "After copying them to your buffer, it'll be easier to paste them in." Oh, okay, so we're going to add this to our Terraform configuration. Scroll down to find the remote-exec provisioner block, and add the following two lines at the end of the inline list. Okay, so we're going to go to the main.tf, go to the bottom here. Where is it? They want it in the remote-exec? Okay. All my VS Code Vim bindings aren't working for some reason. Okay: the apt line to install cowsay, then cowsay Moo. I want to save that, and, "after copying them into your buffer it'll be easier to paste them," blah blah blah, and we're going to do a little bit of terraform fmt, which I like. "Missing item separator." I'm missing a comma, is that what it is? Oh yeah, I need commas in here. Those commas get me, and I think with HCL you can even put one after the last item, which is really weird if you're used to JSON, and I think it's also optional, which is kind of funny. All right, so that probably changed; yep, that's good. All right: terraform apply. So now Terraform is going to go run that and we'll see it, which is kind of cool. So how many people here have actually used Terraform? Okay, are you guys developers?
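The edit from this challenge, two commands appended to the end of the remote-exec inline list, commas included, looks roughly like this. The exact install command and the surrounding lines are assumptions based on the demo:

```hcl
provisioner "remote-exec" {
  inline = [
    "sudo apt-get update",
    "sudo apt -y install cowsay",
    "cowsay Moo",   # HCL even tolerates a comma after the last item
  ]
}
```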
Okay, no? Are you on the infrastructure side? What kind of applications are you building? Oh, fun, okay, so you still need to build infrastructure for that AI, for the jobs that are running. Very cool. Okay, so this is going to take a second again; it's installing the cowsay app. Look at that, beautiful: now we can see a cow saying Moo in our Terraform output, which is fun. So let's go to the cat app URL one more time; the cat's still there, and we have a cow saying Moo. We'll check that. So basically, what we've gone over so far: input variables, output variables, resources, providers, the Terraform graph, terraform fmt, the different steps that happen when you do a terraform plan, and the state file. These are all very important when you're working through all this Terraform. It's saying Terraform can mix text along with Terraform data in your outputs; outputs can be used to convey useful information. Very good. The terraform refresh command will sync your state file with what exists in your infrastructure; a refresh will not change your infrastructure, it just goes out and sees what's out there. And then terraform output: okay, so I just did a terraform show, which showed me all the values of the attributes of the resources that were created. You can actually ask for a specific output for one attribute: you take the path to get to that attribute and create an output variable that you can use later on. The outputs tab. So we're actually going to go through and edit the outputs file. In outputs.tf right now we have the cat_app_url output, which we saw before, and we're adding a second output. This is really nice when you're writing... I don't know if you guys have used the new AI Git thing, Copilot? That thing's so cool. If you just type in "output" right here, it'll fill out the whole thing for you. It even anticipates what you're about to type, which is
really weird. My theory on that is that it's actually looking at your paste buffer, so it knows what you're about to paste and then fills in the information. It's really crazy. Anyway: output, we're going to name it cat_app_ip, open brace, close brace, value is equal to... oh, they're going to make me go look for a public IP. Name your output public_ip? I don't think that's right; I think it actually has to point to the resource. So: terraform show. You wanted the public IP of the web server, uh-huh, oh right, okay. So just to go back here: if you go through our terraform show output, we want to get to the hashicat resource that we created, and then the public IP. Security group, route table... so this is how you do it: you look for the actual resource and its name. I'm trying to find the actual instance here. Here you go: aws_instance, hashicat, and there'll be an IP in here. Here you go: private IP, public IP. In this case we're going to use the public IP. Now, what you see up here is that the lab wanted to take that information and put it into a string, so they use interpolation, the dollar sign and curly braces around the actual value. But since ours isn't inside a string, we can reference it directly: aws_eip.hashicat.public_ip. We'll save it, and then do our terraform apply again, and our terraform refresh. The refresh goes out and looks through all of the resources that were created, and now you can see the IP is set up here. And that should be good, and we can do a terraform output to show us just the outputs instead of all the extra stuff. All right, so terraform show, terraform refresh, and terraform output are additional commands you can use with the Terraform binary. Okay, we
talked about order of precedence before; we'll go over it one more time. Variables have five levels of precedence, right? The first one: if I run the terraform command, I can throw in a command-line argument, so I can say -var 'name=value'. Then there's the configuration file, your terraform.tfvars file. Then environment variables: I can set a variable as part of my shell environment, in bash or a shell script, as TF_VAR_ (in capitals) followed by the actual variable name. Then I can edit the default in variables.tf, and if nothing else sets it, Terraform just prompts me for a manual entry. Here are some other fun placeholder sites you can try with the placeholder variable. Okay, I don't know what this means, but... oh, okay, I got it: these are cat images. Got it. Okay, fun with variables: there are several ways to configure Terraform variables. So far we've been using the terraform.tfvars file. So again, we're going to pass in a variable from the command line, and you'll see that even though we've already set this in the terraform.tfvars file, the command-line value (along with our -lock=false) gets applied over the tfvars file that's already there. And then we'll set an environment variable that Terraform can read. So instead of passing in the variable file, I'm going to pass in an environment variable from the shell: if I want to set a variable in bash or a shell script, I can say export, then the variable name preceded by TF_VAR_, equals the value. That will replace what's set up in your variables.tf file, though again it's lower in precedence than passing the value with the -var flag. So we're going to do this: we're setting the environment variable, and then
we're going to do a terraform apply again. My placeholder is now going to be placedog.net, so I think we'll probably get a dog image. Actually, if we go to placedog.net... I feel like we're going to get a dog. Yes, we'll get one of these images in our next hashicat app. Okay, so it's going to run this again, and you can see the difference in precedence there, right? So we click on our app here, and in our cat app we have a picture of a white dog, and I like dogs better than cats anyway, so I'm good, although I have more goats than I do dogs. All right, another quiz, on Terraform variables: you have the same variable set in your tfvars file and in an environment variable; which one takes precedence? Anybody? Remember: tfvars or environment variable? Did I hear something? Lower precedence? So, actually, no, I'm sorry, I was reading that wrong; yes, you're right. The precedence goes: the tfvars file is higher precedence than the variables file, the value you pass in on the terraform command line is higher precedence still, and the environment variable sits below the terraform.tfvars file. Terraform Cloud remote state storage is free for everyone, so we're actually going to open up a Terraform Cloud account here. When we talk about remote state: when I do an apply, a state file is created locally, and I can actually show you that state file. This one's generated automatically, so I can just cat this file, and you can see all the resources in JSON format. I don't know if they have jq... cat piped to jq, whoops, jq -r, hey, now I can see it in color. So this is the state file. I can either manage it here locally, or I can manage it remotely, and when I say remote, I can use things like S3, with DynamoDB to manage the locks on that S3 bucket, but Terraform Cloud also allows you to store the state file and manage the state file and its different
attributes in Terraform Cloud, and when you do that, it actually adds a bunch of additional metadata to do diffs, health checks, and that kind of stuff. Okay, so I've already pre-created an account at app.terraform.io. We're going to create a new organization, and then we'll be prompted to create a new workspace. The hierarchy in Terraform Cloud goes: organization, and then you have multiple workspaces inside an organization. There's also now a new thing called a project, so you can have an organization with multiple projects, and workspaces can be assigned to a project. Say you had a network team, a server team, an AI team, and a database team, and they're all part of the same project but different people have different permissions: the workspaces can be managed based on the team you're on, and they can all be part of a larger project. But I think this lab is using an older version, so in this case we're just going to do organization and then workspace. I just happen to have a login here, so this will make it a little bit faster; I have all kinds of organizations in here. Okay, so let's create a new organization. The lab wants hashicat-aws... why not an organization called dan-training? We'll call it feddev-training, at my email address, and create the organization. So now I have a new organization, and the first thing you need in an organization is a workspace. There are a couple of ways you can do organization and workspace management: either a CLI- or API-driven workflow, or version control, meaning if I write code, push it to my Git repo, and it's on a certain branch, or in a certain directory on a certain branch, the version-control workflow will pick it up and automatically
deploy that infrastructure out into the world. I almost always use the VCS workflow, so this is going to take me a second: the CLI-driven workflow is what we're going to go through now. All right, and then we're going to name this; I think it said hashicat-aws for the workspace. New organization, CLI-driven. "Note: if you already have a hashicat-aws workspace, please delete it," which we did not have, so it's fine. This doesn't talk about projects; again, I could add this workspace to a project, but in this case I'm just going to add it to the default project. And then I'm going to call it hashy-dog, and I'm going to create the workspace. Okay, so now I've got this hierarchy: we've got organizations, and we have workspaces. Different teams have access to different workspaces, right? If I'm on the network team, I might have my workspace for building out VPCs; if I'm on the database team, I might have my workspace for building out a specific database; if I'm running a job for AI, I'd have the AI workspace and be able to provision infrastructure around the job that I'm running. Okay, CLI-driven workspace. You can actually see here how we're going to take this configuration block and put it in our main.tf; I would usually put this in a file I call backend.tf. This basically defines where we're going to store our remote state: in this case, in Terraform Cloud, in the feddev-training organization, applied to the workspace hashicat-aws. So now I'll be able to manage the state within Terraform Cloud. If I was actually running this through VCS, you'd see the version-controlled GitHub repo right here, and I could click through it and see the changes that are happening; in this case we're not. So we're just going to take this example code. We're going to do terraform version. Okay. All right, what does it say? "Recreate as above; doing this avoids possible
problems: mismatched state, execution mode." Oh, okay. So in the general settings for this workspace in Terraform Cloud (wherever that is... there you go), we want to set our execution mode to local. Execution mode is set to local. Now, I talked about output variables before: if I wanted to share the output variables of this workspace with another workspace, I could do that here, or I can share them with every workspace that wants the information. One reason to do that: say I had a VPC that I built, but all of my other workspaces are going to deploy into that VPC; I would just say, share my VPC state with all workspaces. So that's that, and I'm going to hit save. Did I miss anything else? "Doing this avoids possible problems with mismatched state..." oh, it wanted me to set the Terraform version. Okay. So, a quick check: with local execution mode, Terraform commands and variables all remain on your workstation. I'll make the code changes locally in my shell, and when I type terraform apply, everything runs on my workstation. With remote execution mode, Terraform runs in the cloud: the actual runner is no longer on my laptop, it's a remote runner that Terraform Cloud manages, basically a container environment that spins up, and all variables must be configured in the cloud environment. So in Terraform Cloud you can go to the workspace, go to settings... oh, I don't have remote execution turned on, which is why I don't see any variables, but normally you'd have variables set up right here, which we're probably going to do in a second. Okay, so now we're going to edit the remote_backend.tf file and replace the organization placeholder with the org name that we created. So we're going to go into remote_backend.tf and swap this out with the name that I gave
it, which was feddev-training, and we're going to save that file. All right, now we're going to generate a new user token, so we have to go to this URL to get one. What I'm doing here is similar to what you'd see with your AWS credentials: if I want to do something locally and interact with Terraform Cloud, I need a credentials file. So first I'll go create a token in Terraform Cloud; you can see I've created quite a few of them, actually. I'm not going to create another one, because I've got one already, and I will bring up my terminal and the credentials file. So this is what the file looks like; you can see the token there. Great, now you can all run as me... I will definitely delete that as we speak; probably should have done that before. Cool, so actually I'm going to create a new API token and not show you this time. We're going to call this the hashicat user token, generate the token, copy it, and go into my credentials file. It's actually going to have us log in. So: select the credentials file tab, open the credentials file, which is in here... is there a tab? Oh, there you go, thank you, or edit it directly like that. Replace the token with the new one, save that, and now you'll see it in the Terraform credentials file. All right, we're going to do a terraform init. All right, and we're supposed to get that right; I must have missed something. "Uploading state: conflict. This workspace is not locked." I'm in a weird state. terraform apply... all right, "no such file," but it exists. Okay, it's no longer the terraform init... "open: no such file or directory." I know it doesn't exist, because I just killed it. So if this doesn't work, I'm going to keep moving. -lock=false... this is not something we normally deal with; I think this part of the lab needs to be updated. So now I've got a state conflict, so we're going to skip past this and move on to the next
section here, because I don't feel like troubleshooting live. Okay, so Terraform can destroy that infrastructure we just built, right? So we want to be able to destroy it, and hopefully we're in a state where we can, now that I've killed the Terraform lock file; the state file doesn't matter. So: terraform destroy, -lock=false... hopefully. Yeah, I'm going to keep running into this, which is awesome; skip this. terraform destroy would go through and destroy all my infrastructure, which I'm going to have it do on its own. Okay, so that's it, that's all the Terraform. What did we go over? We went through the different Terraform commands: terraform validate, terraform fmt, terraform graph, terraform show, terraform output. We showed how to build infrastructure, and we talked about input variables, output variables, and how to link output variables from one workspace to another. I felt like we could have done a better job with the lab as far as getting to Terraform Cloud, but this is more of an open-source session anyway, so for the most part we went over most of the Terraform open-source commands you can use. So that is the first lab: an introduction to Terraform.
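For reference, the two Terraform Cloud pieces from that last section, the remote backend block edited in remote_backend.tf and the CLI credentials block that holds the user token, look roughly like this. The organization and workspace names follow the demo; the hostname path and token are placeholders, and newer Terraform versions use a cloud block instead of backend "remote":

```hcl
# remote_backend.tf -- sketch of the remote backend pointing at
# Terraform Cloud, with the organization placeholder filled in
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "feddev-training"

    workspaces {
      name = "hashicat-aws"
    }
  }
}

# CLI config (e.g. ~/.terraformrc) -- the credentials block the demo
# edits; the token value is a placeholder, never commit a real one
credentials "app.terraform.io" {
  token = "REPLACE_WITH_YOUR_USER_TOKEN"
}
```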