So today's talk is about taming infrastructure workflows at scale. This talk walks you through what has changed since, I guess, maybe 30 years ago, since the mainframes, and then talks about where we are today. And there are live demos at the end, three or four of them, which we'll see, you know, if the demo gods are with me, the conference wifi holds up, and everything works exactly as I rehearsed last night at three AM, we'll be good to go. That's the goal here. I'll keep looking at different people in the room to figure out whether I'm doing well. If I'm not doing well, I'll just close my laptop and leave, if that's okay. And that joke did not fly. Really? I rehearsed it so well. Why didn't it fly? Okay.

A bit about me. My name is Anubhav Mishra. I'm the technical advisor to the CTO at HashiCorp. My background is in software engineering, and the last five years have been in operations. And now, with this really overloaded title, I get to work on a lot of open source technologies. My Twitter handle is build 1.0. It's very creative. I came up with it in the shower, so there's some intimate detail for you. I do not live here. I live in Vancouver, BC, in Canada. I moved there eight years ago. I miss India a lot, and this is my first conference in India, which I'm really excited to speak at. I've spoken at many different conferences, but this one's a bit special because I've wanted to do Rootconf for a while. Zainab, who's the organizer, and thanks to her for putting on this conference, contacted me many years ago and I couldn't make it. So this year, hopefully we can make this work, if you're with me. Vancouver is also called Raincouver because it rains pretty much eight months a year, so with this really nice flowery shirt I'm making up for all those months of gloomy weather while I'm on stage here.

I love open source. I've built a bunch of open source tools. The one people might know is Atlantis, which I built when I was at Hootsuite; it helps you collaborate on infrastructure as code using Terraform. I also work in the Kubernetes community a lot and speak at multiple conferences, KubeCons and things like that, but my code contributions are in Vault, in the security area, around the CSI interface, the Container Storage Interface; I've built some of the integrations there. I'm also a provider maintainer for Virtual Kubelet. More recently I've been working on Dapr, a project open-sourced by Microsoft that helps you build distributed microservices for the cloud and the edge. And you might recognize my voice from the HashiCast podcast. If you don't listen to it, subscribe today, that's my shameless plug; it's the best podcast you'll hear my voice on. My voice is so nice and soothing. I'm also writing a book on security, coming out next year. I was too bored with my life, so I decided to do a book as well. Do not write a book. That's the key advice I can give. All right. So I work for HashiCorp.
Who here knows about HashiCorp as a company? You might know the tools. We're also a sponsor of this conference, just a plug there. So here are the six open source projects that we've built. We are an open source company. We build tools that help you deliver application software to production, from the first line of code all the way up. I'll just name the tools and give a small snippet on what they're about; some of them have something to do with the talk, but most of them don't. So Vagrant is our developer environment tool: it helps you build developer environments and share them. Packer is a server image builder, or I guess machine image builder: it helps you build machine images for Amazon, Google, any cloud provider. Terraform, which a lot of today's talk is about, is an infrastructure as code tool. Vault is a security tool: it helps you manage secrets centrally and securely. Then you have Nomad, which is akin to Kubernetes; it's a cluster scheduler, not as elaborate as Kubernetes, it just focuses on scheduling. And Consul, which is a service mesh: it helps you connect different applications together. And this is how they fit: when you think about provisioning, you see Terraform. When you think about security, you see Vault. Connect with Consul, run with Nomad. That was the shameless plug slide. All right, now to the talk.

Okay, so I want to start with where it all began. This is around 30 years ago, the era of mainframes. I don't know how many of you remember, but I've dealt with mainframes, even though I'm young. I know, the gray hair is very confusing for people: they look at my gray hair, then look at my face, like, how old is he? But yeah, I have dealt with mainframes in my lifetime, unfortunately. This was the era of mainframes 30 years ago, when one machine would do one thing really well, and it was very efficient at doing that one thing. And then you would have multiple mainframes that did different types of things. These were really big and loud and might sit in a basement or a closet or something like that. Then 20 years ago, we wanted to do more with our compute, so we added a few more servers, but the servers became smaller and leaner. I think the same amount of loudness, but they definitely became smaller, so it was easier to rack them up, and you had to figure out cooling, networking, routing, all of that. That's about 20 years ago, and this is the era where Dell dominated this space, as you probably know, with blade servers and things like that. And then there was the rise of the higher-level web programming languages like Ruby and Python, which we all recognize, and more recently Node.js and things like that. What was interesting here is that prior to this we were mapping one application to one server, but with these higher-level web languages we started mapping multiple applications onto the same box.
So we would run a Ruby application alongside PHP, or you would run a Python application next to another Python application using things like virtualenv or something like that. This doesn't feel that far out; it's still used in the industry. And then around 10 years ago we entered the era of the hypervisor wars. I don't know if you remember this. This is Hyper-V versus, uh... ESXi. Yes, thanks. Thanks for pronouncing that. Do you want to do the talk as well? I'm just kidding, I'm just kidding. But yeah, the hypervisor wars. What came out of them was virtual machines, which I think were very important for our era as well. The idea was that you would get a built artifact of your application within hours, and you would deploy it onto a bunch of machines, co-locating different applications together but isolating them with that OS boundary, and that was fine with us. They were hypervised VMs, not running on bare metal, but this was good enough; we'd gone from a mainframe to this, and we were really happy.

Then came these things. I immediately saw some people go, oh my God, he's going to talk about containers. Yes, containers: you have Docker and rkt, right? These are the technologies that have dominated the past five years or so, and they've become mainstream, especially Docker. Unfortunately, more recently I think they got acquired by Mirantis or something, which is interesting. But what came out of that was containers. Again, you get a deliverable artifact, but now within seconds you could have an application running, which is pretty awesome. But then your data centers look like this: you had some virtual machines, you had some containers running somewhere. Initially people just ran plain Docker daemons, then they moved to Swarm, then Mesos and Kubernetes, then different types of serverless technologies and things like that. And slowly you realize the data center does not look that simple anymore. Then of course there's the rise of Kubernetes that we've all witnessed in the past three or four years, again becoming mainstream. So there's Kubernetes running alongside VMs, alongside containers and other things. And then if you're running in the cloud, you intend on using the cloud provider's APIs. So you end up using, for example, a managed database like RDS on Amazon, or Azure's Cosmos DB, or Google's Spanner; these are all managed database services available today in the cloud, and they're also part of your stack now. You might be using a managed DNS service like DNSimple, a CDN like Cloudflare, some serverless technologies like Lambda or Fargate or, I don't know, there are so many: Google has Cloud Run, there's ACI, so many of those. So now this data center is looking very interesting.
So you went from something very simple to this mix, almost a heterogeneous set of things. And then you can't really reason about the system holistically anymore. You can't just say, here's where this thing runs; you can't keep it in your head. You have to write it down somewhere, because this is getting too complicated. And to make matters worse, some companies run multiple of these data centers. So imagine that: now this is getting very, very interesting. And this is realistic, right? You have companies that run 10 to 15 to 18 data centers around the world that look similar, that have the same type of stack. One stack might be more data oriented, more stateful, and another more stateless, but still, the complexity keeps rising. And for people coming into the industry at this point, it's like: we either go full on to the bleeding edge, or some of us get stuck with the legacy, which is running mainframes, and a lot of banks still run mainframes, by the way. So you're running mainframes alongside servers and machines with VMs and containers and things like that. It's tricky to navigate this landscape.

So now let's talk about the workflow. What has changed in the workflow so far? Am I going too fast? Oh, it's fine. Let me know if I am; you should tell me, because you can do the talk as well. Whenever you want, let me know, just say you want to do the talk and I'll let you on stage. So here's the old workflow. First, we had to actually place an order to buy a server. You'd go to a vendor, place an order, it would take months to arrive, you'd have to unpack it, then rack it. This used to be the process, right? Then the data center operations team, DC/OS, no, DC Ops. DC/OS is Mesosphere, I'm sorry, I don't know why I said that. The DC Ops people are the ones that would rack the servers up, install the first operating system, maybe set up some routes and things like that to access them, and then hand them over to the system admins or IT administrators who would install VMware and get it up and running. And this whole process took about six to eight months. These estimates are industry-wide, a kind of average, you could say a moving average. So for the software developers, it became interesting: they had to have eight months of lead time, always. We're going to build this new application? Okay, eight months from now, so you have to plan everything up front. And that worked for a long time. But today things aren't the same. So what does buying look like today? You just sign up for a cloud provider. It takes you two minutes, and you get all the compute that you want, which is pretty amazing. Yes, you don't have control over it, but you still get compute to deliver your applications. What does provisioning look like? Here's where I think a lot of innovation has happened, with config management: tools like Chef, Puppet, Ansible, SaltStack, you name it.
They became mainstream, and they would install all your package dependencies, let's say you're running Java, they'd install the Java runtime and things like that, and get the machine ready within hours. And this was really big, compared to the estimates we had before. And then when it comes to updating and destroying anything, this is very interesting: now it's only a cloud provider call away. With one careless cloud provider API call you can kill hundreds of servers, and you'll be like, shit, I forgot to put something at the end of my bash script, and now this has deleted the whole infrastructure. This could happen, right? But the idea is that now, within seconds, we can deprecate anything we want, which is pretty cool. So this is the world we're living in presently. But in terms of the makeup of the whole software delivery pipeline, there are all these other things: you need Jenkins or some form of CI system, you need a VCS provider like GitHub, GitHub Enterprise, GitLab, Bitbucket, and you need some form of platform like Kubernetes or Mesos or something like that. And all of these things come together to give you what you want. But all these things need to be managed as well. Yes, you can pay for the service and buy it, but you still need to manage what gets on it, how it gets on it, how you define those configurations and things like that. So it's not as straightforward.

So one of the most prevalent workflows, what I call the CDW, the console-driven workflow, has been very popular in startups and mid-sized companies. I did this, by the way. This is point and click. You do everything point and click: no bash script, nothing, no touching code. So this is the world-famous Amazon UI, which you've probably seen; I'll move over here. In the Amazon UI you'd have a bunch of things running. To launch an instance, you click launch an instance, you select a server image, like Ubuntu or something, then it asks for security group rules. Oh, who cares about that? Open 22 to the world, open 80 to the world, 443 to the world, and you launch the instance. Who cares, right? And then when it comes to installing anything, you SSH in, you apt-get install... no, actually you try to apt-get install, it errors, you say, shit, I didn't apt-get update, so you apt-get update, and then you install Java or something like that. And finally the machine's ready. This is how we installed stuff, and none of it is repeatable, by the way. So now let's say your startup becomes that 1% that's successful, with millions of users and things like that. Then the operations team is like, shit, what do we do now? So they use this amazing cloud provider feature called launch more like this. All you have to do is click launch more like this and you get replicas of the same box. And then you SSH in and you snowflake some changes, and I've done this, to be honest, at a bunch of startup companies, and I'm not proud of it. Is this live streamed? Oh, no, this is not good.
OK, so either way, the idea is you launch more like this, and by the time you reach the 50th server, it looks completely different from the first server. It's just completely different. And that's how the world is in this era, right? And then when it comes to CLI tools, each cloud provider gives you their own tool, like az or the AWS CLI, and you wrap it in bash scripts or Perl scripts or something like that, and you launch it, which is great; it works up to a certain scale. And then slowly you realize your bash scripts can't scale past 1,000 lines. After 1,000 lines it's really hard to add a new feature or anything like that. Amazon releases a new feature, your developers say we want to use Kinesis, and you realize, shit, we have to add support for this to the bash script. It becomes really difficult to scale these things. So when I came into the industry, I was really confused. I was like, what is going on, man? I'm writing these Perl scripts, or managing them at least, I'm working on this point-and-click deployment thing, and then our company is hitting this hyperscale. How do I deal with this stuff? It becomes challenging.

So why did all this become so complicated so quickly? The main thing was that we needed to deliver applications really fast; the rapid pace of delivery is one of the criteria. The second thing is we wanted to do it at scale. And the problem right now is that a lot of startups, when they launch, want that Google scale right away. I don't know why they think they'll immediately hit 100 million users or whatever. It doesn't work like that. But they need it, it's a requirement, and that's why they choose the cloud and so on. And that's fine if they feel that's the case, but I think you can still build realistic solutions and then build upon them if the foundation is okay.

So what would an ideal workflow be? The whole talk is about this workflow thing. And this is where I'll give examples of a bunch of things that we have learned as a company at HashiCorp. Describe everything in code; that's key, even infrastructure. We've been so successful with software for so many years, why not bring that to infrastructure as well and write everything in code? Once it's in code, you get to steal everything from software engineering that we've learned over the past 25, 30 years: version control, the ability to share the code in your organization, making libraries out of it, making predictable changes, and you'll see how, and also doing fast provisioning, and I'll talk about how that plays into this whole set of criteria. Some of the questions that we as a company asked at HashiCorp before we created tools around this: how do we provision resources across compute, networking, and storage? How do we manage the lifecycle? It's not good enough to just create things and leave them; they'll get modified, they'll evolve, you acquire companies and then you have to figure out how to interface with them, and all those things are real challenges for enterprises and mid-sized companies.
And then also deprecate them once the service has done its life; a new service comes along and replaces the old service, so how do you deprecate things properly? And then also enforce policies. This is really important. Not for startups, you can still do your thing, just hand out root access to everyone. But as you go to enterprises, it becomes difficult. When I say enterprise, I should clarify: what I mean is a large organization, let's say 10,000 people and above. That's when you have to do SOC 2 compliance and ISO and stuff like that. You need to make sure that's all good to go, and you have audit trails and things like that. And also share stuff, of course. We need to share all the code; we don't need to rewrite it every time we do something new.

Okay, so here's where I'll give an example. I'll use Terraform as an example throughout the talk. It's an open source tool, of course; I want to make sure this does not come across as a vendor pitch. Most of the things I'm talking about can be done with other tools in the space as well. The key here is to show realistic examples, and Terraform has been very prevalent for the last two or three years, so we have a lot of good, interesting stories, and I want to share them. So who here knows about Terraform already? Okay, so I'll go a bit faster through the first few slides, which give some context about Terraform itself. The goal of Terraform is to create a unified workflow, and by unified I mean you should be able to predictably change things over time and iterate over infrastructure safely. You need to iterate over infrastructure and make predictable changes. You can't just change a security group and wait to see whether a customer calls you, or stuff like that; we don't want that. We want predictable changes. And also the capability to provision anywhere. When we created the tool, we weren't biased towards Amazon or Google or Azure. We wanted to create a tool that's pluggable so you can actually provision anything anywhere, even OpenStack.

So here's the syntax we use to do that. It could have been anything, JSON, YAML, whatever, but the idea is that with Terraform you get HCL, and it's also JSON compatible. The language has the idea of a resource, with a type and a name and then a bunch of attributes. So here's a more realistic example: I have an Azure virtual machine, called web here, with a storage image reference. This is real code, by the way. You might be running Ubuntu 16.04; I know 18.04 is out, I'm just still using an old example. And then you define things like the Ubuntu server and so on. These specifications are given by Azure themselves, so you're not abstracting the whole server away, and it gets interesting when you start doing that, because then you have to predict things. And you don't want to predict things. You want to make declarative changes rather than imperative ones, rather than predicting things on the fly.
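To give a feel for the shape of the syntax, here's a trimmed-down sketch of that kind of Azure virtual machine resource. The names, size, and resource group are placeholders I've made up, and a real config would also need an OS disk and an OS profile, so treat this as the shape of a resource block rather than a complete, working VM:

```hcl
# Shape of the syntax: resource "<TYPE>" "<NAME>" { <attributes> }
resource "azurerm_virtual_machine" "web" {
  name                  = "web-vm"                            # placeholder name
  location              = "East US"
  resource_group_name   = "demo-rg"                           # assumed to exist already
  network_interface_ids = [azurerm_network_interface.web.id]  # defined elsewhere in the config
  vm_size               = "Standard_DS1_v2"

  # The image reference that Azure asks you to spell out explicitly
  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }
}
```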
On the Amazon side it actually becomes very simple. You just provide an AMI ID, an "AMI" or an "ah-mee", I don't know what you call it; I'm in the AMI camp, I have a t-shirt, that's how it's pronounced. So here's a realistic example of an Amazon instance; it's pretty straightforward and very readable. And this is where it gets interesting. Now I have two different resources: an Azure public IP address, which is just like an elastic IP in Amazon, and then I'm using DNSimple, which has nothing to do with Azure at all, but I'm referencing that public IP address in the config right there. So now what I've done is something really amazing. In the background, Terraform builds a directed acyclic graph and figures out all the dependencies and how resources depend on each other. So when it comes time to execute and create things, it intelligently, automatically figures out: okay, I need to create the IP address first and then create the DNS record, because the record is only available once the IP is created. And this is not done by you. You're not explicitly saying this resource depends on that resource. A lot of tools make you say that explicitly, create this and then do the logic yourself. No, you just write code and let Terraform figure out the graph and execute it for you. And it will try to execute things in parallel as much as possible. That's the idea.

And you can do this with other CDN providers; we support multiple CDN providers. You can do things like GitHub memberships, and this is interesting. This is what we use at HashiCorp, and a bunch of other big Fortune 500 companies use it too. The idea is to grant GitHub membership when you join, using code. A new user joins the company, a pull request is created, someone reviews the permissions they're asking for, approves it, and the permissions and access to the organization are actually granted using code. This could be done with other systems as well, but this is just one of the core systems, the VCS system. It gives you the audit trail that you need when it comes to enterprise audiences and things like that.

So Terraform, as people know it, is the single binary that you download and run, but let me talk a bit about how it works in the background. Terraform has a core, which basically has three main packages: one is the config parser, one is the DAG, the graph that I was talking about, and one is the schema, which acts as a pluggable medium between core and the plugins, and I'll talk about how. The config parser just parses the config; it has no idea what provider you're using, nothing. The DAG is basically a graph of resources; again, it doesn't matter what cloud provider you're talking to. And the schema is the extension point for plugins. Plugins are where most of the cloud provider logic, or the CDN service logic, lives. These providers are what scale Terraform to almost infinite possibilities. The plugins talk to the core, and the core has no idea you're talking to AWS; the plugins do all the work there. And then there are provisioners, which run bash scripts, or run Chef or Puppet or Ansible or whatever. That's kind of the idea here.
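Going back to that public IP and DNS record example for a second, here's roughly what the cross-provider reference looks like. The domain and names are placeholders, and the exact argument names vary a bit between provider versions, but the point is the implicit dependency: Terraform sees the reference and knows to create the IP before the record, without you spelling that out.

```hcl
resource "azurerm_public_ip" "web" {
  name                = "web-ip"
  location            = "East US"
  resource_group_name = "demo-rg"
  allocation_method   = "Static"
}

# DNSimple has nothing to do with Azure, but it can reference the Azure IP directly
resource "dnsimple_record" "web" {
  domain = "example.com"                      # placeholder domain
  name   = "web"
  type   = "A"
  ttl    = 300
  value  = azurerm_public_ip.web.ip_address   # implicit dependency, no depends_on needed
}
```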
So providers and provisioners are two different things now; they used to be one thing, but now they're separate. In terms of providers, we have support for a lot. I couldn't fit them all on this slide; there are hundreds, and I couldn't even get the exact number because it's way too many now. And the reason is that Terraform allows you to create a provider for anything, any API that has CRUD support: create, read, update, and delete. Any API that implements those four functions, you can write a provider for. So for example, if you're running OpenStack in private data centers and you have a custom PaaS that does things for you, you can actually extend that PaaS, create a provider, and start using infrastructure as code directly with Terraform if you like. That's the provider interface.

And then the idea is to describe things in text files and make the configuration human friendly. But we do support JSON and we're 100% JSON compatible, and there's a reason for this. When we initially launched Terraform, this was around 0.1 or 0.2, a lot of users gave us feedback like: it's great that we can write these configurations ourselves, we love the syntax, but a lot of us generate our infrastructure config using Python or Ruby, and HCL is not great for a machine to emit. So that's where JSON comes in. You can write JSON, you can generate JSON, Terraform will consume the JSON, match it up, and do your infrastructure operations, and it's 100% compatible going forward as well. And you get the benefit of storing it with any VCS: Git, SVN, or, I don't know, Perforce if you're unlucky and you use Perforce. And then you can track the history as it evolves.

Okay, in terms of usage, this is not that useful a slide, but there are 4,000 or so contributors, around 300,000 downloads a month, a couple of million, maybe three million a year, and a bunch of modules that you'll see in the next few slides. It's not that important. So the goal was to do this, right? Now let's talk about safety. How do we solve safety? Safety is solved by two distinct phases: the plan phase and the apply phase. The plan gives you an idea of what's going to happen before it actually happens. So here's a simple example; it's fine if you can't read it at the back, it's a bunch of infrastructure code and that's okay. This is a public IP, a network interface in Azure, a virtual machine, and a DNSimple record. I'm mixing and matching here: I'm creating a virtual machine that has a network interface that has an IP, and that IP has a DNS record. That's the way to visualize it. It's a very simple Terraform config, and it's a working config. So if I want to make changes and create these things, I run a terraform plan, and the plan will show you what's going to happen before it actually happens. And yes, I said it, I'm the speaker, I can quote myself. This is the privilege that I have. All you have to do now is tweet about this quote that I said. I'm just saying, don't tweet about it, it'll be very embarrassing. So the plan shows you something like this: it shows you what's being added. In this case, we are adding four resources.
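Roughly what that plan output looks like; this is an abbreviated, hand-written sketch rather than captured output, and the exact formatting differs between Terraform versions:

```
Terraform will perform the following actions:

  + azurerm_public_ip.web
  + azurerm_network_interface.web
  + azurerm_virtual_machine.web
  + dnsimple_record.web

Plan: 4 to add, 0 to change, 0 to destroy.
```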
As you can see from the addition signs, we're adding a DNS record, a virtual machine, a network interface, and a public IP address. If we were destroying things, you'd see a minus, and if we were changing things, you'd see a tilde. Now the cool thing is that you can save this plan out to a file and share it within your organization. So, for example, you're a developer: you generate a plan, you share it with your operations team and ask, is this okay? They'll look at it and go, okay, these things are being added, looks good to me, and give you a thumbs up. Once they do, you run terraform apply, and Terraform creates these resources for you. The plan operation only does reads; it doesn't change anything. It just reads what's out there, reads what it knows about, does a diff, and gives you an idea of what's going to happen. That's the predictability. Before, how did we do this? This is the question I always ask. I know how I did it: you just cross your fingers, switch the security group, watch the traffic drop, and go, shit, I should change that back. There was no real way of predicting changes before. There would be some idea, but it was not that definitive. Now we can actually see things in text, and it's nice.

Okay, so Terraform also has Terraform state, which allows you to map real-world resources to your configuration. Let's say you create that Azure virtual machine from the previous example. Once it's created, Terraform will commit it to its state and then manage its lifecycle, tracking it as it goes from being created to being modified to being destroyed and so on. Okay, I'll go quick now. Is there only 14 minutes left? Okay, I'll be quick. Terraform state looks like this. The idea is that you can store it somewhere, usually in a file or a storage bucket or something like that. Here's how it looks, and it gives you an idea of how things depend on each other. You can actually generate graphs in Terraform; I'll show you in a bit if I have time.

Okay, so state in an organization. This is where it becomes super effective. You're able to share state within your organization. Let's say the operations people manage the networking state; they can share that networking state out to a service creator, a developer who's creating foo or bar services. And they can expose only the things they want to, for example the VPC ID or ACLs and things like that. And developers can switch between these states really easily using workspaces, but we don't need to get into that. I'll go quick now.

Okay, collaboration. Collaboration is key here: we need to share code, and modules allow you to do that. It's a way of sharing libraries, you can look at it as sharing libraries, and a way of hiding the complexity of the infrastructure. So here's that same example again. Instead of literally providing every single bit, I change a few of these into variables, and then this whole configuration becomes a module, and the module lets you hide that complexity.
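A minimal sketch of that pattern. The variable names and the single resource inside are placeholders, and a real web server module would wrap a lot more, but the shape is always the same: variables in, resources hidden inside, outputs exposed.

```hcl
# modules/web-server/main.tf (hypothetical layout)
variable "name" {
  type = string
}

variable "ami_id" {
  type = string
}

variable "instance_type" {
  type    = string
  default = "t2.micro"
}

# All the complexity lives inside the module...
resource "aws_instance" "this" {
  ami           = var.ami_id
  instance_type = var.instance_type

  tags = {
    Name = var.name
  }
}

# ...and consumers only see the outputs you choose to expose
output "public_ip" {
  value = aws_instance.this.public_ip
}
```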
So you can share this as a web server module within your organization, and people can use it to create the thing directly without knowing the intricacies of Azure, which is pretty nice; it makes it self-serve. Basically you get a set of inputs and a set of outputs, and inside there's a bunch of complexity which you hide. How does this enable the D word? The idea is that operations engineers create these modules and publish them, and software engineers consume them and create servers. That's the simple idea in terms of an organization. One important thing: modules have versioning, so people can use a certain version of a module, and when the operations team releases a new version, let's say the web server now gets an elastic IP, they can push that change out to teams and tell them, hey, this is how you should upgrade, and things like that. So you don't suddenly start breaking things across the organization, which is nice. This became really popular; a lot of companies have thousands of modules, and it becomes so popular that you have to figure out how to share and structure them. So you have something called core modules: VPCs, networks, security groups, maybe network ACLs, anything that's core to your infrastructure. Those are written by operations. Then you have service modules, for example a Java microservice module that gives you a Java web server, which could use the core modules, and the service platform engineers create those. And then you have the consumers, who just use them, trusting that everyone up the chain has done their work. That's the idea.

Okay, now to the demo. I know I have less time, and this is a lot to go through. I have amazing demos, by the way. So if these work, you have to clap. I'm serious, you have to clap. I've spent a lot of time on this stuff, three, four hours last night till 3 a.m., that's the time I've spent on it, so I apologize. I'll open source all the code and also the slides, so you don't need to take pictures or notes or anything like that. So here's a project I have called infrastructure. It has a few things here; ignore the typos. Okay, so here's the first project, the networking thing that I was talking about, the core project. What I'm doing here is creating, let's say, a VPC. Let me actually use Visual Studio Code; Microsoft will be happy with me. Here we go. I hope you can all see that. So there's this module: I'm configuring the Amazon provider, and I'm creating this module called vpc, and I'm giving a source for the module. Once these modules are published, they look something like this: in your GitHub you might have a module for a VPC, one for a web server, one for Route 53 or something, and the modules have a set of inputs and outputs. You don't have to use the Terraform Registry, you can just use GitHub, but the registry gives you nicer output. So here's a bunch of inputs, a bunch of outputs, and things like that. I'll go quick; this part is not that relevant.
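For reference, calling a module like that looks roughly like this. The source here is the public registry VPC module, standing in for whatever your ops team actually publishes (it could just as well be a git:: URL to your own repo), and the version pin is what lets them release new versions without breaking consumers:

```hcl
provider "aws" {
  region = "ap-south-1"
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"   # placeholder source; could be your own GitHub repo
  version = "~> 2.0"                          # pin so ops can release without breaking you

  name            = "mishra-corp-vpc"
  cidr            = "10.0.0.0/16"
  azs             = ["ap-south-1a", "ap-south-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]

  tags = {
    Team = "platform"
  }
}
```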
So yeah, those are the kinds of definitions you need to provide. VPCs are complex, and I'll show you how complex they are, but all you need to provide in this case is the private subnets and the public subnets using variables. I can show you those, they're right here, just the CIDR blocks for them, and then things like tags and so on. That's it. And once you do that, you can create them. I've already pre-created this VPC because I knew I'd be rushing at the end, so I won't run a plan. What I'll do instead is show you the graph; I think that's more interesting. So I'll run terraform graph, and this is going to be horrible, and save it to a file. This takes a few seconds if it works correctly, because it's quite a big, interesting set of resources, and you'll see why. Okay, the graph is ready. I'll open this up in Code, copy this, and paste it in here. Oh no, I'm giving everything away... okay, there we go, found it. I'll generate the graph. So this is a digraph in DOT format that you can actually generate PNGs from. Here's how it looks. And this is... oh my God. Let me zoom out. I won't be able to fit it on this screen. Okay, let me just go back. So yeah, it's very complex; you can see there's a bunch of stuff in it. But the idea is that I did not need to generate this graph by hand or keep it in my head. I just use the terraform graph command, maybe generate a PNG, and commit it to Git if I want to. You can totally automate this, and you get an evolving graph of your infrastructure over time, which is pretty cool. So that's the graphing feature.

So now let's use this networking VPC layer to create a service. Here I have a service called BuyCoffee. My company name is Mishra Corp, named after me, of course; I'm very full of myself, I don't know why. So here's the BuyCoffee service. What I'm going to do is import the state of the network layer that I created earlier, and I'll show you the state here. Where is this thing? Yeah, okay. So this is the state store. I'll open this and expand it full screen. Here's the state file: it has a bunch of JSON stuff, private subnet IDs and so on, but there's a set of outputs I'm exposing that I use for creating these servers. So what I'll do now is import a module which allows me to create a web server. This is another module that my ops team has published. If I go back to my GitHub repo, you'll see this module right here. It has a set of inputs... oh, I have to show you, sorry, I'm moving across too many screens; this is what happens when you're pressed for time. Okay, here. So here are some of the inputs and outputs. I'm abstracting away a lot of web server logic in AWS; just think about it like that. So I'm providing... okay, five minutes, okay. What I need from the user is the release URL of where the binary for the project is, the name, the web server count, and things like that. That's all I really need. Don't worry about this inventory service right now. And then I'm creating a DNS entry using my Route 53 module as well. So basically I'm creating a two-tier application with maybe 20 lines of code, which is pretty nice, hiding all the complexity. So if I do that, you can actually visit this service right now.
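A rough sketch of what that BuyCoffee service config might look like. The module sources, state bucket, input names, and release URL are all placeholders I've made up, but the pattern is the one from the demo: read the networking team's remote state, feed its outputs into a web server module, and hang a DNS record off the result.

```hcl
# Read the outputs the networking project exposed (VPC ID, subnets, ...)
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "mishra-corp-terraform-state"   # placeholder bucket
    key    = "networking/terraform.tfstate"
    region = "ap-south-1"
  }
}

# Web server module published by the ops team (source and inputs are placeholders)
module "buycoffee_web" {
  source = "git::https://github.com/example/terraform-modules.git//web-server"

  name             = "buycoffee"
  web_server_count = 2
  release_url      = "https://example.com/releases/buycoffee.zip"
  vpc_id           = data.terraform_remote_state.network.outputs.vpc_id
  subnet_ids       = data.terraform_remote_state.network.outputs.public_subnets
}

# DNS entry via a Route 53 module (also a placeholder source)
module "buycoffee_dns" {
  source = "git::https://github.com/example/terraform-modules.git//route53-record"

  name   = "bycoffee"
  target = module.buycoffee_web.load_balancer_dns_name   # hypothetical module output
}
```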
The service is at bycoffee.live-demos.xyz if I want to go to it. But the inventory service isn't working, and that one runs in Google, so I'll get it up and running again in the same predictable way. I can do a terraform plan here, this is another project of mine, but actually I'll just run terraform apply, for time. I've got auto-approve on as well, so I don't have to say yes. This will actually create the inventory service running in Kubernetes. And while this is creating, I'll quickly show you the service here, hopefully show you what this thing is: Google, Kubernetes, BuyCoffee inventory. So yeah, this is a Kubernetes deployment, actually. Again, I'm using the same syntax, the same workflow, but on a different platform, which gives you that consistency that you need. So yeah, that created the service. So now if I go here, I'll see some of the coffees lined up, which is nice. And this is cross-cloud: the servers are running in AWS, the inventory service is running on Kubernetes in Google, and they're connecting right now over the public internet, so you could read everything between them; the idea would be to use a VPN or a service mesh or something to connect them properly.
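For reference, that inventory service is just another Terraform resource, this time against the Kubernetes provider: same workflow, different platform. A trimmed-down sketch with a made-up name and image:

```hcl
resource "kubernetes_deployment" "inventory" {
  metadata {
    name = "buycoffee-inventory"
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "buycoffee-inventory"
      }
    }

    template {
      metadata {
        labels = {
          app = "buycoffee-inventory"
        }
      }

      spec {
        container {
          name  = "inventory"
          image = "gcr.io/example/buycoffee-inventory:1.0"   # placeholder image

          port {
            container_port = 8080
          }
        }
      }
    }
  }
}
```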
Okay, I'll move on to the next, the cool part of the demo. All right, I need time for this. Let me open this guy here. So, what if we could actually make changes in a way that's beautiful and nice, where you can collaborate on GitHub and basically don't have to leave GitHub? So let's actually trigger a change. Let me trigger a change to the networking layer: I'll open this variables file, edit it, and just make a fake change here, remove a bunch of lines, create a branch, and propose a file change. So far you're thinking, he's teaching us how to use Git now; hold on a second, let me get to the point. As soon as this pull request is created, what you'll see is a plan pending right here at the bottom, if you can see it at the back: an Atlantis plan pending. This is Atlantis working in the background. It's an application that listens to Git webhooks, runs a terraform plan for you, and submits the plan back to your GitHub repository. The plan takes maybe 20 to 35 seconds or something, because it actually makes the calls; it does the same thing that I did on the CLI and then reports back the status. The benefit is... time's up? Okay, just almost done. The benefit is that you get what you want directly in the GitHub pull request, and you see predictable changes on the pull request itself. So here the plan is pending, and I can see the file change here; that's just the space that I added, but it could be a server being added or removed or something like that. And then in here... I hope this plan finishes, man. This is unfortunate, I really wanted to show this. Okay, either way, how about I do a quick video afterwards and send it out on my Twitter, and you can watch it there.

And then the last thing I wanted to say is that I can also make predictable changes in JIRA if I want. There is a JIRA provider that allows you to create JIRA tickets. Let me open this quickly, quickly. Sorry, sorry, I know she's staring at me, I can feel her. Okay, so here's the JIRA provider. What I'm doing here is creating an Amazon instance and adding a JIRA issue. Because when it comes to predictability and audit trails, a lot of auditors aren't happy with GitHub pull requests; they're like, no, show us a ticket. So what you can do is associate the output from a resource with a JIRA ticket and create that JIRA ticket, and that's also totally done using code, so you can still use the same workflow. That's the idea. I know it's unfortunate that I couldn't finish the demo, but thanks for coming to the talk. I appreciate it. Thank you.