Okay, so we're gonna get started. If I can't get there from my iPad, I'll eventually get there. Now we're gonna talk about HashiCorp's Zero Trust story, and with Zero Trust we're gonna talk about Vault, Consul, and Boundary. What I'm gonna try to do is briefly go over what Zero Trust means to HashiCorp, go over a little bit of Vault, and then pause and go through the first part of our lab. Then we'll have a conversation about Consul, discuss Consul through the presentation, and then go back to the lab and work through that. So in the next lab we're gonna build a really simple application. It's a little different than the HashiCat application, not as cool, but it has some data that we're gonna load into a database. In order for that application to register itself, or to read the database, we need credentials. Where do we store those credentials? Can we make those credentials rotate automatically? We're gonna work through that with Vault and Consul, register those applications into Consul, into the service mesh, and then create encrypted, least-privilege access between the different applications, which we call the data view app and the database. Okay, looks like I'm all set up here. So, like we said before with Terraform and the multi-cloud world in general, we have a problem with securing all of this. I actually really like this icon. We have the cloud, which is kind of funny, and then we have this perforated line around it. That perforated line is exactly what you see: we've moved out of the four walls of the data center, which used to be trusted. We used to have this trusted perimeter. We'd have a physical data center with one way in and one way out. We had trusted access. We'd have a firewall rule of some sort, we'd have a VPN, and everything on the inside of that boundary was secure, because we knew the only way in was through that one moat. They call it the castle-and-moat approach to security. As the boundaries of the data center have exploded out into AWS and Azure and all the different environments, we've opened up a ton of new APIs. So now we've got this perforated line where the boundary of the data center used to be, and all these different ingress points. Our threat window is open. Now, to get to any of the resources, we don't have a trusted perimeter, we don't have trusted access, and we have unsecured data. We could claim we've hit every single endpoint and we know exactly where everything is, but the truth is, with cloud we have so many endpoints that we have to assume we don't have a trusted perimeter, that we don't have trusted access, and that we don't have secure data. So what if we were to say: trust nothing, authenticate and authorize every interaction, and just assume the bad actors are already in our environment?
So with modern infrastructure, if we don't take care of it, if we don't actually deal with authentication, authorization, and protection of the data, we end up with an infrastructure that's scalable and dynamic, but there's no trusted perimeter, there's no trusted access, and we have unsecured data. We'll talk about the anatomy of a MITRE ATT&CK-style attack later, but the idea is: somebody gets on my laptop, say I accidentally click an ad when I'm trying to get rid of it, and now all of a sudden I have something on my laptop. Somebody can get in, access local credentials on my laptop, get into my network, move laterally through that network, and exfiltrate data. That's the concept behind MITRE ATT&CK. What if we had to authenticate and authorize every single interaction, whether we're a human connecting to machines or services, or a machine connecting to other machines? The concept of Zero Trust is that it all starts with identity. If you remember the pillars of Zero Trust that we showed a little bit earlier on the government side, like the columns of the Roman Coliseum, the first pillar is identity, and it's true: Zero Trust starts with identity. In the past we were just dependent on IP addresses. We talked about this in our multi-cloud conversation, where the IP address didn't really move, so we just trusted that whatever was on that server with that IP was the web server or the database server we were trying to connect to. I'd create a firewall rule between the outside and that new web server, and if every six months I had to replace that web server, no big deal, I'd put a ticket in and the network folks would handle it. And we have three different types of identities. We have application identity, which in federal speak we call NPE, non-person entity. We have network identities, and we have user identities; a user identity would be something like a PE, a person entity. And along with these different types of identities, we add cloud identities. So now we've got a thousand identity providers that we need to broker. We've got Okta and LDAP and AWS and tokens and JWT tokens and Azure and Kerberos. We've got a million ways to identify what a service is, what a server is, who a human is, but how do we broker that? We can use Vault. We can use HashiCorp Vault to broker the different identities of all the different types of entities that we have: application identity, network identity, user identity, cloud identity. We can use Vault as the core broker to establish who you are. So if we're going to secure access to data with this trusted identity, we can use Vault at the core, the center of all of this. But what do I wanna access? I wanna access credentials. I wanna access encryption tokens. I wanna access PKI certificates, so that I can establish the identity and then give the application, or the software commit, a specific PKI certificate that has specific information we can look up. Those credentials are behind Vault, and we can use Vault as the authentication provider that brokers all the different types of identities. So, applications and identity.
We wanna ensure applications can access other applications and databases. So I'm an application, I'm a web server, and today we're gonna actually do this: we're gonna have the data view application, and that data view application is gonna try to connect to a Postgres database. If I wasn't using dynamic credentials, I'd use static credentials. Maybe I have my user ID and password and I want to store them somewhere outside of my configuration file; I could store them in Vault. So if I need to access that database, first I go to Vault, I request access to those credentials, those credentials are provisioned to that web server, my connection string is created, and I'm able to connect to the database. I can also access encryption and decryption tokens, and we'll go over that a little bit, but Vault can do transit encryption and tokenization of data. So I can encrypt data in flight as well as at rest, and I can tokenize data. Say I have a specific type of application data I want to tokenize, meaning I have PII, a credit card, a phone number, a name, and I wanna change those attributes and convert them into something else. Say I have a credit card and I don't want credit card data out there. I can tokenize that data and still do AI, still do all kinds of reports on the types of data, but I'm not actually using the real data. I'm not using somebody's credit card, I'm using a tokenized version of that credential. Vault allows us to do all of that, which we'll talk about in a little bit. And then we have network identity. Okay, so I have my application and I wanna register that application. How do I know I am who I say I am? My application registers itself with Consul, and then I can create rules between applications based on the identity of that application, based on the PKI certificate issued from the certificate authority. I can create those rules between the applications, which we're gonna do a little bit today. With Consul we're also encrypting the data. So not only have we created least-privilege access between web and database, or application A and database B, or web app A and web app B, I'm also doing least privilege and encrypting the data. To the two applications, it looks like they're connecting over localhost, and they're encrypted. So this is a little view of some of the use cases for Consul. At the top, we have multi-platform service discovery. A lot of the time, when we have new applications, what we end up having is a legacy environment, and they're saying, hey, I wanna go into Kubernetes. We'll have a hundred-year-old weapons platform and we wanna go to Kubernetes, but 95% of our application stack is sitting on Windows servers, or even further back, maybe not Raspberry Pis, but some sort of strange infrastructure. So the idea is I wanna be able to put Consul on everything: on a Windows machine, on a Linux machine, on a Kubernetes service sidecar, on a Raspberry Pi, and register all my services into a global service registry. I wanna have all the different applications discover where those services are, and as those dynamic IP addresses change around, I wanna be able to create rules between those applications.
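In Consul those rules are called intentions. Here's a minimal sketch of one, using the data-view and database service names from today's lab; the older one-line CLI form is shown first, then the newer config-entry form, and the details are an assumption rather than exactly what the lab uses:

    # allow only data-view -> database; anything else stays denied
    consul intention create -allow data-view database

    # the same rule expressed as a service-intentions config entry
    cat > intentions.hcl <<'EOF'
    Kind = "service-intentions"
    Name = "database"
    Sources = [
      {
        Name   = "data-view"
        Action = "allow"
      }
    ]
    EOF
    consul config write intentions.hcl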
Then we have the global service mesh and API gateway, and the mesh is actually the encryption and the rules. With a service mesh, we can do away with east-west load balancers. For example, when I moved into the cloud, one of the things I found I was doing a lot was creating a scalable application and then putting a load balancer in front of that scalable application every single time. So I'd have 100 microservice applications, each with an AWS application load balancer in front of it, and that was great except it cost us $50 every single time I spun up a new load balancer, and that really stacked up as the number of applications stacked up. In this case, we can use Consul to not only do the load-balancing piece but also act as the firewall rule. Like I said before, my web application needs to talk to the database. I'm able to create intentions, which are, for lack of a better term, firewall rules between applications, and say, hey, this application is only allowed to talk to this application. So when it goes out and looks for applications, it's only gonna find the one, and it's gonna create an encrypted channel between those two. And because we have a global service registry we can look things up in, we're able to track where the IPs are dynamically. I have my application, it's running in Kubernetes, it's running on spot instances, it's all over the place, and the IP addresses are changing every hour. Am I gonna create a northbound load balancer ticket every single time that service moves around? No. So you wanna be able to do something like network infrastructure automation. You can do templating with Consul, so you can do network infrastructure automation from a software perspective. I can create templates for my Apache or my Envoy API gateway or my Kong API gateway and automatically update those using Consul templating, but I can also update things like Palo Alto or Juniper or any of the network devices I already have in my physical data center. Since I know where all my services are, I can inform the northbound resources, my load balancers, through Consul and Terraform together as a combination. We call that Consul-Terraform-Sync: Consul finds where the IPs are and informs Terraform to update the infrastructure in the northbound load balancers, if you have hardware load balancers. Which, like I said, you might have this greenfield environment, but you've also got a giant brownfield environment where you still have physical machines and services that you need to manage. And then we have our user security: users and identity. I'm a customer success rep and I need to access specific data in my database, or I'm an admin and I wanna be able to SSH to servers. We can use a tool like Boundary. Boundary is our zero trust network access layer. It says, hey, if I log in with my Boundary client, I can see the list of services that I'm allowed to connect to. And then when I wanna connect to that resource, I still need a credential. To get that credential, I can use Vault; it's a joint conversation between Vault and Boundary, and Boundary and Vault can automatically inject dynamic credentials into that path. So if I'm a, excuse me, customer success representative and I'm supposed to be managing the data of my marketing team or whatever... or, let's do this a little bit more in government terms.
I'm working at a health organization and I wanna manage only a specific set of users based on the state that I'm assigned to. I can go into the database and run a query, but I can query only Virginia state healthcare records. I can tokenize and encrypt that data, and I can give credentials only to that specific record or table in the database. Boundary will inject that user ID and password into the connection string, so I never actually have to see the credentials. I can also do something called short-lived credentials. Instead of querying Vault, getting the user ID and password, and storing it in a config file on my laptop, which is the problem, right, that's the MITRE ATT&CK vector, with dynamic credentials I can have those injected automatically and have them expire after 10 minutes. And if I only go in and check on that database once every month and a half, then between the time my last credentials expired, which could have been an hour after I used them, and a month and a half later, those credentials didn't exist. So there's no way to get access to those credentials and then move laterally through the environment and exfiltrate data, because the credentials don't exist; they're not there. So Boundary gives you short-lived credentials, dynamic credentials, and a list of services that only you're allowed to see. That's basically our replacement for a VPN. I no longer have to get access to a VPN and then get access to the entire network; I'm only connecting through a reverse proxy connection to the list of services, and then injecting those credentials. So this is our Zero Trust conversation. We have the two outside pillars, which are authentication and authorization. Vault manages our machine authentication and authorization through PKI. So if we're doing PKI for machines, if we're doing PKI for software bills of materials and we wanna put a timestamped code-signing certificate on them; or, on the right-hand side, I'm a human and I wanna access credentials, I first have to establish who I am as an identity. And once I have that identity, I wanna access those resources, whether I'm a machine connecting to another machine, an NPE connecting to another NPE, or a human connecting to a machine. So the idea is least privilege: authenticate every single interaction, whether I'm logging in as a human or a machine, and also authorize every interaction. Are you allowed to see this resource? You might be able to log into Vault, but you might not be allowed to have the credentials to detokenize or decrypt the data that's on disk. And this is the entire stack. With Vault, we are an identity broker. We do data encryption, dynamic and static secrets, credential management. We have an RPO and RTO story, because a Vault cluster is made of three nodes, so you can lose nodes and still get back to all of your credentials, and we can do data replication across the different regions. And then we have operational governance. Across our enterprise stack we have policy as code. In Terraform, you apply policy as you're about to apply that infrastructure into the world. In Vault, normally a policy says you have create, read, update, delete access to a path in Vault.
To extend that, we can use policy as code to say things like: are they establishing their identity using Ping? We only accept Ping or Okta as our options for identity authorization. Are you coming from IP addresses that aren't on our allowlist? So we can add a bunch of additional checks. Is it after five on a Friday? No, we don't want to give anybody credentials after five on a Friday. There are a bunch of reasons you can use policy as code to decide whether or not you want to give that access to somebody. Then we have Consul: federated service discovery. As an example of federated service discovery, I might have an AWS region and an Azure region and an on-prem region, and I want to do service discovery between all these connected regions. I can do that using Consul, so it all looks like one giant data center with a different domain, and I can still do least-privilege connections between those different services. Simple service mesh, our network infrastructure automation which we talked about, access control with an API gateway. Not only can we do east-west load balancing, we also have the Consul API gateway, so we can do things like blue-green deploys for our services in Kubernetes and even outside of Kubernetes. And then with Boundary we have secure remote access, a software-defined perimeter. Now we're not connecting into the network, we're connecting to a network resource, and then we can do session management and recording. And just a note on the Boundary piece: Boundary Enterprise is now officially a thing, but there's also Boundary open source, Boundary can be used open source, and we have enterprise and cloud offerings for Boundary just like we do with Consul, Vault, Terraform, and Nomad. Okay, I'm gonna stop there. Actually, no, I'm gonna keep going. We're gonna start with Vault and then we're gonna do a little bit of the lab. Okay, so how does Vault work? This is one of my favorite diagrams. Vault starts with a client, and there are three types of clients. There's the API: Vault is all built around a REST API, so every interaction with Vault starts with the REST API, and then we built a CLI client that communicates with that REST API, and then we have a UI. So you can log in through the UI, you can run API commands against Vault, or you can just use the Vault CLI commands: vault login, vault read, vault write. And then we have to authenticate. If we want access to a credential type, we have to authenticate, and we can use any one of these different authentication methods; we call them authentication backends. Active Directory, for example. I might have Azure Active Directory, and I might use LDAP for my machines, so my machines authenticate with LDAP but my users authenticate using Active Directory. And maybe I'm using EC2 instances or EKS; I might wanna use the IAM service role applied to that service, or the instance profile applied to the instance, instead of using AWS keys. Once I've authenticated, that authentication is tied to what's called a policy. That policy gives you create, read, update, delete, and list access to a credential path. I'll explain what those paths are, but basically they're the different types of credentials: secret/ is your static credential path, database/ is your dynamic database credential path. So I can read and write down those paths and get different responses back depending on what I'm trying to read.
So that policy is tied to the authentication step. You have a defined policy that says I'm allowed to access database credentials. Once I've been authenticated and authorized, the credential is passed back to the client. So first step, vault login, that's the authentication step. Authorization happens in the back end. I then do a vault read, and am I authorized to access that path? Yes or no. Send the credential back to the user. The other thing is that every single interaction with Vault is audited and logged, and there's telemetry data around all of this. If I go to any of our tools and hit /metrics, I can get Prometheus-style telemetry, so I can load that into Prometheus or any other Prometheus-style metrics analyzer that can read that type of information. I can get logs and send them off to Datadog, or to Splunk, or to Elastic, an ELK stack somewhere. And then we have data protection, which we'll talk about a little more; we've mentioned it a couple of times, tokenization and encryption. So one of the guiding principles is identity brokering. If you're looking for an identity broker, some way to manage all these different identity providers, Vault is the answer. Like I said, it became a tier-one service in my organization: if Vault was down, none of our applications worked. It was only down once, and it was my fault. It typically is, when something happens, because at my last company I was one person running it for a long time, which was good, because then I was able to get more people. But when Vault goes down, it is a tier-one service, like DNS. If DNS is down, you can't get to anything; if Vault's down, you can't get to anything, because all your credentials are stored in it. So it's important to have an RPO and an RTO, and what we ended up doing was having multiple Vault clusters that we replicated between. And each Vault cluster is actually a cluster, a three-node cluster, and it replicates the data using the Raft protocol. Guiding principle: we want to extend and integrate. Terraform has all these different providers, over 2,900 of them, and we want to do the same thing in the identity world: 100-plus integrations, 20-plus identity providers, secrets engines. The secrets engines are your dynamic credentials, and then all major cloud platforms. Meaning, if I want to use Azure or AWS, I can use the identity provider in that cloud service provider as my authentication step, but then I want to be able to generate Cassandra credentials or Mongo credentials or MySQL or SQL Server credentials, and I can do that with the secrets engines. And then we can also provision AWS and Azure cloud identities. If I want to give somebody access to AWS, but only for 20 minutes because they're checking on something, I can give them dynamic AWS credentials. It gives them access for a short-lived time, and it expires after whatever your time-to-live is set to. Again, API driven. This is an example of a curl to get access to the secret/config path in Vault: I'm pointing at the Vault cluster and saying, please give me access to this credential, and at the top there, in the header, you can see the Vault token; you have to pass that in.
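Roughly what that call looks like, assuming the address and token variables from this lab; on a KV version 2 mount the API path picks up a /data/ segment, so on a v1 mount it would just be /v1/secret/config:

    # the Vault token goes in the X-Vault-Token header
    curl --header "X-Vault-Token: $VAULT_TOKEN" \
         $VAULT_ADDR/v1/secret/data/config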
The API is nice because it can integrate with, say, Golang; all the major languages have libraries to interact with Vault. We use Ruby a lot, we use Go, we use Java, and they all have their own libraries to interact with Vault, and it's the API on the back end. Again, we have auth methods and we have secrets engines. Examples of auth methods are AWS, Azure, GitHub (we actually have a GitHub authentication method), Okta, Kubernetes. Some of these are human-driven authentication and some are machine-driven authentication, like Kubernetes. And when we say something like a secrets engine, it's the ability to generate dynamic credentials for that secret, and you can see all the examples here. Actually, in this case I'm authenticating my service with Kubernetes. I have a service account, I tie that service account to a policy in Vault, and now that I've authenticated, I need to access, this is Postgres, I can't remember what the elephant is. Yeah, I get to access my database, whatever database the elephant is, with a dynamic credential generated for my database. I'm sure everybody online who knows what that is is screaming right now. Is it Postgres? Okay, anyway, your first secret. This is how I started out using Vault. I said, okay, I'm just gonna use Vault as generically as possible, because I just wanna get my credentials out of a file and put them into a credential store somewhere that's highly encrypted and that I have to authenticate to. The first thing I did was take my database credentials, the static user ID and password that I generate and rotate every 90 days, and put them down a key-value path. It's basically something like Redis, except there's an authentication step to get to it. It's just a key and a value. The nice thing about the Vault key-value store is that it's versioned, so you can actually go back. You change your password and then you forget what the old password is and you kind of hand-crammed it; you can go back in the version history, read the old version, and update all the files that still have the old version. So, to use your key-value secret, you say vault kv put; the key-value secrets mount at secret/ is there automatically, you don't have to enable it. And you don't actually need the -mount=secret flag either: one way is to pass the mount explicitly, the other is to just put the path, secret/devdv-api, and then key equals value. The key can be foo, the value can be bar. And you get this value back with a get. So you've put it in there and now you wanna get it: vault kv get -mount=secret and then the path, or again you could just say vault kv get secret/devdv-api, which we'll do today in our lab. We can also get metadata, so we can see what the time-to-live on that credential is, when you created the password in the first place, and the different versions of that credential. And then you can delete a secret just by saying vault kv delete (the commands are pulled together just below). So we covered put, get, and delete, and in this PDF, if you're joining from home, we have access to all these different secrets management resources. You should be able to click through the PDF and access them yourselves; they go directly to the documentation.
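Pulled together, the KV commands just described look roughly like this, using the secret/devdv-api path from the slide as the example:

    vault kv put secret/devdv-api foo=bar          # write a key/value pair
    vault kv get secret/devdv-api                  # read it back
    vault kv get -mount=secret devdv-api           # same read, mount passed explicitly
    vault kv metadata get secret/devdv-api         # created time, versions, TTLs
    vault kv delete secret/devdv-api               # delete the current version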
Again, you'll also be able to do that today in our lab. Dynamic secrets. I've talked about it, but this is actually my favorite story, not because it was a good story, but because I learned from it. I was at HashiConf a couple of years ago, which is kind of funny, working for my old company. I was coming back, I was in San Francisco, landed in Chicago for a layover, picked up my phone, and I had like 50 PagerDuty events. Everybody on my team was screaming, hey, the certificate for our whole domain is down. And why is that? The certificate for the whole domain was down because it expired while I was in the air coming back from HashiConf. And why is that? Because I was only rotating that certificate once a year, because that was the lifecycle on it. When you do something once a year, versus every single time you interact with it, A, you forget that you had to do it in the first place, and B, you forget how to do it. So now every single one of my applications had to reload the certificate across my entire cluster, and remembering which of those applications needed that certificate is a pain. And even if I'm rotating that database password every 90 days, it's something I have to remember to do once every 90 days, which is one of those things I chalk up as toil that I never wanna do again. And I don't wanna make the mistake of coming back to a world where all of my applications are inaccessible because credentials expired while I was gone; that makes me never be allowed to go to a conference again. If we're actually doing dynamic secrets, this just happens every time we need to access a credential: I authenticate my application against whatever identity provider, I go access my credential, and then I'm done. In this case, we're creating the AWS secrets backend, so I'm going to issue people AWS credentials every time they wanna access the AWS console or actually interact with AWS. Say my application only needs access to S3 buckets. In this case, I would first enable my AWS secrets backend, then pass in my AWS access key and secret access key. Just a note here: we're passing in credentials, and a lot of times we don't wanna do that. If we're running this on an AWS service, I would apply the IAM role to the actual server so I wouldn't have to pass in these credentials. Then I tell it what region we're gonna access, and then I write to the aws/roles/my-role path; this is us creating the role and its policy. We're saying this IAM user is gonna use this policy, and in this case this user is only allowed to create EC2 instances. But if this was s3:*, just like I was saying before, that application would be able to write to an S3 bucket, read the S3 bucket, or maybe even delete that S3 bucket. Maybe it just goes and generates short-lived S3 buckets. Well, I wanna make sure I have an IAM user tied to only doing S3 things. Then every single time I want to interact with that, I ask Vault first, authenticate, and it gives me new dynamic credentials that last 10 minutes, 30 minutes, whatever, and expire automatically. And this is me reading that role: I do a read, and now I have my access key ID and my secret access key. (The whole sequence is sketched below.)
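A minimal sketch of that AWS secrets engine setup, assuming an EC2-only role named my-role; the keys, region, and policy document are placeholders, not the lab's actual values:

    vault secrets enable aws

    # root credentials Vault uses to mint new IAM users (skip the keys if the
    # server's own IAM role is used instead)
    vault write aws/config/root \
        access_key="<aws-access-key>" secret_key="<aws-secret-key>" region=us-east-1

    # a role whose generated users can only work with EC2
    vault write aws/roles/my-role \
        credential_type=iam_user \
        policy_document='{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"ec2:*","Resource":"*"}]}'

    # every read returns a brand-new access key / secret key with a lease on it
    vault read aws/creds/my-role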
And you can see the lease duration here. This is 768 hours; I can make that 10 minutes if I want. If I wanna give someone 10 minutes of access to the AWS console so they can go read something in an S3 bucket, I can do that. Again, we have some resources here you can go check out; those are all clickable in the PDF. Okay, Kubernetes secrets is one of my favorites, because we're starting to see a lot more adoption of Kubernetes. In this case, Vault is brokering the identity based on the service account. So for the Kubernetes service account, let's pretend we're running a Kafka cluster, and say ZooKeeper needs access to Kafka. I'm in a ZooKeeper namespace and I need access to Kafka, and for some reason I have credentials for Kafka, so I'm gonna have to ask Vault for those credentials. Well, I know that the service I have running in that namespace, based on the service account, is a specific identity, and I can use that identity to broker the credentials on the back end. So on the back end I might need access to those Kafka credentials, which are down some static path somewhere. I can say, hey Vault, give me those credentials, and Vault will hand them back based on the policy, based on the fact that I've authenticated against my service account and been authorized against the policy that says, yes, I'm allowed to access this path. Now, what you see here with the Vault sidecar: we have what's called a Vault agent sidecar that sits alongside all of your applications and can manage the lifecycle of that credential. If it's a PKI certificate, when the service spins up, it'll apply a PKI certificate automatically. If it's a credential that needs to be injected into a config file, it'll automatically inject it into the config file parameter. And if it's about to expire, the Vault agent, the sidecar, will actually restart that application for you. So you programmatically say, if this needs to be restarted when I update the credentials, just restart the application, and it'll spin it up somewhere else in the environment. This can happen in Kubernetes, but that sidecar can also run on a static VM, and we're gonna do that today with a Vault agent to show what that looks like. In today's example with Vault and Consul, I tried to make the environment as simple as possible to get an understanding of what the applications can do. So when we go through the Vault piece, we're not using Kubernetes and Vault sidecars and annotations or any of that stuff; we're just gonna run a Vault agent binary that's pointing at a config file. Here's an example of us injecting: in our Helm chart, we have server.dev.enabled set to true, we set our Vault address to the Vault server, and then we give it a path to where we wanna get that credential from. You can see that in this case the secret is at the path secret/helloworld, the role, which is the Vault role applied from the service identity, is my-app, and then we can take that data and inject it into the lifecycle of the config file that needs those credentials. So in this case, it looks like we have a string here: our data is a map of the password, which is foobar-baz, and the username, which is foobar-user. I would get that back from that secret/helloworld path and inject it into the connection string. (A sketch of the Kubernetes auth setup behind this is below.)
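Behind that flow, the Kubernetes auth method setup looks roughly like this; the role name, namespace, and service account names are this sketch's assumptions, not the lab's:

    vault auth enable kubernetes

    # point Vault at the cluster's API so it can validate service account tokens
    vault write auth/kubernetes/config \
        kubernetes_host="https://kubernetes.default.svc:443"

    # bind a service account + namespace to a Vault policy
    vault write auth/kubernetes/role/my-app \
        bound_service_account_names=my-app \
        bound_service_account_namespaces=zookeeper \
        policies=my-app \
        ttl=1h

    # the injector side is then driven by pod annotations such as
    #   vault.hashicorp.com/agent-inject: "true"
    #   vault.hashicorp.com/role: "my-app"
    #   vault.hashicorp.com/agent-inject-secret-config: "secret/data/helloworld"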
Some resources here on how to use Vault with Kubernetes, how to use the Helm chart, and how to use the agent sidecar injector. Again, one more time on database credential rotation. I have a web application, and actually we're gonna do this here in a second. We're gonna have a web application called DataView; it's basically a Go app that runs a web server, and when you do a curl on that web server, you get back a JSON blob of a user, with all of its privacy information, pulled from the database. When we first set it up, we'll set it up with static credentials, and then we'll turn on Vault, set up the AWS auth role, set up a secrets engine for Postgres, the elephant probably, and actually get dynamic credentials injected into the config file; it's hands-off from that point on for database credentials. To start that, we do vault secrets enable database. You can see here the connection string for that Postgres user. We're using the Postgres database plugin and we're giving it a connection string, username, and password. Those curly brackets just mean it's a variable: when you query that endpoint, it fills in a dynamically generated username and password. Then we apply the allowed roles, and the username and password. And when we vault read that, we get a new username and password, dynamically generated, with a lease duration of one hour. What we're gonna see today is that there are a couple of ways you could do this. You could say, I have a startup script that starts my application; I could vault login to authenticate, vault read that path, and inject it into some sort of template. But we can also use the Vault agent, and the Vault agent handles all of that for you. You give it the authentication method, you give it the credentials to use with that authentication method, you give it the path the credentials are on, you give it the config file you wanna update, the template file, and the lease duration. And you also tell it what happens when the credential expires, so it can restart the application and inject the new credentials. Those resources you can find here as well. Last but not least, one of my favorites: I spent a lot of time with the Air Force PKI SPO, and it's fun. We're doing things like NPE PKI: building your own certificate authority for service-to-service communication, building code-signing certificates for Gitsign or Cosign to sign the different applications that we wanna use. We need a CA so we can generate a certificate that's tied to a commit, or a certificate that's tied to a service that's running. It's not necessarily a user; it's an NPE, a non-person entity, actually requesting that certificate. Every time I request one, I say, here's who I am first, here's the path to those credentials, and if you are who you say you are, I'm gonna issue you the certificate that goes alongside your application. We first do a vault secrets enable pki, that's our secrets engine, and then when we write to that path, in this case with a time-to-live of 24 hours, we get our certificate bundle. So at the Vault level, we can be a root-level certificate authority, or we can generate an intermediate certificate from a higher authority. Say we were using, in my example, the Air Force PKI.
And my sub to that would be an intermediate certificate that might have the Air Force PKI or the DoD PKI as the root. I authenticate to it, they give us a PKI certificate, and then all of my applications and all my code signing can be done through NPE communication, using Vault as the identity broker for our environment and requesting short-lived certificates. So instead of having unencrypted traffic between all of our applications, now we have a CA that manages the lifecycle of those certificates and can do so automatically. Like I said, certificate management is a real pain, especially if it only happens once a year or once every 90 days. It's much better if you work it into an automated workflow that you don't have to think about anymore. Again, resources are here. And the last thing I've touched on a couple of times: data encryption and tokenization. In this case, we need to encrypt the data in transit, or encrypt the data at rest. I can do that by enabling a secrets engine. I say here's my plaintext, and my plaintext is a credit card number. In this case I'm base64-encoding it as well as encrypting it, and then it shows you what your ciphertext is afterwards. Then I can take that ciphertext and decrypt it back to plaintext, and then also base64-decode that plaintext. So that's two steps: the decryption that Vault does, and the base64 decode that anybody can do. Just a note: if you have a base64-encoded file, anyone in the world can decode it. It's not encryption, it's encoding. Well, technically it could be called encryption that everybody has the key to. Anyway, that's data encryption. And one more, that's it actually. So we're gonna go back to the Zero Trust lab and I'm gonna start with the first few examples here. Okay. I'm gonna check the Slack channel anyway, see if there's anybody talking in there. Nope. Okay, Zero Trust principles, go to the deck. Here we go, that's what I was looking for. Okay, we went through the Terraform one; now we're gonna do the Zero Trust application security lab. All right, so the first thing we're gonna do is create an application. We're going to set up Vault, so first build a Vault server, build our application server, create data and load that data into the database, and then start our web server that has to interact with our database. We're just gonna use static credentials at first, and show you what that workflow looks like. Again, I put my pretty picture here, the same triangle in a different color, but the idea is: I have my multiple clients, I authenticate first, I'm authorized through my policy, and then I'm given access to a set of secrets based on my policy, and those are issued back to the client, whether the client is a web front end, the Vault CLI, the Vault UI, or the API. This takes a little bit because I'm actually spinning up a few containers, and if we have 350 people also spinning up containers, that's gonna be fun. We'll see how Instruqt handles how many people can do this. We're waking the hamsters, that's what it's doing at this point.
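While the containers spin up, here's roughly what those transit encrypt and decrypt calls look like; the key name cc-data and the card number are made up for this sketch:

    vault secrets enable transit
    vault write -f transit/keys/cc-data

    # Vault encrypts base64-encoded plaintext and hands back ciphertext
    vault write transit/encrypt/cc-data \
        plaintext=$(echo -n "4111-1111-1111-1111" | base64)

    # decrypting returns the base64 plaintext, which anyone can then decode
    vault write transit/decrypt/cc-data ciphertext="<ciphertext from the encrypt response>"
    echo "<base64 plaintext from the decrypt response>" | base64 --decode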
Do you have any questions while we're waiting on this, about Vault and use cases? We hit some of the big ones. Rotating the root database user? Yeah, there's actually a rotation utility that manages that credential, that's a good question. So say I do have a 90-day requirement; I can set it to every 30 days or every 10 days, and rotate that admin user automatically as well. Yeah, it's one of those chicken-and-egg things: wait, there's this service that's just managing all these brokered credentials and can do whatever it wants. Yes, that can be automatically rotated as well. The same thing goes for the PKI CA. I still have a CA that I have to manage, and that can be automatically rotated too, and then every leaf certificate... we can actually have two. Say you have a 90-day rotation on intermediate certificates at least; if you're not doing one-day or 10-day rotation, you're doing 90-day rotation. I would spin up a secondary intermediate, and this can all happen automatically, and then all new leaf certificates get issued from the second one, the old ones expire off, and you've moved over to the new intermediate certificate. So there's a rotation aspect for PKI as well. Okay, so Vault. We want to play around with Vault in general. Remember I talked before about the three different types of clients: we have the Vault CLI, the Vault UI, and the Vault API. In this case we're using Vault 1.13, which I'm pretty sure is the latest, or pretty close to it, because I just built this. We're gonna start the Vault service. Actually, let's go over here and look at the Vault UI first. This is what the Vault UI looks like. We have to log in with a token; I'm pretty sure I set the token to be root. Yeah, I did. Very simple token. Okay, just a little background on a Vault cluster. When you spin up a Vault cluster, you typically spin up three nodes and tell the cluster to expect three. If you're running this in AWS or Azure, you can set tags on the three servers in that cluster, something like vault_type = vault-server, and those Vault servers know to look for those tags, because we've set the configuration file to go look for them, and they automatically join the Vault cluster. So I could have an environment, which I did: I set up an environment where my Vault and my Consul clusters were all on spot instances. If you haven't used spot instances, they're way cheaper, 70 to 80% cheaper, but they're not stable at all. So I was losing Vault servers and Consul servers all the time, but I wanted to inject chaos as a kind of modus operandi. I would always have chaos going on in my Vault cluster, but it didn't matter. I'd lose a Vault server, a new one would come up and run all the configuration management, and it would go, hey AWS, am I really a Vault server? Yes, you are. And then it would join the Vault cluster automatically. The identity brokering piece of that says, yes, Vault, you are who you say you are, based on the fact that you have this role applied to you, and then it gets the key that's in the key management system and automatically unseals the Vault. Because when a Vault server spins up with its configuration file, it has access to the data that's local, but it's all encrypted. Normally I have to pass in the unseal keys to open up the Vault; it's the decryption mechanism for unsealing. Instead, we can use what's called auto-unseal, set in the configuration. (A rough sketch of that config is below.)
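A rough sketch of what that server config can look like, assuming integrated Raft storage, AWS cloud auto-join tags, and a KMS key for auto-unseal; the tag names, paths, and key alias are just this example's, not a production config:

    cat > /etc/vault.d/vault.hcl <<'EOF'
    storage "raft" {
      path = "/opt/vault/data"

      retry_join {
        # cloud auto-join: discover the other Vault servers by their AWS tags
        auto_join = "provider=aws tag_key=vault_type tag_value=vault-server"
      }
    }

    # auto-unseal: pull the unseal material from AWS KMS instead of manual keys
    seal "awskms" {
      region     = "us-east-1"
      kms_key_id = "alias/vault-unseal"
    }

    listener "tcp" {
      address     = "0.0.0.0:8200"
      tls_disable = true   # lab-style only
    }
    EOF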
But if you're using something like an HSM, or you're using AWS or Azure, you can store those keys there, provide access to them, and automatically unseal your Vault cluster. Then you can add that little level of chaos where you just kill a server, a new one comes up, and you still have a fully running Vault cluster. Okay, so, Vault CLI. It looks like we've already started the Vault service: if I do a systemctl status vault, we're running. And I'm just gonna do a quick check. There are two environment variables that are important with Vault: VAULT_ADDR and VAULT_TOKEN. VAULT_TOKEN is less important if you're using AWS or some other authentication method, but if you don't set up an authentication method, your core source of authentication is a Vault token. There's a default root token created when you first start a Vault cluster, and in this case I just set it to root, because this is a dev environment; I started this in development mode. To start Vault in development mode, you just say vault server -dev. Very simple: it spins it up and unseals it for you, does all the fancy stuff just so you can play around with credentials, but you never use it in production. We're doing this for the lab, so it's no big deal. So those are the two; there's a third Vault variable that's important. In this case, VAULT_ADDR is http://vault-server:8200 and VAULT_TOKEN is root. The third one would be VAULT_NAMESPACE. If you're using Vault Enterprise, you can do things like multi-tenant Vault: I might offer a service to all my customers, and each customer gets their own tenant within that Vault service. So I'd say VAULT_NAMESPACE equals opengovco, for example. That's the third one you'd wanna set. Okay, so I wanna do a vault secrets list. These are the secrets engines; we keep talking about authentication methods versus secrets engines, and this is the list of secrets engines. There are a bunch that get created automatically. The sys/ one is basically a default standard: all of the stuff that happens behind the scenes happens on this path, you don't touch it, it's really just for control, policy, and debugging. The secret/ KV path is literally static credential management: I can store my credentials, I can have them expire at a certain time, and they can stay static credentials. This is where I kind of started, the KV secrets engine, and then I moved into using dynamic credentials. And then cubbyhole/ is almost like a LastPass for Vault: you can store something in a cubbyhole that's gonna expire, and somebody can go grab it, so you can hand something to a specific user in Vault. But we're just gonna explore putting a key and a value on a path, on the secret/info path. So I'm gonna first set my variables. We'll say the name is the fed, and the age, way too old, 45. And then vault kv put: we're gonna write that to a path. I'm gonna say vault kv put secret/info, that's the path, and then name equals, since I don't wanna type all that, I'm gonna copy and paste, and then age, and then close the bracket and close the quote. So now I've written to the secret/info path; the data is in there, which is interesting. If I wanna go get that path, I can say vault kv get on that info path. All right, so this is the metadata I was talking about before.
So you can query just the metadata, or I can query the data that's actually in that path. I can say, give me this information, secret data info, and you can see age is set to 45 and name is set to the fed. Now, if there's a reason you wanna get this in a different format, you can say -format=json. So maybe I wanna get it in JSON; I have to type that out here, I know it's somewhere, vault kv get, yep, -format=json secret/info. There we go. And then I can do something like jq on .data.data.age, and now I get 45, just the specific thing that I'm looking for. There are a couple of different format types. If you're writing bash scripts, jq is a very easy way to parse the JSON. So that's another way you can do it. But I can also go into the UI now that I've logged in, go back to the secret mount, go into the info directory, and there are my age and my name. I can show them, or I can just copy them into my buffer. So that's just a normal static credential store. It can be FIPS 140-2 Level 2, and we're coming out with another binary at Level 3, and we can store credentials in an HSM, which is FIPS 140 as well, using Vault as the intermediary. So that's Vault, the Vault CLI. This next part is just us starting the Vault cluster. All right, we're gonna check to make sure that's kosher. Again, we saw the Vault token; the Vault token is set to root. Okay, so in this example, OpenGovCo, which is this amazing company, has a Postgres database with a list of their users. The following variables will be helpful when interacting with the database: the host name, the Postgres port, the Postgres user, the Postgres password, and the database variable. What I'm gonna do is take this tempusers.json file, which is 50 records of random users with a bunch of random personally identifiable information, and use the data load command to load it into the database. But first we're just gonna interact with the database, just to show that we can. So I'm in the database; it's an empty database. Do a list, and you can see the databases. Before this started up, it actually built a users database, so I'm gonna connect to that, and then I'm gonna list it. Right, so there it is, there's the users database, and then I'm gonna quit. All right, so now that we know we have a users database that we can connect to locally using the Postgres user, I'm gonna start loading data into it. First I'm gonna cat my file. So I've got this big file, it's got 50 different users and they're all random, but I've been told that I really need to get these into our CRM database as soon as possible, I need to deal with the dynamic credentials, and I want some sort of application that interacts with this data. So the first thing I'm gonna do is load this data into the database. Oh, first I'm gonna check to make sure it's empty, so I'm gonna run this psql command using the existing user, the existing password, the database, and the port, and do a select * from users. This should come back empty; there's nothing in here, no users exist yet. And then I'm gonna use my data load command, which uses that connection string.
I'm gonna pass in the user ID and password, which are literally postgres and postgres, so it's not very difficult, and I'm gonna load this users.json file in there. So now: data loaded successfully. I should now be able to query that database. I'm connecting with psql to the users database and selecting name, email, Social Security number, and credit card from users. So I'm just gonna copy that little command, and I get a list of the users that are in that database, nicely formatted. It's in a database, it's formatted nicely, and I can now do things like query it, which I just did; I just got four columns out of the, I think there are like 12 different columns I can get out of here. So now we've loaded data into the database. We have a full database, but we're using credentials where the user ID is postgres and the password is postgres, which nobody likes. But for right now, we're just going to start up our data view service. I think it's called data view, not data viewer; I'll have to go back and fix that. We're gonna start the data view web server, and then we can do a curl on that web server and get back a random user, an element from that users database. Every single time I run that curl, it gives me a new example. So I'm gonna do systemctl status data-view. Okay, it's dead. So we're gonna start it, run that again, and now it's running. Okay, and now that it's running, the config file I'll show you in a minute, I can just do a curl on app-server, which is the name of this server, on port 888, and I get a random user. I can keep running that until the cows come home and I get a different user every time I query. So that's my web server, it's real, real fancy. And then, just real quick, the config file for this data view web server is found here. Whoa, cat, cat. It's very simple: the username is postgres, the password is postgres, the Postgres port, the database name, and the web port. Those are the different elements of my configuration file for the data view service. Now, I have to figure out how to rotate this password automatically; I don't wanna leave it sitting as postgres and postgres. So we're just gonna check, and get through to the next one. Okay, so this is the best part, well, one of the best parts, I think. Now we're gonna enable Vault dynamic database credentials for Postgres. It's a little bit longer of a process; we kind of saw what that looked like over here. So, the last challenge: we started with the data view web server. The web server binary is at /usr/local/bin/data-view and the config file is at this data view YAML file. Most organizations have to change passwords every 90 days. So instead, we're gonna enable the Vault dynamic database credentials and use a different config file. Since we already authenticated against Vault, first we're gonna enable the database secrets engine. This database secrets engine has the ability, based on the plugin, to interact with almost any database out there, everything from Snowflake to Mongo to Redis to MySQL to Cassandra. If you have a database, for the most part it'll work; you just have to use that flavor of the secrets engine plugin. In this case, we're going to configure it with the PostgreSQL database plugin. This is probably the elephant.
And we're gonna configure this database connection using this string. You see the connection URL: every time you query, it uses this connection string, a new username and new password, at app-server:5432 (5432 is the port), and then it connects to the users database. So you're actually setting up a connection configuration for the specific data view role in Vault. So I set that up, and now I've created the role. We can tie that role to specific database privileges within a creation statement. So if I have this data view role and the database name is users, I can tie the creation statement to it: whenever you create a user in a database, you usually tell that user what roles you want to give it in the database. This is nice. Again, I go back to my customer success example. I had customer success reps and they were only allowed to connect to, we'll call it American Eagle as the retail brand they were managing. We didn't wanna give them access to other retail brands they weren't allowed to access. So how did we do that? We gave them specific dynamic credentials to a database, or to a table in a database. You can grant roles down to a record in a database if you want to. In this case, we're gonna say grant select on all tables in schema public to the generated name; we're basically giving access to the users database. And then here's the database name. So we're saying we're gonna create a role in the database that's tied to the user we're about to generate. So we're going to copy that into here. Okay, so now let's test that role. We can do a vault read. Before, we did a vault read on the KV secret/info path; in this case, we're gonna do a vault read on the database/roles/data-view path. So let's do vault read database/roles/data-view. And now you can see, oh, this is the actual role itself, I'm sorry. This is the role, and you can look at it and see what your default TTL is. If you wanted to modify that, you could go back to the previous statement and change the default. If you were doing this for something like a PKI certificate and you wanted to manage the time-to-live for leaf certificates, you could do that through this role. In this case, there's a default TTL of one hour: if you need access to the database, you have an hour to do it. If you have some users that might need three hours because they need to exfiltrate a lot of data, like some bad actor, you'd give them access for three hours versus one hour. That was a joke, by the way. And now, instead of reading the role, we're gonna read the creds on that path. Okay, so now you can see we have a username. The username is v-token-data-view-blah-blah-blah, and then the password. Every time I query this, it generates a new user ID and password in the database with an expiration on it. You can go in and do a vault token revoke on a specific token ID, so I can look at that username, take that token ID, and expire or revoke what I've handed off to a user. And if you have an application that only needs short-lived credentials and it's logging in all the time, you might wanna give it like one hour, and it'll just rotate those credentials. (The whole sequence we just ran is sketched below.)
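End to end, the commands we just walked through look roughly like this; treat the connection string, role name, and TTLs as a sketch of the lab's setup rather than an exact copy:

    vault secrets enable database

    vault write database/config/postgres \
        plugin_name=postgresql-database-plugin \
        connection_url="postgresql://{{username}}:{{password}}@app-server:5432/users?sslmode=disable" \
        allowed_roles="data-view" \
        username="postgres" \
        password="postgres"

    vault write database/roles/data-view \
        db_name=postgres \
        creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
        default_ttl=1h \
        max_ttl=24h

    vault read database/roles/data-view    # inspect the role and its TTLs
    vault read database/creds/data-view    # mint a brand-new username/password

    # leases can be listed and revoked if you need to pull a credential back early
    vault list sys/leases/lookup/database/creds/data-view
    vault lease revoke database/creds/data-view/<lease_id_from_the_list>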
And if you have an application and you only wanna give it short-lived credentials and it's logging in all the time, you might wanna give it like one hour, right? And then it'll just rotate those credentials. But a lot of times, there's really no need to rotate the credentials that often; you rotate them once every few days, 10 days, 30 days, whatever you need to do. So in this case, we're just gonna set those username and password values. Well, first we're gonna actually get the value. So in this creds variable, I'm saying vault read with the format set to JSON — so I'm getting it in JSON format, and I'm getting the creds from that data view path. So if I set that, I can echo dollar sign creds, and now I can see the JSON version of that path. Right, so now there's my user ID and password, and it changes every time I query it. Okay, but now I'm gonna take that PGUSER and PGPASSWORD and just grab those two bits of information out of that data. So this is something I would do if I was actually running this in a Bash script: I would set the creds variable, grab the keys, and then set the user ID and password from that, right? So I'd grab the entirety of that JSON and then use the jq command to actually set PGUSER and PGPASSWORD. So this is an example of how you could do that if you're running this from the command line. So echo dollar sign PGUSER, echo dollar sign PGPASSWORD. Right, so there you go. Oh, which it gives you right here — I could copy that. Okay, so now I can use those credentials, PGUSER and PGPASSWORD, and query. Did I put the password in there? Hopefully this works. Oh, okay — the reason why I don't have to pass in the password here is because PGPASSWORD is one of those automatic variables: if Postgres sees that there's a PGPASSWORD set, it'll automatically pick it up. So I only had to — thank you, I would've sat here for a while. There you go. So now I run my psql command and I can get just the last name, or just the entire name, of the first five entries in the database. So this is just using psql, and I've used PGUSER and PGPASSWORD, which are automatic, and I pulled them from Vault as a dynamic credential. And in an hour, those will expire. One thing I didn't write into this was the ability to revoke those credentials, but we can show that in Vault. I'm gonna have to go back to that one and do it while I have it built out. I can't remember the exact commands, but there's a way to list all the generated credentials, revoke one, and check the metadata on them.
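Just to capture that Bash pattern before we switch to the auth method — a sketch, where the appserver host and the exact psql query are placeholders:

# Pull a fresh credential as JSON and export it for psql
CREDS=$(vault read -format=json database/creds/data-view)
echo "$CREDS"

export PGUSER=$(echo "$CREDS" | jq -r .data.username)
export PGPASSWORD=$(echo "$CREDS" | jq -r .data.password)

# psql picks up PGUSER/PGPASSWORD automatically from the environment
psql -h appserver -d users -c 'SELECT name FROM users LIMIT 5;'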
Okay, so we have the secrets engine built and we're interacting with Vault based on the root token, but now we wanna add in an auth method that allows us to get access to that credential. So we're gonna use the AWS auth backend. With Instruqt, it gives us the ability to use AWS keys. So we're gonna use that AWS access key ID and secret access key as the authentication method to access those credentials, instead of using the root token, which we wanna do away with. We wanna use a policy that only gives us enough access to get to the credentials that we need, and not full-blown root access to the world. Okay, so the first thing we wanna do is enable the AWS auth backend in Vault. This will allow us to authenticate and authorize using AWS IAM. So before, we enabled a secrets engine; now we're enabling an auth backend. We're gonna first say enable aws, and you can see the description here — I'm passing in a description. Success! Enabled the auth method at the aws/ path. Next we need to configure the AWS backend, so I can configure that and say secret key equals this, access key equals this. Now, because of the environment I'm in — I enabled AWS in this lab — I got free tokens, a free access key and secret access key. So if I do env and grep for AWS, they're already in here. You can see I get these for free. Now, normally you wouldn't get these for free. You'd have to, based on the identity of the server or the service — maybe I was running in ECS — use a service identity tied to an actual policy in my account. All those things could be generated dynamically, and I would have to actually set them as environment variables, but in this case, since that's what I have, I wanna show the use case here. Okay, so we have these keys here. So I'm going to configure the AWS role. But I should first do a vault auth list. Before, I just had the token auth method — that was the root token that I passed in. Now I have the AWS auth method as well. So technically the Vault UI should be able to see AWS in here. I don't see AWS, I have to refresh. Yeah, I can't remember why that's not working; we'll figure that out later. But vault auth list showed us that AWS is available. Let's see — oh, if we go to the auth methods: if we just go in, use the root token, Access, Auth Methods — now we have the AWS auth method here, and you can see the configuration for that auth method. So, in the shell window, create a policy for the data view role. We talked about the three steps, right? A client talks to an authentication method, that gives you access to a secrets engine, that gives you a credential, and it passes it back to the client. That policy step, that authorization step — we need to create a policy file and tie it to the AWS role. So in this case, we're creating a data view policy, and that data view policy should give you access. The nice thing with Vault policies is you say, okay, path, and here's the path. So I'm gonna give it the dynamic credentials path, and I'm gonna give it access to the auth path for renew-self — so I can do a renew on my own token in this case. I can do a lookup on the leases; if I don't have that, I can't even check when my credentials expire. Auth token accessors: I wanna be able to do a vault read on a token, and if I can do a vault read or a vault token lookup on a token, I can actually see what policies and what paths are applied to that token. The nice part about that is, I used to work with a bunch of developers and they were always asking for additional credentials, and they would, for whatever reason, fail in trying to access a Vault path. And I'd say, well, can you do a vault policy read or a vault token lookup and tell me what policies are applied to that token? When you do the vault policy read, it lists out all these paths right here. So I'd say, well, you're trying to get to the secret database directory, and really it's the database passwords at database/creds/data-view, right? So that's why I enabled this token accessor lookup, the ability to read that, so that they can do a list and see what paths they have access to. It just helped with troubleshooting. In this case, we're giving access to one real path, which is the database/creds/data-view path. So now this client will be able to access this, okay? So all I did here — I didn't do anything in Vault, all I did was create this HCL file, right?
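For reference, the enable and configure commands, plus a policy along those lines, might look like this. It's a sketch: the exact capabilities and the extra lookup paths in the lab's policy may differ a bit, and the environment variables are the Instruqt-provided AWS keys.

# Enable and configure the AWS auth method with the lab's AWS keys
vault auth enable -description="AWS IAM auth for the data view app" aws

vault write auth/aws/config/client \
    access_key="$AWS_ACCESS_KEY_ID" \
    secret_key="$AWS_SECRET_ACCESS_KEY"

# A policy that only grants what the app needs
cat > data-view-policy.hcl <<'EOF'
# Read dynamic database credentials
path "database/creds/data-view" {
  capabilities = ["read", "list"]
}

# Let the client renew its own token and inspect its leases
path "auth/token/renew-self" {
  capabilities = ["update"]
}
path "sys/leases/lookup" {
  capabilities = ["update"]
}
path "auth/token/lookup-self" {
  capabilities = ["read"]
}
EOF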
So I just created the policy file, and now I want to apply that policy in Vault. So now I'm creating a Vault policy called data view and I'm pointing it at this specific HCL file. Now, another thing, as far as troubleshooting or production environments are concerned: instead of doing it this way, I started to manage all of my Vault policies and all my credentials via Terraform. With Terraform, I can use the Vault provider and apply policies through Terraform. That way, if there are changes to somebody's Vault policy, it doesn't happen outside of my version control system. Say I have a developer and they want access to a new path — there's an approval step that goes through that, right? Somebody has to approve it, a second person. So I would typically do all of the Vault workflows through Terraform, but in this case, I'm just showing you how to do it with the Vault binary. And then we're gonna write this new role. You can see the auth type is IAM, the policy is data view, we have a max lease of one hour, and we're binding this to a specific IAM principal using those credentials — so I pass in my account ID. So now I'm using the auth method for AWS instead of my root token, basically. So now I'm gonna say vault login, and the method is AWS in this case. Before, if I did a vault login, you could say method equals token and then you'd pass in the root token. And then the role is gonna be data view, which we just created, and then the access key ID and secret access key. So I'm logging into Vault — this is my authentication step. So now I'm logged in, and I'm actually given a token back, which is basically the interaction between me and the Vault cluster. You can see there's a token lease duration there. And now I wanna be able to read this policy again. Remember what I said before: now that I'm logged in, not with the root token but with my AWS-authenticated token, I can still do that vault policy read I was talking about. If I'm working with the developers and they're having an access issue on a path, I can say, okay, do a vault policy read and tell me what you have. And then now I'm gonna actually read those credentials. Because I was given read and list access to this path, I can do a vault read on that specific path and get a new user ID and password. So that's setting up the AWS authentication method and the secrets engine. Now, if I were doing this programmatically, in a world where I want very similar outcomes every single time, I'd probably be using Kubernetes, and I'd probably be using Terraform as my form of deployment for all my policies and the writing of those roles. Everything would be done through Terraform programmatically, and I'd have to get approvals. What I wanna show here is the bare-bones example of how to do this and how you interact. So we can think about that triangle: client, authentication method, secrets engine, and the secret going back to the client — whether that client is a machine or a service or a user. So then we do the check, and it should say, hopefully, that we're well done.
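Strung together, that AWS auth step looks roughly like the commands below. It's a sketch: the bound IAM principal ARN is a placeholder for whatever account or role the lab gives you.

# Apply the policy, bind it to an AWS IAM principal, and log in with IAM credentials
vault policy write data-view data-view-policy.hcl

vault write auth/aws/role/data-view \
    auth_type=iam \
    policies=data-view \
    max_ttl=1h \
    bound_iam_principal_arn="arn:aws:iam::<your-account-id>:*"

vault login -method=aws role=data-view

# Prove out what the new token can do
vault policy read data-view
vault read database/creds/data-view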
Okay, so I think the next one starts off with Consul. Oh, okay, we have one more step, the Vault agent. So now I don't have those static credentials anymore. Now, how do I manage the lifecycle of those dynamic credentials? We have a one-hour lifecycle on that credential, and I wanna automatically rotate it. If I were running this in Kubernetes or some other orchestration tool, I would run a sidecar called the Vault agent that would automatically rotate those credentials for me and inject them into a path. We're gonna do this now with just a Vault agent running as a service from the command line. We're gonna do a systemctl start on the Vault agent, and we're gonna take a look at the configuration file used to manage those credentials. Again, this takes the place of the person who's actually going out once a year and rotating those credentials for you and reaching out to your certificate management, your CA. So we're gonna install the Vault agent, configure it, start it up, and see that it's injected those credentials into the application. Okay, so if you remember, there's a data-view.yaml file — cat /etc/data-view.yaml. This was our web server config and this is how it interacted with our database: username and password were postgres, right? We wanna change that. So we're gonna switch that over to the data-view-dynamic credential file. That doesn't exist yet — we have to start the Vault agent to generate that dynamic file. In this app code tab, we have examples of the following. So we have a template. This is actually what the Vault agent is gonna read and then generate a file from for us. You can see at the top here we have "with secret", and then we have that path that we created in the secrets engine. So with this secret, we're gonna inject the username and the password into this template, and out of this we're going to get a data-view-dynamic.yaml file. Oh, how do we know that? Well, this is the Vault agent configuration file. So auto_auth — we're saying use AWS as the authentication method, and we're gonna use the role data view, which we set up earlier. And then there's a token involved here, and that's what this sink is: it's just saying, when you manage the token and the lifecycle of that token, it's gonna be here in this path. Then there's the Vault server that we're actually gonna interact with, where these credentials are stored. The source template is this data-view-dynamic template, which we just took a look at, which has that Go-style templating, and the data-view-dynamic.yaml file is the output, right? And then once that's rendered, what do I wanna do? I'm gonna restart on change, in this case. So there's a service script for this, and the agent just does a systemctl restart on the data view service. So this is the configuration for the Vault agent, and this is the template that the Vault agent is going to create a file from. Does that make sense? Any questions about that? So that's kind of a lot of pieces and parts. Yep, the TTL, yep. Yeah, so Vault agent, exactly. So on startup, we're gonna start the Vault agent first, and then we're gonna restart our data view web server with the new configuration file. After that, the data view service will restart on its own: we're saying, watch the time to live on that credential, and when it expires — or right before it expires — render a new data-view-dynamic.yaml, which our service script points to, and then restart the data view service, right? So, exactly.
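Here's roughly what those two files can look like side by side — a sketch with assumed paths, Vault address, and YAML field names, not the lab's exact files. The shape is what matters: an auto_auth block that logs in with the AWS role, a sink for the token, and a template stanza that renders the secret and restarts the service.

# vault-agent.hcl (sketch)
cat > /etc/vault-agent.d/vault-agent.hcl <<'EOF'
vault {
  address = "http://vault-server:8200"   # assumed Vault address
}

auto_auth {
  method "aws" {
    mount_path = "auth/aws"
    config = {
      type = "iam"
      role = "data-view"
    }
  }

  sink "file" {
    config = {
      path = "/tmp/vault-token"          # where the agent keeps its token
    }
  }
}

template {
  source      = "/etc/data-view-dynamic.tmpl"
  destination = "/etc/data-view-dynamic.yaml"
  command     = "systemctl restart data-view"
}
EOF

# data-view-dynamic.tmpl (sketch) -- Go/consul-template style templating
cat > /etc/data-view-dynamic.tmpl <<'EOF'
{{ with secret "database/creds/data-view" -}}
username: "{{ .Data.username }}"
password: "{{ .Data.password }}"
database: "users"
host: "appserver"
port: 5432
{{- end }}
EOF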
Yeah, so what he asked — this is very important. From a developer, app dev perspective, I don't actually modify my application. All I'm doing is passing in a new user ID and password, so I don't have to modify my application at all. I could if I wanted to, right? I could actually have the application use the Vault library for Go or for Ruby or for Python to interact with Vault and then restart itself or just update its configuration file. But I don't have to. I can use this Vault agent as a sidecar to that service, and it manages all of that for it. In a lot of environments, we just don't have the ability to write Vault into the application and make application changes; we wanna do it from the outside in, right? That's a great question. So, we just took a look at that. We're gonna do a vault read on that data view path — you can see we get a random user ID and password. We're gonna quickly look at both services. So this is how the data view service is started: it's a Go binary, you pass in that config file, and instead of passing in /etc/data-view.yaml, now we're gonna use the data-view-dynamic file, which we're gonna generate. And the Vault agent service is literally just vault agent: we pass in that configuration file we just took a look at, and that's it — we just have some log levels on there. So the first thing we're gonna do is start the Vault agent, right? So this Vault agent should go out — let's just make sure it's still started: systemctl status vault-agent. Okay, we have it running. So technically we should have our YAML file that was created. So we're back over here, and that template generated this YAML file. Now we have a new username and password and we can restart. So now we do the status — and you can see the difference here: cat /etc/data-view.yaml versus the other one, right? So now, yeah, this one. That's the template, and that's the file that was generated, right? So instead of using postgres and postgres now, we're actually using these dynamic credentials. And again, to go back to Camden's previous question, which was, well, how do we manage the manager in this case? There's a way to rotate the root credential that Vault itself uses to manage the users in the database, and that can happen automatically as well. I didn't put that in this lab, but there is a way to do it. So, do we start the database? Yeah — we did not start the data view service. So we're gonna start data view, which works off of that service script, which points at the new data-view-dynamic YAML. And now we should be able to query the data view web server just like we did before and get a different user every time we query it. And that's it. So now we are authenticating with AWS, we're generating a dynamic credential, and the Vault agent is managing the lifecycle of it. So now, when I go to HashiConf next year, I don't have to worry about my certificates or my database credentials expiring on me for whatever reason. I think the next one's Consul, and I wanna pause there. Yes, okay, the next one's Consul. So I'm gonna quickly go over Consul and its features and then we'll do the last couple of challenges. I think we have 30 minutes, so that's the perfect amount of time to go through what Consul does for us, play with Consul a little bit, and then we'll be done. So, okay. So we have HashiCorp Consul. Consul starts off as a tool for service discovery. I actually started using, believe it or not, Consul first. I started using it in 2014, which was very early on in the world of HashiCorp tools. Vagrant was the first tool, and I think Consul came next.
Vault was out at that point, but I started using Consul because I was in the data center and I wanted something that could manage services from the data center and my AWS cloud. So I literally was using it for the use case that we talk about here, the multi-cloud world, where I want one common tool and one workflow for two different places. So I started off way over on the left here, with service discovery and health monitoring. I actually wrote a script that monitored the health of all my services using service discovery in Consul, and once something went unhealthy, it would page out to PagerDuty. That wasn't the best monitoring tool at the end of the day — I ended up switching over to Prometheus — but it worked for me as a one-person shop at the time. So it does monitoring. The other thing is, when you query something in Consul, you can query it via an HTTP API, and you can query it via DNS, right? So I can just replace my normal DNS with Consul DNS, and now I can query all my services and I'm only getting a response back from my healthy services. So if I want to know where all my IP addresses are, I'm only getting the healthy IP addresses back from that DNS query. We have a secure service mesh, which means, basically, for all my services to communicate with each other, I want least privilege. I want to authenticate that you are the identity of the application you say you are, and then I want to create least privilege and encryption between those applications. Automation — again, network infrastructure automation, Consul-Terraform-Sync. If I know my service discovery, I know where all my applications are and how healthy they are, and now I can update my northbound application resources, whether that's something virtual like HAProxy or Apache or a Kong, or my physical infrastructure like my F5 or my Palo Alto. And in the last two years, we added an API gateway. It used to be that we were only at that layer-four service mesh layer, and we didn't actually have an API gateway where we could do higher-level abstractions and blue-green deploys and path routing and all that kind of stuff. So now with Consul, we can inject ourselves at the endpoint, the northbound resource — that's the API gateway — and in all of our east-west application traffic. Oh, it's gonna make me do this. Okay, cool. Central source of truth to track services: instead of saying, hey, somebody else needs to manage my services for us, when the application is deployed, at deployment time, it registers itself. And at that point, we can say, here's my health check — my health check is at slash health on this port, right? So I can say my web server health check is on /health on port 888, and if it's not responding there, then it's down. I can also do that with a TCP port — there's a bunch of ways you can do the health check. And then once the health check is positive, it registers in the service registry as an official service that you can query. In this case, you can see Consul applies a service identity to any service registered to it. So whether you're running in AWS, you're in Kubernetes, wherever you're running your services — on-prem, in the cloud, on a Kubernetes orchestration — we have a common way to do identity provisioning for our NPEs. And now we can establish the identity and issue certificates.
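Quick aside on the DNS piece before we get to the CA part. Assuming the default Consul DNS port of 8600 and the data-view service name from this lab, a lookup looks like this, and only healthy instances come back:

# Query Consul DNS directly; only healthy instances are returned
dig @127.0.0.1 -p 8600 data-view.service.consul

# SRV records also include the port the service registered with
dig @127.0.0.1 -p 8600 data-view.service.consul SRV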
So this is interesting about Consul: Consul can also act as a CA without Vault. But if you already have Vault and you have a root of trust — you've done your ceremony for PKI and Vault is the approved way you do things — Consul can say, I wanna use Vault as my CA, and Consul will act as the intermediate. Or we can just cut Consul's built-in CA out of it and only interact with Vault. So we'd establish the identity based on Consul and then issue the certificate with Vault. It's a very cool way to do it. A while ago, Vault used Consul as its storage backend, but now Vault doesn't have to use Consul. We're trying to take a model of separating concerns across all of our products. So if you just wanted to buy Consul, you could use Consul. If you just wanted to buy Vault, you can use Vault, without having all these intertwined application use cases. And every one of our products has a lot of integrations. Nomad uses Consul — it doesn't have to. Nomad uses Vault — it doesn't have to. Same with Kubernetes: we can use Consul or Vault, but we don't have to. Terraform and Consul can integrate. There's a bunch of integrations between all of our products, but they don't have to be used together. And this is great for us, because we have conversations where people say, hey, if the answer is Consul, I don't want to have the conversation — we only use Istio here, that's just how we do it, we're only a Kubernetes shop. And that's fine, because we can talk about Terraform and we can talk about Vault, because we've separated those concerns. And that's what we're talking about here. Okay, and security: secure service mesh, secure connectivity, so we're doing mutual TLS. We have traffic encrypted in transit. So if I'm web server A and I'm database B and I want them to communicate, I can create intentions between those two services that say, hey, least privilege access between these applications. And it looks, for all intents and purposes, like application A and database B are only talking over localhost. To them, there's a data plane that looks like there is no other network outside of it — if I'm connecting over those ports, it just looks like a localhost connection. This is super helpful in mitigating the developer burden to enforce security. So now, instead of me having to go to the network team to issue a cert, or to find out where these services are and update our northbound resources, I don't have to do that anymore. I just register my service into the service mesh, and now I can establish my identity and get encryption and least privilege access between my applications. And to be honest, the people who understand the application, and what should connect to what, the most are the ones who are writing the application. Like, I know in my data view app that the only thing I'm connecting to is that database, so that's the only thing I need to create an intention for. But sometimes, three or four weeks after you've created it, the network guy says, hey, what firewall rules do you need open? And you kind of forget, and then you have to go chase things down. It just becomes a pain. So if the developer is doing it at deployment time, as a part of the deployment mechanism, then things are remembered a lot easier. Consistent security: we can do things like policy as code. We can also separate infrastructure silos — I can actually create Consul namespaces as well. Just like with Vault, where I wanna offer a Vault namespace for a tenant.
I could have multiple customers in their own Consul namespaces. One example: if I had multiple enclaves and I wanted to have those on the same network, but logically separated so they are not able to communicate, they could use Consul as the multi-tenant environment serving their own version of Consul. And if I wanted to, I could create mesh gateways in between the different networks. So it's basically like I'm creating multiple different networks on the same service mesh. Governance, same thing — policy as code. I wanna make sure I have additional policies on who has access to what, and that's where we're adding policy as code into the lifecycle of the Consul service. Delivering zero trust principles to access and communication. If we go back to that — I actually really like this. So we've talked a lot about our applications. If we go back to those CISA pillars, that Roman colosseum picture, we've got identity, right? The cornerstone of our security is identity, and how do we do that? We broker different identities with Vault. Devices: how do we issue PKI certificates? We use Vault to issue PKI certificates to individual devices. Networks: how are we segmenting network traffic based on the application identity? No matter if we're scaled to a million containers, or 40 million containers like we talked about earlier with Nomad, or we're down to one node talking to another, we can use Consul and PKI and scale these network rules all the way out to highly-scaled environments. Data encryption: how do we create data encryption? We can use Vault to create encryption keys and then allow access only to specific tables or databases or even records, based on the encryption or tokenization keys that I have in Vault and that I have to authenticate to use. Again, visibility and analytics: I can hit that metrics endpoint on every single application and get telemetry data, and I can also do audit logging on all those applications — Consul, Vault, Terraform, Nomad. Automation and orchestration — we talked about Terraform — and governance, policy as code. So really, in the CISA pillars, we kind of hit all of them. So this is the zero trust story: how do we authenticate and authorize every single interaction? Let's go back to Consul real quick. Okay, so I want to talk about the API gateway. Let me just play here. Yes, okay, cool. Trusted connections, access control, simplified traffic management. Okay, so I talked about simplifying traffic management. If I only had two services, no big deal. But when I scale out — maybe I have a web server with 50 different tasks in one service behind a load balancer — every single time I added a new task, I'd have to scale that out and then manage those IP addresses. In this case, I write one rule: if the service name is service A and it's connecting to service B, write a rule between them. And that rule propagates across the entirety of those applications connected to each other. So in my example earlier, when we had a Super Bowl commercial and our front-end web server had to scale out to a ton of servers, we could do that, because we could scale out automatically and the rules don't change. The rule was service A connecting to service B; just the number of tasks in that service increased, which is no big deal. Access control and trusted connections. Again, least privilege, credentials, encryption.
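So what does one of those least-privilege rules actually look like? As a sketch — the service names here are assumptions based on this lab, and newer Consul versions manage the same thing through service-intentions config entries — it can be as simple as:

# Allow data-view to call postgres, and nothing else
consul intention create -allow data-view postgres

# Check what would happen for a given source/destination pair
consul intention check data-view postgres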
Reduce risk by authorizing and encrypting all communications — I think I've said that enough times today. Reduce OPEX cost by gaining greater insight into networks and managing at scale. So instead of having a million rules connecting individual IP addresses around our environment, we have one rule for a service communicating to another service, and our rule list goes way down. I remember having to manage network firewall rules in a production environment, especially when we started going into the private VMware environment, where things were getting bigger and bigger with a lot more IPs — that became very cumbersome. Now we can reduce that, and we don't have to put in as many tickets. Flexibility to connect applications to any runtime. So am I running on Windows? Am I running on Linux, a VM, a container? I can run Consul anywhere. I can run in any cloud, and across those clouds I can join them together like they're one big federated environment, and I can run on any application platform. So Lambdas, ECS, Kubernetes, OpenShift, you name it — we can run Consul at the service level, scale out and scale in, and use Consul to do our service mesh. And that's where I'll stop, and then we'll talk about Boundary. We might not be able to get to Boundary; the deal with Boundary is there's a whole lab that goes along with it. I've actually added it into this, so you guys should be able to connect to that lab and work on it on your own. In 15 minutes I won't be able to get to it; I wanna show you Vault and the Consul agent at this point. Okay, so Consul is at the heart of zero trust machine-to-machine communication. I've talked about the beauty behind it, the PKI infrastructure. In this case we're not gonna use Vault as the PKI CA, we're gonna use Consul as the CA, just because there's a lot that goes into switching that over, and I wanted to show that ability to segregate the concerns and have Consul be its own CA. Normally, especially in environments with high governance, you'd have a Vault environment that was already established. You'd have some sort of root signing or root CA creation and generation ceremony, that would be recorded somehow, you'd take those keys and put them in a vault somewhere, and you'd have two-person integrity and all that kind of stuff. In this case we're gonna do it on the command line very quickly, but you get the point. Consul is a critical component of zero trust for machine-to-machine, or NPE, non-person entity, traffic. So I'm gonna set some variables first. Consul has the concept of a domain and the concept of a data center. So say we had multiple data centers but they were all in the same domain — we could set those separately, right? I could have data center one, us-east-1, us-gov-east-1 or whatever, but those could all be a part of the same domain and I could federate them. In this case we have one Consul server. Normally a Consul cluster is three to five nodes, so you can lose nodes and still have a working Consul cluster, and we could federate those between the different environments. So right now we're saying, okay, my Consul config directory — where I'm gonna put my Consul config — is consul.d, and my cert directory is gonna be the consul.d/certs path. Okay, so I'm gonna set those variables on the command line just so we have them, and I'll use them later. The first step I'm gonna do is create what's called the gossip key.
Gossip is the protocol that allows for consistency of data between the servers in a cluster. So I have a three-to-five-node Consul cluster and I need to be able to share that data between all those servers. So I use the gossip protocol, which is the standard protocol for sharing that data between all the Consul nodes, and if you wanna join the cluster, you have to have this key, right? So, the gossip key. Now we're gonna create that gossip key with the consul keygen command, and then we're gonna store that in Vault — but first we're gonna put it in a file here for the Consul configuration. So I should be able to cat this file. Okay, so now we have this gossip key that was generated by the consul keygen command, and then, just so we have it for later when I wanna bring up the app server, I'm gonna put it on this path, secret/consul/gossip, and I'll do away with it for now. So now we don't have to worry about it — it's in Vault somewhere. So the next time this restarts, I'm gonna use that same key that we stored in Vault. Now I should be able to see that. Okay, so we don't have anything in the certs directory, it's empty, and now I'm gonna create — so this is the part where I'm creating a CA. So I'm creating the private key and I'm setting the domain. Normally when you create a CA, you do things like: here's the domain I'm on, here's the CRL — there's a bunch of attributes about that certificate that you want stamped onto any leaf certificate that you issue. So right now I'm just saying, in this case, the data center is DC1 and — what's the domain? The domain is set to opengov.co. Look at that. Okay, so we're gonna create the TLS certs. So this creates my private key and my public key. And then we're gonna create the certificates for this specific Consul server, right? When I have multiple servers, I'd actually create one, two, three, four, five — however many Consul nodes I have in my cluster — I'd create separate keys for them. And then I'd put them in Vault and have that Consul cluster go access the credentials it's allowed to use to connect to the Consul server. In this case I have one Consul server — it's waiting on one server to join — so no big deal. And then, just so we have them for later, I'm gonna write those keys to Vault. So you can see I have all these key files here. I'm gonna put three of them — the CA, the certificate file and the key file, basically a bundle — in Vault so that I can access them later. So what you see here is I'm saying vault kv put — remember that secrets engine we used for the static credentials — secret/consul, the CA, cert and key files, right? Key is equal to — now, in this case these are key files, so they're formatted weirdly; I can't just say the string is this. In this case I'm saying read in this file name — that's what the at sign does. So I'm saying key equals @ whatever's in this file. It just makes it easier to read and write back and forth. So I'm gonna write those to Vault, and you can see they all went in there. So if I wanted to actually vault read that — vault read — now you can see. It looks strange, right? It's a map of all of this. But if I got it in JSON, it would actually be formatted nicely.
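A rough sketch of those commands, assuming the dc1 data center and opengov.co domain we just mentioned — the output file names come from the -dc and -domain flags, so yours may differ:

# Gossip encryption key, stashed in Vault for later
GOSSIP_KEY=$(consul keygen)
vault kv put secret/consul/gossip key="$GOSSIP_KEY"

# Create the CA, then a server certificate signed by it
consul tls ca create -domain=opengov.co
consul tls cert create -server -dc=dc1 -domain=opengov.co

# Bundle the CA cert, server cert, and server key into Vault (@file reads the file contents)
vault kv put secret/consul/ca \
    ca_file=@consul-agent-ca.pem \
    cert_file=@dc1-server-opengov.co-0.pem \
    key_file=@dc1-server-opengov.co-0-key.pem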
And then we're gonna create the Consul configuration file. So first we're just gonna touch the file. server.hcl is the file that the Consul server is gonna use when it comes up. It's gonna have things like: what's the Consul server name? Is it a server? If it's set server equals false, it's just a client — just a client in the giant federated environment. My bind address and client address, that's just what it's gonna listen on; if you had a specific IP address you could specify it. The domain — this is the DC1 data center... oh no, vice versa: the domain is opengov.co, and then the data center is equal to dc1. The data directory: this is where, when gossip is happening, all the data being sent back and forth between the different nodes is gonna be stored, encrypted, in this data directory. And then bootstrap_expect: if I had a five-node cluster, I'd say bootstrap_expect equals five. In this case it's a one-node cluster, so it's bootstrap_expect one. And then this is just where to store the PID file, so if I restart, I know what that PID is. Okay, so consul connect. Consul Connect is that intentions thing I talked about — remember I talked about firewall rules, being able to say that web A is only allowed to talk to web B. Consul Connect basically puts an Envoy proxy in front of the service and does a reverse proxy between those services, and only allows the connection between those two specific services. That's Consul Connect. If I wanted to use gRPC, I could turn gRPC on — that's what I'm doing in this case. I can use TLS, and then I'm saying, hey, all that gossip traffic and all my API traffic, I wanna encrypt. I'm gonna use those PKI certificates we talked about, and I'm gonna verify the host name, incoming and outgoing, on every single interaction. So this is the TLS configuration here, and these are the certificates — saying verify them, and then also verify the internal RPC host name. So, are you truly who you say you are? Yes — does your certificate say that you are? Yes. And then do auto-encrypt. So from a client perspective, it's gonna just look at that TLS certificate and, if it has the key, it'll automatically encrypt the traffic between the two services. And then we're gonna enable the UI. So that's the entirety of the Consul config, right? There's a million other configuration parameters, but in this case it's a very simple Consul configuration. So we're just gonna write that out to a file called server.hcl in /etc/consul.d. That looked weird, didn't it? cat server.hcl. Okay, it looks like that worked. Okay, start the Consul server. The first thing we're gonna do, because I'm running Consul as the consul user, is change the ownership on those files, and then I'm gonna start Consul. systemctl status consul — okay, so Consul's running now. So now I can run things like consul members: give me all the nodes that are running in our Consul cluster. Well, in this case we have one server — it's this Consul server. Another thing you can do is consul catalog services. So we have one service in the entirety of the cluster, and then you can say, give me all the data centers. So if I did have a federated environment — EC2 in us-east-1, my Azure environment, my on-prem — and I asked for a catalog of my data centers, I could list all the data centers here, I could list all the services in those data centers, and I could list all of the nodes in those data centers. So it's very nice, from an admin perspective, to be able to see the entirety of my data centers and my network from one place.
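Pulling those settings together, a minimal server.hcl along the lines we just described might look like this. It's a sketch: key names shift a little between Consul versions, and the gossip key and certificate paths are placeholders.

cat > /etc/consul.d/server.hcl <<'EOF'
node_name   = "consul-server"
server      = true
bind_addr   = "0.0.0.0"
client_addr = "0.0.0.0"
domain      = "opengov.co"
datacenter  = "dc1"
data_dir    = "/opt/consul/data"
bootstrap_expect = 1
pid_file    = "/var/run/consul/consul.pid"
encrypt     = "<output of consul keygen>"

connect {
  enabled = true
}

ports {
  grpc = 8502
}

ca_file   = "/etc/consul.d/certs/consul-agent-ca.pem"
cert_file = "/etc/consul.d/certs/dc1-server-opengov.co-0.pem"
key_file  = "/etc/consul.d/certs/dc1-server-opengov.co-0-key.pem"
verify_incoming = true
verify_outgoing = true
verify_server_hostname = true

auto_encrypt {
  allow_tls = true
}

ui = true
EOF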
And then we now have a UI. So if you're managing Consul, you can see the nodes and do the exact same thing I just did from the command line, but now from the UI. We have nodes — close that — this is our node. If we had services, they'd be listed here. And this is where we can do all of it — we can create intentions. Consul can also act as a key-value store. So if you have a reason to use a key-value store — I ended up using it a lot for configuration. If I'm using something like Ansible or some other configuration management tool, I would use Consul to store all my keys and values, just configuration parameters. So I might have a case where, in us-east-1, my ZooKeeper node is one thing — I'd put that in the key-value store to say, if I'm in us-east-1, my ZooKeeper node is this; if I'm in us-west-1, my ZooKeeper node is this. I might have different reasons to have different configuration parameters based on the data center I'm in, and those I can store in Consul as key-value pairs. And I can interact with that key-value store just like I do with Vault — I can say consul kv get and the path. So it's very nice, from a configuration management perspective, to be able to see those. Okay, so now we have Vault started, we have a Vault agent, and we have the Consul server started. Now we need to start a Consul client, because we wanna register that data view service. By using the Consul client to register your local services, you can actually go and see where they are, right? Additionally, Consul provides a range of health checks. So when I start my data view service — when I start Consul — I can put in a little configuration block that says, when I register with Consul, I'm gonna register on this port, with this health check, and here's the connection string. So I'm gonna pass in these variables again. Okay, remember the key-value entry that I added for the gossip encryption, and my CA file? I'm gonna write those here on my app server. I stored them in Vault before, and now I have access to them. And the same with my encryption files — I got those from Vault and now I have them. Okay, so my client configuration looks very similar to my server configuration. The difference here is server is equal to false, right? It's not server equals true — I'm a client. All the same things: data center, data directory, domain, all that kind of stuff, log levels, retry_join. I could go into this, but basically I'm saying, if I for whatever reason come off the network, retry joining to this server. Now, if I was using AWS, I could use AWS tags to tell it where the Consul servers are, and I could automatically discover the Consul servers and not have to manage those addresses. TLS certificates: I'm giving it my key material so that it can join the cluster and communicate over TLS, and then I'm gonna verify and auto-encrypt the traffic. So I'm gonna write that config out and then just start the Consul client on my app server — this is the data view server. systemctl status consul. Okay, so technically all I've done now is start the client locally. So now I should see in my nodes list that there's a new node — I haven't started anything else, there are no services registered on it yet. So now I have my app server and my Consul server. Two nodes are registered, and I'm running the client. The next step is gonna be, okay, let's register these services.
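Before that, just to pin down that client config next to the server config — a sketch, with placeholder paths and the same assumed domain and data center:

cat > /etc/consul.d/client.hcl <<'EOF'
server     = false
datacenter = "dc1"
domain     = "opengov.co"
data_dir   = "/opt/consul/data"
encrypt    = "<same gossip key as the server>"

# Static join to the single server in this lab...
retry_join = ["consul-server"]
# ...or, on AWS, discover servers by tag (cloud auto-join)
# retry_join = ["provider=aws tag_key=Environment tag_value=consul-server"]

ca_file = "/etc/consul.d/certs/consul-agent-ca.pem"
verify_outgoing = true
verify_server_hostname = true

auto_encrypt {
  tls = true
}
EOF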
Now, okay, it's 5:30. I can quickly do the last step, because I think there's only one more step here, and then we can be done. Yep, this is the last one, and then we'll be done for the evening. Consul Connect. Okay, I talked about registering my services; now I'm gonna enable Consul Connect on these services and then start up the Consul sidecars to manage TLS. I'm gonna show you the health check, the service identity, and then how Consul Connect stands up here. So I'm gonna set my environment variables one more time, do a consul members so I can see those nodes that are now in the Consul cluster — the ones we saw in the UI earlier. consul catalog services: we only have the one service, consul. Okay, so now I wanna add this data view service to the Consul registry. So I'm deploying my application, and alongside it I have a little configuration file that says: the name of my service is data view, here's the port it's listening on, and the upstream destination — I'm creating a least-privilege policy between it and the Postgres database and the port it's connecting on — and then the health check that's associated with it. So if I try to get a list of data view services, it would only give me the ones that are answering on that port with the name data view, and it checks that health check every 10 seconds with a one-second timeout. If you have a very short timeout and maybe you get overloaded, you'll start seeing flapping in all your services as they enter and leave the Consul service registry. All right, so I just created the file that data view is gonna use. I'm gonna change the ownership to the consul user, and then just reload Consul. So now, if I look at the catalog again, I should see the same two servers, but now I have the data view service and the sidecar proxy that goes along with that service. And that sidecar proxy is gonna create the least-privilege access between it and the Postgres database, and it's going to manage the PKI certificate. Okay, so now we have this data view service in the registry, but it's down. Why is it down? Because it can't reach the Postgres database — we're not running it yet; we're not running it as a service in the service mesh. So we have our nodes and we have the two services now. We have the data view service — actually, if I click on this data view service, you can see the downstream services are unknown, and my upstream is unknown as well, because the Postgres database isn't registered yet. So let's create that service file real quick. We're putting that in the /etc/consul.d directory. So we're saying, okay, we have a new service, we're gonna use a sidecar with Consul Connect, the health check is found here where the database is at port 5432, and we're naming it Postgres DB, just like we said it was gonna be called from that other service definition, and we're gonna change the ownership real quick. One thing we can do, just like terraform validate and terraform fmt, is a consul validate. It's saying right now, hey, I read your HCL file, it looks valid to me. Consul configuration can also be written in JSON — we have the ability to write either JSON or HCL. I'm gonna do one more reload, and now I'm gonna run my consul catalog services, and I've got my Postgres database with its sidecar proxy. So as we were saying before, the sidecar proxies are basically a reverse proxy. So now we've created basically a direct line between the web server and the Postgres database, and all I'm looking for is a localhost connection when the web server talks to the database. So now I do have to start those sidecar proxies.
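To make that concrete, here's a sketch of what those two service registrations and the sidecar startup we're about to do can look like. The service names, ports, health check path, and file names are assumptions based on this walkthrough — in particular, the data view port is whatever your config file says — not the lab's exact files:

# Service definition for the data view web app, with a Connect sidecar
cat > /etc/consul.d/data-view.hcl <<'EOF'
service {
  name = "data-view"
  port = 8080            # placeholder: use the web port from your data view config

  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            destination_name = "postgres-db"
            local_bind_port  = 5432
          }
        ]
      }
    }
  }

  check {
    http     = "http://localhost:8080/health"   # placeholder port/path
    interval = "10s"
    timeout  = "1s"
  }
}
EOF

# Service definition for the database
cat > /etc/consul.d/postgres-db.hcl <<'EOF'
service {
  name = "postgres-db"
  port = 5432

  connect {
    sidecar_service {}
  }

  check {
    tcp      = "localhost:5432"
    interval = "10s"
  }
}
EOF

consul validate /etc/consul.d/
consul reload

# The step we do next: start a sidecar proxy for each service
# (Envoy shown; Consul's built-in proxy via `consul connect proxy` also works)
consul connect envoy -sidecar-for data-view &
consul connect envoy -sidecar-for postgres-db &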
So actually, if I go into the Consul UI, it looks like I have a Postgres database, but there's something missing, right? It looks like something is down on this database. That's because the sidecar is not running, so I have to go start that sidecar. I'm gonna start the sidecar for both the data view service and the Postgres service. So now, consul catalog services, and I should see all my services, and they should all be green in my catalog. Data view now looks 100% healthy. I click on the instances, the Postgres database, and I can see the connections there. So now these are all my health checks. If I were an admin trying to figure out what was going on, I could see what was down and what was up just by looking here, along with the health checks that come out of it. And there are no upstreams from the Postgres database — it's the data view service that has the Postgres database as its upstream. So that is it. So now we are connecting with least privilege, authenticated and authorized. We have managed PKI certificates between all our applications. We have encrypted data between our NPEs. We have dynamic database credentials. And so we have zero trust from an application perspective. And the next step is to actually go in and use Boundary. So we have an intro to Boundary, so that our humans can connect to services or machines with least privilege access. But that's in the next one. So we finished Terraform, Consul and Vault today. Thank you so much for being here. I'm sorry I went seven minutes over, but I'm kind of impressed with myself that I got that close. So very excited, and thank you so much for being here, everybody online and in here. Thank you.