And hi, everyone. How's everyone doing? Last talk of the day. Ready to go? Starting two minutes late. All right. So my name is Seth. I'm the director of technical advocacy at a company called HashiCorp. How many people have heard of HashiCorp before? That's cool. I've been here for three years now, and when I first asked that question, there was like one person, and he worked for us. So it's pretty cool to see growth. For those of you that don't know, we make a bunch of open source tools: Vagrant, Packer, Serf, Consul, Terraform, Vault, and Nomad. You might recognize some of the logos up there. But today I'm here to talk to you specifically about Vault, how Vault can solve security problems, and in particular the Cloud Foundry Vault service broker that we wrote to integrate with it.

But first, I want to talk about the past. This is less "why did we write Vault" and more "what problem was it trying to solve?" If you think of a traditional monolithic architecture — the whole app, all in one, probably written in Java or Haskell or COBOL — everything was inside the application. You had a public-facing load balancer that was most likely an F5, because that was the only thing that existed at the time. You had some Barracuda firewall, because you saw it in an airport advertisement once. Your entire application was on the same machine, and all communication happened within that app. Your database was right there; there were no microservices. You had a physical data center where you controlled everything, and TLS was terminated at your firewall. Basically, your zone of trust was the node. Once you were inside that node and inside the application, all of your processes and communication happened inside the application. So your biggest security risks were actual root exploits in the code, or physical access to the data center — someone actually breaking in, removing your rack, and stealing data off of it.

But when we move to a cloud architecture or service-oriented architecture, we have similar problems, but they're different. We still have a load balancer, but it's a shared load balancer running on shared infrastructure, probably implemented in software at the L3 or L4 layer rather than dedicated hardware. We don't have firewalls anymore; we have security rules, which exist more at a software layer than a hardware layer. And we have a bunch of applications that do one thing and do it really well, and we have to communicate between those processes, typically over some type of network connection or an RPC call.

So the challenge here is that we're running on shared infrastructure. Do we inherently trust the infrastructure we're running on? How many people here trust their cloud provider? If you say yes to that, you should read the paper about how someone used shared CPU caches to attack SSH on AWS t2.micros — you could jump from one machine to another through a shared CPU cache. So do you trust your cloud provider? The answer should be no. We have both internal and external requests: not only are you communicating with microservices, but you might be communicating with third-party services, like databases as a service or external DNS providers, and routing all of that traffic. Inherently, nothing is trusted. Every request requires some type of authentication: can this person talk to me, and should I send them a response? And more importantly, we need a break-glass procedure.
We need a way that, in the event that we detect a breach — we detect an intrusion on our network, we think that our data has been leaked — we can shut down as few services as possible to contain that breach. And that's really why we wrote Vault.

So what is Vault? Well, at its base, it's an encrypted key-value store. You can think of it as something like encrypted Redis or encrypted Memcache. You write some data, the data is encrypted in transit via TLS and at rest via AES-256, and you can retrieve the data back out. But we can push it even further, and Vault can act as a data encryption pass-through, or encryption as a service. So instead of having Vault store the encrypted data, Vault can actually encrypt the data and return it to you in an API request. Meaning, you give data to Vault in plain text, and it gives you back the encrypted text. In this way, it's providing encryption as a service. You store that encrypted text in your application's data store, like Redis or Postgres, and when you want the plain text back, you go back to Vault and you say, here's the cipher text, the encrypted blob; give me back the plain text. This entire process is authenticated and audited, so we know who or what is making these transactions in the system.

But again, there are other tools that do that. So where does Vault separate itself? Well — oh, I double-clicked — first, Vault can actually perform what I call dynamic secret generation. So instead of just storing secrets, Vault can go out and generate credentials. I always like to ask this question: how do you generate a Postgres credential? Well, if you work for a startup, you probably Google "how to create Postgres credential," you copy and paste some things from Stack Overflow, you get back a username and password, you put it in a text file, and you forget about it until six years later when you're on the front page of Hacker News explaining why you got breached. If you're an enterprise, you file this thing called a JIRA ticket, you wait six to eight weeks, and you get your password emailed to you in plain text. With Vault, we provide programmatic credentials as a service. An administrator — whether that's a DBA or a security administrator — configures Vault with the SQL to run to generate a Postgres user. Then, as a developer or a machine, I make an API call to Vault that is authenticated and audited, and I get back a unique credential. And I'll show you that in a minute.

And we can push that even further. We have credential generators for AWS IAM. I don't know how many people here have ever tried to generate an IAM credential, but it involves clicking in the UI 33 times — you can count, 33 times — and it requires someone with privilege to do that. With Vault, we can configure it once and then it's just an API call. How many people here love certificates? How many people here have run their own certificate authority? How many people here would want to do it again? Exactly — all the hands went down, for those of you watching the YouTube stream later. Vault acts as a certificate authority, and it's by far the easiest certificate authority I've ever used. Not only can it validate, but it can generate certs. And because it does so via a single API call, you can create certificates with incredibly short TTLs and CRLs, and you can rotate them on demand. And because it's an API, you can generate them and only store them in memory, which means even if your system is compromised, the certificate never lives on disk — it's only in memory.
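To make that concrete, here's a rough sketch of what that admin-side setup and developer-side call might look like using Vault's database backend. The mount path, role name, and connection string are illustrative assumptions, not the exact configuration from this demo:

    # Admin: mount the database backend and teach Vault how to create Postgres users
    vault mount database
    vault write database/config/postgresql \
        plugin_name=postgresql-database-plugin \
        allowed_roles="readonly" \
        connection_url="postgresql://vault:SUPERSECRET@localhost:5432/postgres"
    vault write database/roles/readonly \
        db_name=postgresql \
        creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
        default_ttl="1h" max_ttl="24h"

    # Developer or machine: one authenticated, audited call returns a unique credential
    vault read database/creds/readonly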
And more recently, one of the newer features in Vault 0.7.something is the ability to act as an SSH CA. Vault has had the ability to manage SSH for a while, but what we can do now, using Vault's SSH CA backend, is generate a CA, store and manage it in Vault, and put the CA's public key on all of your machines as a trusted key. Then, whenever I want to SSH into a machine, I take my own personal key pair — the one I use locally to SSH into my blog and push code to GitHub — and I ask Vault to sign my public key for some given duration of time. If I'm authenticated and authorized to do that, I can then SSH into that machine with my existing credential. I don't have to generate a new credential or do anything special, and Vault manages all of that for me.

And then, because we think of Vault as secrets as a service, we're trying to push the boundaries of what we consider security. How many people here have ever implemented two-factor authentication, most commonly TOTP? TOTP is a spec — you can build it in pretty much any language, and there are lots of libraries out there to generate these codes. What Vault can do is basically scan that little bar code you get and be a TOTP generator. So it can replace something like Google Authenticator or Authy or 1Password. Now, I don't actually recommend that — Vault doesn't have a mobile app like Authy, so I wouldn't recommend it for average day-to-day use. But if you have something like a shared root cloud account, like the AWS root account, you should be enabling two-factor auth on that. And while you should be using IAM credentials as much as possible, there are a few use cases where you have to use the root account. So you can store the root account's MFA in the TOTP generator in Vault. Vault, again, authorizes that: you can put policies around who can and cannot access it, and the whole thing is audited. So when someone logs into the root account, they have to not only know the username and password, but also have permission in Vault to read the TOTP code.

But the more exciting part is that Vault can actually be a TOTP provider. If you've ever implemented TOTP as a provider, you know it's challenging, and there are a lot of things you can do wrong. With Vault, you can easily configure Vault to be the TOTP provider — the thing that actually generates those codes and validates them on the other end. This means application developers don't have to write multi-factor authentication; you can delegate the entire thing to Vault.

So how does Vault work? Well, this is Vault's architecture. At the very top, there's an HTTP API that can be fronted with TLS — so an HTTPS API, if you will. All requests to Vault go via the API. There is nothing you can do in Vault that is not an API call, which means if you have a programming language that can make an API call, you can interact with Vault. If you have a programming language that cannot make an IPA — I can't talk — if you have a programming language that cannot make an HTTP API call, we're hiring. Everything in the middle sits inside the barrier. This barrier is the cryptographic seal around Vault: no data can flow through it unless the Vault has been unsealed, which I'll talk about in a second. Then there are a bunch of pieces in here. The core bit is that everything is a backend, or you can think of it as a plug-in. So we have different backends.
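As a rough sketch of that SSH CA flow — the role name, default user, and target host here are assumptions, not the exact setup from the talk:

    # Mount the SSH backend and have Vault generate the CA key pair
    vault mount ssh
    vault write -f ssh/config/ca generate_signing_key=true

    # The CA public key (vault read -field=public_key ssh/config/ca) goes on each host
    # as a trusted user CA key; then define a role that is allowed to sign user keys
    vault write ssh/roles/my-role \
        key_type=ca \
        allow_user_certificates=true \
        allowed_users="*" \
        default_user=ubuntu \
        ttl=30m

    # Sign my existing public key for a short duration and SSH in with the certificate
    vault write -field=signed_key ssh/sign/my-role \
        public_key=@$HOME/.ssh/id_rsa.pub > ~/.ssh/id_rsa-cert.pub
    ssh -i ~/.ssh/id_rsa -i ~/.ssh/id_rsa-cert.pub ubuntu@my-host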
There's a database backend that can generate database credentials. There's an AWS backend that can generate AWS credentials. This whole thing is pluggable and extensible, so if you have some crazy service internally, you can build a plug-in for interacting with that service. Out of the box, we support things like Cassandra, MongoDB, Postgres, and pretty much every database you can think of. And on the very bottom, we have the storage backend. This is where the data is encrypted at rest. This can be the file system, it can be something like etcd, it can be something like Consul — there are a number of different pluggable storage backends that you can choose from.

So I want to dig into that unsealing a little bit. One of Vault's promises is that no one person has complete access to the system, and we achieve this by using an algorithm called Shamir's Secret Sharing. Basically, what Shamir's Secret Sharing does is it allows us, mathematically, to take a long key and derive a set of key shares from it, such that any threshold of those shares can come together to regenerate the original key. What that means is I can take a string like ABCDE, hand out shares of it to a handful of people, and only two of them have to come back together to regenerate the original string and unseal the Vault. It's very similar to the movie scene where the bank manager and the owner have to come in and turn the physical bank vault keys at the same time to open it. That's the same way Vault works. And this prevents one person from having root access to the system, because certain privileged operations, including the initial stand-up of a Vault, require a multi-person operation to take place.

So that's enough slides. Let's actually look at things. Who's ready for a live demo? Did anyone tell you there'd be a live demo? There we go. OK, so what I have here is Terraform, which is not the talk I'm giving right now. So I'm just going to SSH into this machine that already has Vault up and running. Look, you get a nice MOTD message. Can everyone see that? OK, so in here I have a Vault server, and that Vault server is already running — I stood it up in advance, because this is only a 30-minute talk. And I can show you some basic CRUD operations. This is using that static secret backend. So I can vault write secret/foo a=b. Oh, I have to auth. So let me auth — I skipped a step. Now I can write secret/foo a=b, and I can read that data back out.

And this doesn't feel secure. How many people feel secure? It's OK, I always feel insecure. We can read this data back out at any point in time. But as an industry, we've kind of been conditioned to believe that difficulty equates to security — that things have to be hard to be secure. And that's not the case. Vault has been audited a number of times by the NCC Group, which is an independent security firm, and it always passes with flying colors. So we know the data here is secure, but it doesn't feel secure. And that's because the security industry, particularly some of the enterprise security products, has conditioned us to believe that security has to be hard. It doesn't. We get basic CRUD operations: create, read, update, delete, and list. And I can delete the secret as well. So I can delete secret/foo, and now when I read that secret back, I get no value found. So this is basic CRUD.
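For context, here's roughly what standing up and unsealing a Vault of that era looks like; the share and threshold counts are just example values:

    # Initialize Vault: five key shares, any three of which can unseal
    vault init -key-shares=5 -key-threshold=3

    # Three different operators each supply their own share
    vault unseal <share-1>
    vault unseal <share-2>
    vault unseal <share-3>

    # Confirm the barrier is open
    vault status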
But that's not that exciting, because you could do this with Redis and some off-the-shelf solution, or build your own. So we can use the transit backend to do the encryption as a service. This HDMI connection is like super jank. Come back. OK. So what we can do here is write to the transit backend. This is where we give Vault plain text data and Vault gives us back encrypted data. The way we do that is by writing to transit/encrypt/ with the name of the key that we want to encrypt against. I've created this in advance and called it my-app. And I give it some plain text data — the plain text is going to be "foo" — but Vault expects it to be base64-encoded, because there's no requirement that it's actually text; it could be a binary blob. So I'm just going to — oh, what did I do? All right. And I get back this cipher text.

Now I would store the cipher text in my database somewhere, outside of Vault — Vault doesn't have to see it again. I could put it in Postgres or MySQL and be done with it. But let's say this was a social security number or a credit card number, and now I need to process the transaction. So my app authenticates to Vault and says, hey, here's some cipher text, I would like the plain text back. So we write to transit/decrypt with the same key name — my-app is basically a symlink to a ring of encryption keys — and we give it back the cipher text. The cipher text doesn't have to be base64-encoded, because it already is. And we get back the base64-encoded plain text, and if we base64-decode it, we get back foo. So this is encryption as a service. And under the hood, if you have to deal with PCI or FIPS or any other TLA requirements, all of this is a key ring, so you can actually rotate these keys on demand, and it'll automatically upgrade or downgrade keys based off of your requirements. So it's not just one encryption key — it's a symlink, basically, to a ring of encryption keys.

It's also possible to generate things like database credentials. I already set this up in advance, so I can read from database/creds/readonly, and this will actually connect to a Postgres cluster I have running locally and generate a username and a password — just a single API call, or in this case, a CLI command. We can check that: you can see that there is, in fact, a user created in Postgres. This really long UUID was generated by Vault. That's a real Postgres user. I could give this to an app, and my app could connect to the database. In this case, it's a read-only user, because that's the SQL I gave it, but this could be basically any user. And you'll notice that it's only valid for a little bit — it's valid until a couple hours from now. That's because everything in Vault has a lease, a time associated with it, a TTL, similar to DHCP or DNS. After that time, the secret's expired.

Let me show you what I mean. Let's take a look at the AWS example. So let me quit this and read from the AWS backend. Again, I set this up in advance: I gave Vault privileged credentials, and Vault is now going out to AWS, generating an IAM user attached to a policy that I configured, and giving me back an access key and a secret key, and optionally a security token if it's configured to do that. These are real AWS credentials. If you take a screenshot and try to connect to the AWS console, you will be able to do that. Oh no, that's a really big security risk. Right now, these are valid for 768 hours, which is about a month.
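A minimal sketch of that transit round trip — the key name my-app matches the demo; everything else is just illustrative:

    # One-time: mount transit and create the named key (a "symlink" to a key ring)
    vault mount transit
    vault write -f transit/keys/my-app

    # Encrypt: the plaintext must be base64-encoded, since it might be binary
    vault write transit/encrypt/my-app plaintext=$(printf foo | base64)
    #   => ciphertext   vault:v1:<opaque blob>

    # Decrypt: hand the ciphertext back, get base64-encoded plaintext out
    vault write transit/decrypt/my-app ciphertext="vault:v1:<opaque blob>"
    #   => plaintext    Zm9v
    printf Zm9v | base64 --decode    # foo

    # Rotate the underlying key on demand; new writes use the new key version
    vault write -f transit/keys/my-app/rotate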
I want them to be available for a lot less time, so I can change that by changing the lease: vault write aws/config/lease, and I'll say that they're valid for 30 seconds with a maximum lease of five minutes. Now, that other credential is still out there, though, and it's valid for 30 days — it's already been created — so I have to revoke it. Let me read a couple more of these first, and notice that these are only valid for 30 seconds. You can see 30s in the lease duration. That means this credential is valid for 30 seconds; at the end of 30 seconds, Vault will make an API call to AWS and delete that credential. But that other one's still valid for 30 days, and I leaked it to all of you and to everyone watching on the stream. So what I can do is just revoke early — the lag here is real — everything on the aws/ prefix. This will take a second, because it actually makes a bunch of API calls to Amazon, but it deleted all those credentials. If I logged into the console right now, you'd see none of those IAM users exist anymore. So Vault is actually communicating with AWS as a service for us. You could put a UI in front of this, or your applications could directly make these calls, and Vault will just do the right thing. It's just an HTTP API call away.

And Vault can act as a CA, too. How many people know the magic flags you pass to OpenSSL to generate a certificate? Yeah, like three people. We can generate a certificate pretty easily with my handy little cheat sheet here: pki/issue/my-website, and the common name for my cert is going to be sethvargo.com. And this will generate a cert. So this is a real certificate — a public key and a private key that you could then store in your web server, or put into your application in memory, or write to disk. It's that easy. And each time I run this command, I'll get a new certificate with a really short TTL and CRL, and I can rotate these certificates over time.

So what does any of this have to do with Cloud Foundry? Well, let me jump out of here. We built this really nice integration with Cloud Foundry called the Cloud Foundry Vault Service Broker. What it does is basically mask all of Vault's complexity and provide a unified API, using the standard VCAP_SERVICES mechanism, for your apps to interact with Vault without having to really understand Vault's architecture. So what does that look like? Let me jump over here. The broker uses Vault's internal data store for persistence, so you don't have to have some external thing. It easily adapts to hybrid environments, safely handles restarts and scaling for you, and it supports additional customization outside of Cloud Foundry.

That leads into some assumptions. We don't assume that you're running Vault inside Cloud Foundry — you could be running Vault as some external service. You could run it inside Cloud Foundry, but we don't make that a requirement. Additionally, Vault may be used by other non-Cloud Foundry apps. You might be in a very hybrid, multi-cloud environment where you're using some Kubernetes, some Cloud Foundry, some Mesos, some Nomad — we don't care, and we don't make the assumption that it's only for Cloud Foundry. All instances of an application share a token. This is not a Vault best practice, but a limitation of the service broker model in Cloud Foundry. And any operations outside of Cloud Foundry require a rebind, meaning if you change a policy in Vault, we don't push that notification out to the app.
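Roughly, those commands look like this — the aws/creds role name is an assumption, and the pki role name just mirrors what I said out loud:

    # Tighten the lease on newly generated AWS credentials
    vault write aws/config/lease lease=30s lease_max=5m

    # Generate a short-lived IAM user, then revoke everything under the aws/ prefix early
    vault read aws/creds/demo
    vault revoke -prefix aws/

    # Issue a certificate from the PKI backend with a short TTL
    vault write pki/issue/my-website common_name=sethvargo.com ttl=72h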
You have to tell your apps to rebind to inherit those new policies. So those are the assumptions, or the limitations. So what does it look like? We have our dev, who's swimming upstream without a paddle. Our dev runs something like cf create-service to provision a service instance in Cloud Foundry. That fires a create-service API call to the broker. The broker then communicates with Vault and creates a series of mounts and policies based off of identifiers unique to that instance. So we create an organization-level and a space-level secret mount, meaning you can share secrets at the organization layer or at the space layer. The organization mount is read-only, and we also create read-write mounts for both instance-level secrets and an instance-level transit backend, which means each instance of your application can write and read its own secrets — the encrypted key-value store — and can also get encryption as a service via the transit backend. We don't mount other things, but you can do that in Vault; this is just the basic out-of-the-box configuration.

Then, whenever an application developer binds the service, we get the bind-service API call. The broker then makes an API call to Vault, and Vault generates a token, which is how the application will actually authenticate to Vault to make these requests. At that point the flow reverses: Vault sends a token back to the broker, which the broker maps internally to the application and instance ID. The broker starts a renewal process for the token, meaning it keeps that token alive — because, remember, everything in Vault has a TTL associated with it and will expire at some time. That also means if this broker ever goes down or the application is deprovisioned, that token expires, meaning it can't access credentials anymore. So you reduce the surface area by keeping these tokens to a really low TTL, like five minutes. The broker then generates a binding, which it sends to the application in the form of the VCAP_SERVICES environment variable. And from that point forward, the application communicates directly with Vault — the broker is not a proxy to Vault. And again, Vault might be outside of Cloud Foundry; it might be living on Heroku for all we know, and we don't actually care.

And here's what that VCAP_SERVICES might look like. We give you the address of the Vault server, we give you your token — that long GUID — and then we give you the paths to all of your backends. You don't have to know your org ID or your space ID; we just give all of that to you. So all you have to do is parse this JSON and you know exactly what you can hit.

So I have time — I have four minutes, but I started two minutes late, so I have six minutes. All right, let's do this. What I'm going to do is this instead. First I'm going to start the logs on my Vault server: journalctl -f -u vault. This will just tail the Vault logs, and I'll clear that here. And now I'm going to spin up an instance of my broker, and I need my cheat sheet again — there are way too many commands in the world. OK. So I'm going to first set up my env. Where am I? I can't have this up here because it bugs the HDMI thing. OK, so I'm setting up a bunch of environment variables here. I'll just cat this so that you can see: it's basically unsetting a bunch of env vars and then setting my Vault address, my Vault token, and my CF username and password.
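For a sense of shape, the binding payload looks something like the JSON below. This is a sketch from memory — the exact field names in the broker's output may differ slightly, and the IDs are obviously placeholders:

    {
      "hashicorp-vault": [{
        "credentials": {
          "address": "https://vault.example.com:8200/",
          "auth": { "token": "6c19d62f-..." },
          "backends": {
            "generic": "cf/<instance-id>/secret",
            "transit": "cf/<instance-id>/transit"
          },
          "backends_shared": {
            "organization": "cf/<org-id>/secret",
            "space": "cf/<space-id>/secret"
          }
        }
      }]
    }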
So I'll go ahead and run this. Let me look at those scripts — setup-env. Cool, so now all my environment variables are set up. I'm going to target a thing that I created in advance: I have a demo org and a vault-broker space, but you could use the CLI to create those if they didn't already exist. Next I'm going to push up an app. In here I have a few folders: one of them is a CF demo app and one of them is the actual broker. This is the source code — it's fully open source, it's on the HashiCorp GitHub — but I've cloned it in advance because conference Wi-Fi is terrible. So I'm going to go into this directory and push it up to this Cloud Foundry instance. I'm using PCF Dev locally, but again, this could be anywhere in the world. I'll push this app up, and I'm going to tell it not to start and to give it a random route. What this will do is push up the source code that I have locally. It'll detect that it's a Go binary, because we build it in Go — oh, it won't do that yet; I haven't started it yet.

Next I have a script that basically saves me a bunch of typing, which sets the environment variables on that broker. These are the environment variables the broker looks for: the Vault address, the Vault token, the security username, and the security user password. I'm just pulling those from my local environment, which I set before. So I'll run this, and it tells me that I have to restage my app — but I haven't started it yet. So at this point I can actually start the Vault broker, and it will download all those buildpacks, detect that it's a Go app, compile the Go app for me, and start it. It'll take a second here while it installs all the dependencies, and when this is done, I'll have an instance of the Vault broker running in this space. So it's about to start — it's going to wait for it to start.

And notice at the top there — did you see the log line that showed up from Vault? The first time the broker starts, it mounts cf/broker, which is the mount point where it stores its internal data. So we use Vault as the broker's own data store. Again, we don't rely on MySQL or Postgres or anything like that; Vault is the broker's own data store, so you don't need any external dependencies.

So now this broker is up and running and it has a URL. When I run vault — not vault, cf apps — you can see that this app is running and it has a URL. I'm going to copy that to my clipboard because I'll need it in a minute. This is in its own space, and again, the broker could be on Heroku, it could be in Kubernetes — it doesn't matter, it doesn't have to run in Cloud Foundry, but I'm doing that for this demo. So I'm going to export this as CF_BROKER_URL, which is just that URL. And now we're going to move on.

I'm going to target a different space — I have another space called example. So now I'm in a different Cloud Foundry space, and I'm going to register this service broker. So cf csb, which is short for create-service-broker: I'm going to name it vault-broker, give it the username and the password — I can't see — and the URL, which is CF_BROKER_URL, and one more thing, --space-scoped. What this is going to do is connect to that URL.
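Condensed, the broker deployment and registration steps look roughly like this. The app name, env var spellings, and the $CF_* variables are assumptions pulled from my setup script, so treat them as illustrative:

    # Push the broker without starting it, configure it, then start it
    cf push vault-broker --no-start --random-route
    cf set-env vault-broker VAULT_ADDR  "$VAULT_ADDR"
    cf set-env vault-broker VAULT_TOKEN "$VAULT_TOKEN"
    cf set-env vault-broker SECURITY_USER_NAME     "$CF_USERNAME"
    cf set-env vault-broker SECURITY_USER_PASSWORD "$CF_PASSWORD"
    cf start vault-broker

    # Register it as a space-scoped service broker against its route
    cf create-service-broker vault-broker \
        "$CF_USERNAME" "$CF_PASSWORD" "$CF_BROKER_URL" --space-scoped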
Again, it could have been a third party. And now my service broker is available in my local marketplace — my org's marketplace. You can see HashiCorp Vault is available in the broker, and now I can create an instance of that service inside my org so that my app can connect to it. So I'm going to go ahead and create an instance of that service: cf create-service hashicorp-vault on the default, shared plan, and I named it demo-vault. This is going to create a service instance, and notice that a bunch of logs appeared at the top. Again, if you remember the diagram, whenever we create the service, we create a few mounts, and you can see that those mounts correspond to the log: we're creating cf/<org id>, cf/<space id>, and cf/<instance id>. Now, this is idempotent, though — the next time we bind a service in this space, we won't recreate the org and space mounts because they already exist; we'll just create the instance-level mounts. Our app doesn't have a token yet, because we haven't bound the service; we've just created the service.

Next, we have to bind the service, but in order to bind the service, I actually need an app. Let me just run cf services. OK, the demo-vault is running — we can see that it's up with the plan and the service that I provided. I also have this CF demo app that I wrote. It's a tiny Go binary; if you don't know anything about Go, that's okay, because it basically prints out the environment variables and then waits to exit. So it's not a really complex app — it's basically like the env command. I'm going to push this up to Cloud Foundry just so that we can see that this thing works. So I'm going to push, and I have to say that the health check type is process, because it's not actually a web app. This is going to push that up and spawn that app. It'll compile it using the Go buildpack — hopefully very quickly, because it doesn't have any dependencies — and it'll start it. But this app isn't bound yet; it's just running. And if we look at the logs, which I will in a second, you'll see that the VCAP_SERVICES environment variable is empty. I'm just talking while this compiles from source. You can do it, laptop. OK, so we can ask for the logs of the demo app, and we can see that it's empty — that was the log message, there's nothing in there.

So the last thing we have to do is bind the service. What this will do is generate the Vault token, start the renewal process, and give our app the credentials to communicate with Vault. So I'm going to go ahead and bind the service: cf bind-service demo-app to — what did I call it — demo-vault. So that created the binding. Now we can restage the app, or we can cheat and just run cf env demo-app, and we can see that the VCAP_SERVICES environment variable is filled in from the HashiCorp Vault service. And if we restage demo-app, this will rebuild it, and that log message will then come back out with the values. So your applications don't have to actually know how to speak to Vault. They don't have to worry about Vault's leasing or renewal model. They just get a token and a bunch of paths that they can read and write data from, in the form of a JSON blob in an environment variable.
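Put together, the consumer side of the demo is roughly these commands; the service, plan, and app names match what I used above, and the exact health-check flag spelling depends on your cf CLI version:

    # Provision a service instance; the broker creates the org/space/instance mounts
    cf create-service hashicorp-vault shared demo-vault

    # Push the tiny demo app; it's not a web app, so use a process health check
    cf push demo-app -u process

    # Bind it: the broker hands the app a Vault token via VCAP_SERVICES
    cf bind-service demo-app demo-vault
    cf restage demo-app          # or just peek at the environment:
    cf env demo-app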
So we think this abstraction is really nice, because it allows your applications to adopt modern security practices without having to think about it, which is really one of the paradigms of Vault: we want to provide encryption as a service as quickly as possible without a lot of overhead. So here you can see the logs from the service — you can see that it found that JSON, and your application could parse that and do whatever it wants with it.

I'm over time. What time was I supposed to end, 5:20? I don't know. Let me just show you this real quick. This is a real Vault token — that thing I've highlighted there is a real Vault token. I'm going to jump over here, export my VAULT_ADDR to vault.hashicorp.rocks, and auth with that token — and Little Snitch tells on me. So I'm going to auth with that token. This is a real authentication that just took place against the public-facing Vault server. That's a real token that has been assigned the cf UUID policy, meaning it can access the things that policy defines. So I should be able to write to cf/<that UUID>/secret/foo — and I get a response back — and I should be able to read that data back out. Cool, I can see that this is working. But if I try to do something crazy, like write to someone else's UUID's secret/foo, I get permission denied, because the policy that was defined doesn't have permission to write to those other UUIDs. This is a fake UUID, but you can imagine it's another UUID in the org.

So let me show you that. There's a policy that was created — this is the policy that the broker created in Vault. As I said in the diagrams, it gives list for the secrets, it gives list for the org and the space, and then read-only for the top-level org. And this is all customizable; I just used the out-of-the-box configuration here.

So that's it. I think I'm over time. This is a public repo — all of the code, including the Terraform configs to spin up this Vault cluster, is public. If you have any questions, you can tweet at me: it's just @sethvargo on the internet. Why can't I type? It's just @sethvargo on the internet. I already tweeted out the link to all of the materials; it's just sethvargo slash cloud-foundry-demo-app or something like that — Cloud Foundry with Vault. If you have questions or pull requests or issues, please let me know. Thank you for coming, and sorry I went over time.
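For reference, the policy the broker creates has roughly this shape — a sketch of the idea, not the exact generated output, so the real paths and capabilities may differ:

    # Read-only access to secrets shared at the organization level
    path "cf/<org-id>/*" {
      capabilities = ["read", "list"]
    }

    # Read-write access to secrets shared at the space level
    path "cf/<space-id>/*" {
      capabilities = ["create", "read", "update", "delete", "list"]
    }

    # Full control over this instance's own secret and transit paths
    path "cf/<instance-id>/*" {
      capabilities = ["create", "read", "update", "delete", "list"]
    }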