Let's get into the big topic of open source, something we actually get to say this about: this is so awesome. We are an open culture that is actually able to express that process that a developer... How is the Kubernetes ecosystem really going?

All right, and we are live. Welcome everyone to another episode of GitOps Guide to the Galaxy. I will be your captain on this journey, Christian Hernandez, technical marketing at Red Hat. My co-captain today should be pointing to the right, a very special guest here. I should point that way; it's always mirrored, so I'm not sure. I have Rosemary here from HashiCorp to talk to us about Vault. Rosemary, tell us who you are and how you're going to help co-captain today.

Yeah, I'm here to help you navigate the terrifying waters of secrets, aka how do you not accidentally show your password to everybody. I'm a developer advocate at HashiCorp. I've worked quite a bit in various roles: networking, cloud, other things. I've been working on a book about infrastructure as code, so I've pretty much defined my career as a jack of all trades and master of absolutely none of them. But it's been a really fun experience, and secrets management happened to be a very interesting, near-and-dear topic to me, because I was that unfortunate soul who accidentally exposed...

Oh, no. Well, you know what? It's good that it's one of those things you learn from your mistakes, right? And then you become a really good advocate. I think the best advocates are the ones who made the mistake before, because you can say, I've been in your shoes. So again, very excited to have Rosemary on. The question I constantly get asked is about secrets and GitOps. And since this is a GitOps show, it's a topic that always comes up.
I'm not sure if you were at KubeCon when it was in LA, but people were constantly stopping me in the hallway track asking about secrets management. It would be the first thing. I'm pretty sure that, besides Terraform, Vault is probably the most popular thing over at HashiCorp, I would imagine.

Yes. So HashiCorp Vault is a secrets manager. It's really unique and a lot of fun to work with. In some ways it's a complicated tool, but it makes secrets easier in other ways, right? Because secrets management breaks infrastructure as code. It breaks immutability. And in many ways it breaks GitOps, because you're using a mutable approach: you're treating secrets mutably when you rotate them dynamically, and that breaks this declarative, immutable infrastructure-as-code approach. So there's a really interesting tension between the two. But the whole purpose of Vault is basically to store and manage these secrets, secure them, but also rotate them for you and revoke them according to the rules that you set.

Yeah, we're already getting deep into the weeds here, but it's definitely a big topic. It's always discussed and always popular, especially deep in the GitOps world; it's something we're constantly talking about. Even if you've covered it before, people just keep asking. So before I hand the helm, hand the reins over to Rosemary, I do want to go over a couple of things real quick. One is that the CFP for GitOpsCon is still open. I put the link in the chat. It's open until Valentine's Day, actually; that's the last day, so the CFP closes in a couple of weeks.
Being part of the program committee, I can tell you that we're going to have two tracks this time around, so there's more of a chance for your talk to be accepted. So please feel free to submit a talk. Rosemary, maybe you can submit a talk; maybe I'll see you there in Spain. Who knows, right? If you do a secrets management talk, I'm pretty sure there'll be a ton of people in it. So again, GitOpsCon: this is going to be our third one. It's crazy how fast this has grown. The CFP is still open, so check that out.

Another thing is that yesterday, though it feels like last week, that's just how these weeks have been... What is time? Yeah, what is time? Who knows? Waleed, by the way, one of our regular viewers, asks: does this mean you're coming to Spain? Actually, yes, I am. Being part of the program committee and being part of Red Hat, I'll be there for KubeCon and for the day zero event. So Waleed, if you're going to make the trip to Spain, I'll be there. Maybe you can show the cool socks we sent you that one time.

We did a release yesterday of OpenShift GitOps. I put the link in the chat, along with some of the features. We upgraded to Argo CD version 2.2.2, triple two, and upgraded to Helm version 3.7.1. I think the biggest thing is that OpenShift GitOps has added health status for DeploymentConfigs, Routes, and OLM Operators. Before, the health status was really generic; now it's much more specific. You can go ahead and check that out. I know we have a big topic to cover. Rosemary said we'll need about an hour, and we'll probably go over.
So without further ado, I'll hand the reins over to Rosemary to start. If you have any questions, feel free to drop them in the chat; I'll send them over and step back, because really everyone is just waiting to hear about and talk about Vault. So go ahead.

It's so popular. There's a joke with one of my teammates about which is more popular, and Vault is the cat, right? Everybody gets really excited for it. So I guess this is a two-part series, because I told Christian there's a lot to talk about with Vault. If you haven't used Vault before, there are a lot of concepts and a lot of terms. My goal is that by the end of this, you look at some of these terms and they're a little less intimidating, and you can at least identify the workflow. So if you're using Vault in your OpenShift cluster, or using Vault for anything else, at the very least you're not muddled by all of the concepts and terms that come with working with Vault on Kubernetes.

Just as a note: Vault is a secrets manager. It stores sensitive information. You can choose not to deploy Vault on Kubernetes if you want; some folks choose, from a security perspective, to deploy it separately. You can still connect to a Vault server and retrieve secrets; it's just not in a Kubernetes cluster. Today I will show it in a Kubernetes cluster, running in its own namespace, but there are ways to set up Vault outside of the Kubernetes cluster and still connect to it from Kubernetes. So there's a caveat to all of that.

I won't go too far into Vault architecture, but there are two pieces to it. The first is the Vault server, and then there are the Vault clients. The Vault server is the control plane. It houses all of the secrets.
It manages, stores, and encrypts the secrets, and you'll see some of that today as I deploy it on Kubernetes. There are also clients and agents. The clients and agents read information about specific secrets so that you can proxy it to an application. There are no slides; it's a lot easier to understand when you work with it and get hands-on. So I will start. Oh, go ahead.

There's a pretty good question here, and I'm not sure if you're going to go over this: the pros and cons of having Vault external versus internal, outside of a Kubernetes cluster versus inside. I'm pretty sure there are pros and cons, but are you going to cover what those are?

I wasn't going to, but I can certainly answer that now; I think it's a great question. There are certain reasons why folks don't want to run it in Kubernetes. For example, they want to isolate it onto a certain subset of machines from a multi-tenancy perspective; they want to make sure they lock it down. Vault has a couple of different storage options. When Vault has secrets and configuration, it needs to store them somewhere, and that storage backend can differ depending on how you set it up. Some folks don't necessarily want to run it on Kubernetes because there we run it with integrated storage, and perhaps they want to use something else as a backend. They could use HashiCorp Consul as a backend, or some other database. So those who don't want the integrated storage option choose to run it outside of the Kubernetes cluster, as something very predictable on a virtual machine, the way they're used to managing it. Other times, they're trying to proxy secrets that are not related to Kubernetes.
In an environment that doesn't always involve a Kubernetes cluster, you might choose to manage the secrets for many other things, databases for example. We'll show that today: you can manage database secrets using Vault, and those databases don't necessarily run in Kubernetes either, so you need some other endpoint for them to access. So there are a couple of reasons why you would run it externally, from an access standpoint as well as a security standpoint; it depends on your posture.

Now, why would you run it internally? It's pretty easy to install on Kubernetes. There's a little bit of friction in terms of upgrades and management, but if you're familiar with managing and upgrading stateful components on your Kubernetes cluster, the pattern is there for you, and it's a little easier to manage because you're familiar with that approach. So there are pros and cons to both. There's a link I'll float to you when I have more windows open and can find it, which describes this in far more detail, including the considerations from a security and storage perspective as well as a Kubernetes perspective.

Yeah, I think you have to keep in mind that Vault is almost a secrets management platform, independent of Kubernetes. You can use it with Kubernetes, but you can also use it for everything else in your environment. In the beginning, showing my ignorance, I thought it was a Kubernetes-specific thing, and as I read more about it, I realized it's generic: if you have a secrets management system, you're going to want to use it for your entire enterprise. So that's something else to keep in mind. Am I using this for Kubernetes? I think that's maybe another consideration.
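To make the storage-backend options mentioned above concrete, here's a sketch of what the relevant stanza in a Vault server configuration file can look like. The paths, node ID, and Consul address are illustrative, not values from the demo:

```hcl
# Integrated storage (Raft) -- the option typically recommended on
# Kubernetes, backed by a persistent volume mounted at this path.
storage "raft" {
  path    = "/vault/data"
  node_id = "vault-0"
}

# Alternative: an external backend such as HashiCorp Consul, often
# chosen when Vault runs outside the cluster on virtual machines.
# storage "consul" {
#   address = "127.0.0.1:8500"
#   path    = "vault/"
# }
```

Only one `storage` stanza is active at a time; which backend makes sense depends on where the server runs and what your team already operates.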
Am I using this specifically for Kubernetes, or am I going to use it in general across my environment? I'm pretty sure other regulatory things come into play too, so the pros and cons are probably more political than anything else, I would imagine.

Yeah. And a couple of folks also want secrets management on the edge, on edge computing. You can't run anything process-heavy on edge networks or edge computing, so it makes a lot of sense to say: let me run the server somewhere else, and on the edge, if I'm using Kubernetes there, just run the clients and access the information from Vault. There are some complexities to that which I won't go through, because replication is a whole rabbit hole.

I would say crawl, walk, run, versus trying everything at once; just get the basics going first. Exactly. So in this situation I will be deploying Vault, the server and the entire platform, self-contained on Kubernetes. Keep that in mind. Some of what you'll see today shows a secure way to do it, but it's not everything you can do to secure Vault on Kubernetes either.

Cool. Perfect. So I'm going to start sharing my screen and hopefully you'll be able to see the text. Oh no, the text. This is not helpful. I could kind of make out that you did get pods, so maybe if I start translating it could help. I don't know what is going on with this today. What I'm going to do is send a Live Share to you, Christian, and if you open it up on your screen, the screen share may be better; at least that way folks can see what's going on. So I'll narrate while Christian tries to join the Live Share. Effectively, what I've done is use CRC, so it's all on my machine.
This is not a live cluster, so we've reduced any network-related activity. I've created two namespaces: one for Vault and one for my application, which I fondly call expenses. It's an expense report API. It's not the most glamorous API, but it returns expenses that you log. We're using it today because expenses is a Java application that retrieves information from MySQL. This is important because it's how we're going to integrate everything.

Okay, so the first thing we're going to do is configure Vault. Vault has a couple of different configurations, and this is going to be tough for folks to see... Yeah, I'm still trying to join the Live Share. All right, get pods for Vault... Okay, this is what happens when... let me try to switch screens. Hold on, everybody. Maybe that's better, or worse, I can't tell. No, it's still the same. Oh, here we go, it's doing something. Okay, I think I got it. I can share my screen if that helps. Yeah, let me try that. I don't know what is going on; we've fiddled with the resolution, we thought it was a stream issue. Okay, there we are. Let me make this a little bigger and close this right here. Okay, terminal now. Thank you, Christian, I appreciate that. Teamwork, right? This is technology. If you could zoom in a little bit more... Yeah, sure. I just zoomed in on the wrong screen. It's a little better. For those who are watching, is that better? Waleed says it looks better. Visibility achieved! We got a badge, right? Yay! If you could press the little arrow next to the X on the terminal, it goes full-screen... Apparently it doesn't want to do that today. All right, that's fine, we're going to work with it. Someone says that your plant game is on point.
They're commenting on your greenery back there. It's not real... well, it is a jungle back there. What they don't see on screen is the flies; there are flies everywhere.

Okay. The first thing, as I mentioned before: I created two separate namespaces, one for Vault and one for my application. You probably do want to put Vault in its own namespace from a component view. There aren't too many components to it, but then again, you want to make sure you have specific namespaces in Kubernetes.

The very first thing is to install Vault. Vault ships as a Helm chart, so as you can imagine, it's a `helm install`. I'm going to start the install so we can have some of this up, and I'm pinning the version of the Vault Helm chart as well. I'm going to deploy a couple of other things; I'll just copy this so you don't have to watch me type. There are two things I'm installing today, because there are two different ways you can inject secrets into your Kubernetes applications.

One is called the Vault agent. If you deploy the Helm chart with defaults, that's the configuration it supports: it gives you the Vault agent injector, which injects a Vault agent sidecar into any application that needs one. The second way is called the Secrets Store CSI provider. With the Secrets Store CSI driver, you don't have a sidecar agent; you can imagine we're going to have an architectural discussion about why you might not want an agent. These are the two ways you can use Vault to inject secrets into your application.

There are two methods, is what you're saying. There are two methods. I'll also get to the caveats of each method and why one may be preferable to the other. Just as a hint: one may not be as secure. There you go.
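The install steps being described might look something like this. This is a hedged sketch: the chart version, release names, and namespace are illustrative rather than the exact ones from the stream, and the commands assume you're already logged into the cluster with the namespaces created.

```shell
# Add the official HashiCorp Helm repo and pin the Vault chart version.
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

# Install the Vault server plus the agent injector (the chart's
# default injection method), with an OpenShift-specific values file.
helm install vault hashicorp/vault \
  --namespace vault \
  --version 0.19.0 \
  --values vault.openshift.yaml

# The second injection method relies on the Secrets Store CSI driver,
# which ships as its own chart.
helm repo add secrets-store-csi-driver \
  https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install csi secrets-store-csi-driver/secrets-store-csi-driver \
  --namespace vault
```

Note that recent versions of the Vault chart can also enable Vault's own CSI provider via a `csi.enabled=true` value, alongside the driver above.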
That's a big caveat. At this point, I'm going to go back to the files. If you could click the little dropdown by the terminal... there we go, perfect. I don't know if you can follow along with me, but I'm going to open up vault.openshift.yaml under the Helm directory. There we are.

You noticed I deployed with Helm; that's exactly what I was deploying. The first thing is that the Vault Helm chart does deploy on OpenShift; there are some OpenShift-specific configurations. Naturally, one of the values you'll have to set is `openshift: true`. Keep that in mind. There are also a couple of images you can set, specific to the Red Hat registry. This is pinning to Vault 1.9, and then there are a couple of injector images in use here. vault-k8s is a special image used to do authentication and retrieve the secrets for your application. Keep in mind that a lot of the orchestration comes from the vault-k8s binary, and then the Vault binary itself is needed so it can actually make API calls against Vault. So there are a couple of images you'll need.

One thing I set that is unique to this particular values file is `ha.enabled: true`. If you use the Helm chart, you have two options. You can run Vault in dev mode, which is really useful for testing: you can bring it up and work with it, but it doesn't give you a good sense of how you operate Vault, and some security measures are disabled as part of dev mode for the Vault server. As a result, I'm going to show you the non-disabled configuration: not true production, but an almost-production configuration, so you actually understand the Vault operations aspect. As close to production as we can get for a demo. I only run it on one node, so I can only deploy one replica. I like that it says HA, but with one replica you're like, wait. It's a misnomer.
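A vault.openshift.yaml along the lines being described might look like the sketch below. The image tags are illustrative assumptions, so check the Red Hat registry and the chart's documentation for current ones:

```yaml
global:
  openshift: true            # required when deploying the chart on OpenShift
injector:
  image:
    repository: registry.connect.redhat.com/hashicorp/vault-k8s
    tag: "0.14.2"            # vault-k8s: orchestrates auth and secret retrieval
  agentImage:
    repository: registry.connect.redhat.com/hashicorp/vault
    tag: "1.9.3"             # the Vault binary that makes the API calls
server:
  image:
    repository: registry.connect.redhat.com/hashicorp/vault
    tag: "1.9.3"             # pinning to Vault 1.9
  ha:
    enabled: true
    replicas: 1              # single node here, so only one replica
    raft:
      enabled: true          # integrated storage on persistent volumes
```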
I thought maybe I'd deploy a three-node cluster and do true HA, but there's complexity there; we're going to remove that complexity. It is a Vault server, and it will run one instance. The other thing it will do is run integrated storage. I mentioned before that Vault has a couple of different storage backends you can choose. The one we recommend on Kubernetes is integrated storage, which uses persistent volumes: it writes the information to this integrated storage mechanism. That's where you see `raft` with `enabled: true`. Those are a couple of the defaults.

There's a question, and I don't know the answer, I don't either: Paul asks, is HashiCorp going to have a certified Operator for Vault? I think right now the preferred method is Helm, and there is not an Operator that I know of in the works. I know there are a couple of community Operators out there; some are for managing and deploying Vault, and others are for managing and deploying Vault configuration. Those are community-maintained. I don't know if we're going to certify them, and I don't know if we'll fully support one yet. We are pretty much supporting the Helm chart for now.

Then I have a question. You said you're using PVs; is the data also encrypted at rest, or is that something you have to do on your own? It is encrypted at rest. I'm glad you asked, because this gets to the interesting architecture of Vault. I deployed Vault, and if I look at the pods... our faces are probably in the way. Perfect, thank you. When I get the pods, you'll notice there are two Vault pods running. One is the agent injector, as I mentioned before. This is what watches for applications with annotations that need a Vault agent, and injects it for them. Then there's vault-0: this is the Vault server. It's deployed as a StatefulSet.
You see this vault-0, and there's something not very promising: it's zero out of one ready. A little concerning. The reason is that when you first initialize Vault, it is something called sealed. For lack of a better word, think of it as the launch codes for something. When you first bring Vault up, anytime you restart it, anytime an outage happens, Vault comes up sealed. It seals itself, and all the data is encrypted at rest. The data is encrypted at rest anyway, but this is sealing Vault itself: you cannot access that data, you can't access anything in the Vault config, until you unseal it. It's a fail-safe mechanism: if someone decides to restart the thing, they can't just go in and access the secrets. You have to unseal Vault, and this is the infamous unseal process. As I mentioned, it's like launch codes: you distribute these unseal keys to various parties, and the different people who have the keys have to come in and each issue, effectively, a `vault operator unseal` with their key. So I'll actually go through that process. It sounds very complicated, and it is quite involved, and it's a manual step. You can do it automatically, it's called auto-unseal, but there are some security reasons why you would not want to do it automatically.

So you have to actually execute a command inside the container? That's correct. What I'm running is a `vault operator init`. This initializes the Vault instance. You don't have to run this if you already have Vault data; you only need to run it if you have a new Vault server, a new Vault instance, a new Vault cluster. I am not going to show you my unseal keys, because that would be dangerous. That's right, that would be very bad. You might actually see some of them, but for the most part I'm saving them into a file.
I'm not letting them print to the screen, so you don't see my unseal keys. But if you look at `vault status` (yes, I'm going into the container of the Vault server), sealed is true. This means I've initialized it, but the Vault instance is completely sealed, so no one can really access it. I can't write anything to it; I can't read anything from it. So I'm going to unseal it, and the way you unseal Vault involves a command called `vault operator unseal`. When you run it, it asks you for an unseal key, which you then provide. I've pasted it; you can't see it because it was hidden. When you do that, you'll notice the unseal progress is one out of three.

You can set up Vault with any number of unseal keys. If you decide that ten people in your organization should have keys and ten people must unseal it, you can configure that. The standard threshold is three: you must have three keys to unlock it, and you might say there are five keys in total. So it's a little like those access codes where everyone has to turn their key at the same time; everyone has to have a key in order to get in.

So this is the second key. Yep, this is the second key. And this is the third key. Once the third key goes in, it unseals Vault, and that decrypts my ability to write to and read from that storage. Someone says they're like Infinity Stones: you need to collect all of them in order to wield the power of Vault. Exactly. I was laughing because someone joked that you might as well lock these unseal keys, on paper, in an actual safe. And some people do that! So now you'll notice that sealed is false. We have unsealed Vault.
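The init-and-unseal sequence being walked through looks roughly like this. It's a sketch assuming a running cluster with the `vault-0` pod in a `vault` namespace; the key-share numbers match the three-of-five scheme discussed, and the redirect keeps the keys off the screen:

```shell
# Initialize a brand-new Vault: this generates 5 key shares, any 3 of
# which can unseal it. Capture the output somewhere safe, never on screen.
oc exec -n vault vault-0 -- vault operator init \
  -key-shares=5 -key-threshold=3 > unseal-keys.txt

# Check the seal state: it reports "Sealed: true" until the threshold is met.
oc exec -n vault vault-0 -- vault status

# Run this three times, supplying a different key share each time;
# the output shows the unseal progress climbing toward the threshold.
oc exec -n vault -it vault-0 -- vault operator unseal
```

In practice each key share would be held by a different person, so no single operator could run all three unseal commands alone.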
The process is called Shamir's secret sharing; if you want to learn more, the Vault documentation has plenty of interesting information about the theory behind it. As an operator, the idea is that you have to copy those keys and distribute them accordingly: you choose who gets them, how you distribute them, how you keep them secure. And if Vault goes down, people have to come in and meet the threshold of X number of keys before it's unsealed.

There is an easier way to do this, called auto-unseal, which I alluded to before. Auto-unseal effectively keeps the unseal key in a backend of your choice: Google's key management, AWS key management, a lot of different key management options. Vault will use that key to unseal itself. So when you deploy on Kubernetes, you can configure Vault to use auto-unseal with the key management backend you're looking for. The downside, from a security view, is that anytime someone restarts the Vault server on the Kubernetes cluster, it comes back up and they can go in and read the secrets. So there are some concerns. You may not want to do it for production; you might just do it for development environments, or you might decide your security posture allows it: you'd rather prioritize availability of the secrets and the applications over the direct security of the secrets manager. So it kind of depends.

You mentioned production, which implies there are other environments. Do you find that organizations use one Vault system for all environments, or do they have individual Vault installations? I guess my question is: can Vault be multi-tenant, in a way that handles many environments, or do you see people installing one for each environment?
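For reference, auto-unseal is configured with a `seal` stanza in the Vault server configuration. This AWS KMS example is a sketch; the region and key alias are hypothetical placeholders:

```hcl
# With this stanza, Vault asks AWS KMS to decrypt its root key at
# startup instead of waiting for operators to enter unseal key shares.
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "alias/vault-unseal-demo"
}
```

Equivalent stanzas exist for other backends (e.g. `gcpckms`, `azurekeyvault`), which is what "the key management backend you're looking for" refers to above.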
It depends on whether it's the open source or Enterprise version. With the open source version, most of the time I see folks deploying separate Vault clusters, one for development and one for production. I still like that model. Yes, there's a cost to running multiple environments of Vault, but if someone manages to compromise a key, it reduces your blast radius: if someone compromises the secrets or keys in development, they're not going to affect production. You can still revoke them, rotate them, and manage them independently.

In Vault Enterprise there is a way to do segmentation with something called Vault namespaces. Not the same as Kubernetes namespaces; these are Vault namespaces, and what they do is isolate certain secrets to certain parts of Vault, so you can control access to them as well. That's, I guess, the equivalent of multi-tenancy. There's also Vault federation, also part of the Enterprise feature set, which involves replication. In that situation, you may decide to synchronize multiple Vault clusters because you have a very large secrets management architecture that you want to keep in sync. But for the most part, with the open source version, it's probably better to deploy separate Vault clusters for different environments. Gotcha.

There are actually a couple of really good questions coming in, so what I'm going to do is sanitize this one a little, or ask it in two parts. Usually you talk about Vault managing secrets for Kubernetes. Can it also manage things like ConfigMaps, and not just secrets?

No, it is not a replacement for ConfigMaps. You can do it; there is a feature. The basis of the Kubernetes integration for Vault is a neat and, for some reason, not very well-known feature called Vault Agent, and Vault Agent is almost like a templating tool.
So you could use Vault Agent to template out configuration and then somehow synchronize that to a ConfigMap, but that would have to be outside of the existing integration; it's not part of the Vault Helm chart.

As for the secrets portion: Vault is, I guess, a replacement for Kubernetes secrets, mostly because of the way Vault treats secrets, mutably. The general principle is to avoid writing the secrets to Kubernetes at all, whether in plain text or even encrypted. That's why you would prioritize Vault Agent. Now, the exception is the CSI driver, which is what I was mentioning before. The tradeoff is that the CSI driver does write to a Kubernetes secret, which means you can see the username and password in plain text there. So, depending on how you run the Vault integration on Kubernetes, it is either a direct replacement for Kubernetes secrets, or it's plugging into the Kubernetes secrets API. So I guess it depends, with respect to secrets.

The other question is a really great one, and I'd like to hear your thoughts on it: if someone's on a public cloud, for example AWS, why would you use something like Vault versus AWS Secrets Manager?

Yeah, I get this question a lot. This is where you have to make the decision from a level-of-effort point of view. AWS Secrets Manager works if you're using AWS. If you're using something like Amazon ECS, it easily integrates with Secrets Manager and the secrets there. But the downside is that Secrets Manager doesn't rotate everything for you; for example, it's not going to rotate and handle the issuance of access keys or STS tokens, that's a separate service. A lot of people favor Vault over the cloud providers' native secrets managers because it works across a lot of different platforms.
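Circling back to the CSI driver mentioned above: with Vault's CSI provider, the mapping from Vault paths to mounted files is declared in a SecretProviderClass resource. This is a hedged sketch; the resource name, namespace, role name, service address, and secret path are all hypothetical stand-ins:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: expenses-db
  namespace: expenses
spec:
  provider: vault
  parameters:
    vaultAddress: "http://vault.vault:8200"   # in-cluster Vault service
    roleName: "expenses"                      # Kubernetes auth role in Vault
    objects: |
      - objectName: "db-password"             # filename inside the mounted volume
        secretPath: "secret/data/expenses"    # where the secret lives in Vault
        secretKey: "password"                 # which key within that secret
```

A pod then mounts this class via a `csi` volume, and (optionally, via the driver's sync feature) the value can also be written into a Kubernetes secret, which is the less-secure tradeoff discussed above.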
You can do it with databases, whether from a public cloud or otherwise; you can do it across different public clouds; you can use it to manage everything from Azure service principals to GCP service account keys to AWS access keys; and you can use it to manage certificates. Vault is pretty expansive across a lot of use cases, which is why people looking to manage their secrets outside of a single public cloud or a single vendor enjoy it as a tool: it helps ease those workflows across multiple platforms.

Yeah, and here at Red Hat we're all about the hybrid cloud, and that fits into that whole aspect of hybrid cloud: not being tied down to a platform, being able to move your workloads. And as you alluded to, and as I said before, it's a management system, so you have to think about rotation as well. A lot of people talk about secrets management in terms of encrypting your secrets or keeping them out of sight, but you also need to rotate them, and that is also a level of effort. Rotating database passwords is actually a pretty heavy task, because there's a lot involved, from the actual database itself all the way up to the application layer, and having that done for you is very important.

Yeah, and I did notice in the chat that Cole pointed out it can be used for more than secrets management, and that's true: you can also use it to encrypt and decrypt data, for example data in transit and data at rest. There are a lot of things in Vault; a lot of folks don't just use it only for storing secrets, although that is something it does help with. Yeah, that's a big part of it too; it's important. Yeah, exactly.
All right, so I'm not going to run this command because it will show my token; Vault is secured by a token. Now that we've unsealed Vault, we should be able to store secrets and set it up, right? I think that's the natural next step. So what I'll do is actually start setting up Vault by CLI. I'll paste this command, I will not run it, and this goes into the Vault server; I'm going to open up a shell so I can start configuring some of the Vault components we'll need. I won't go through every Vault component, but I'll show the two main ones you're going to need for Kubernetes. The root token is your access key, it's like your password to get into Vault, so thus I will not show it. I will try my best not to show anything today; that is my goal. That's always the goal. It's always when you accidentally show your AWS secret key and access key that you go, well, I guess I need to rotate those now. Exactly, that's how I was; I don't want that anxiety. So I'm in the Vault server, and there's something important to recognize about Vault: Vault has two main kinds of plugins. If you're familiar with, let's say, Terraform, Terraform has providers: Terraform itself is the engine, and the providers carry all the logic, capturing everything you're going to need to do with the target API. Similarly, Vault has a plugin ecosystem. There's the Vault server: Vault can store things, it can rotate secrets, but it doesn't know how to rotate, let's say, a database credential. Vault in itself just provides the machinery; every bit of logic needed to rotate a secret, to manage it, to store it, is built into a plugin. There are two types of plugins for Vault: authentication methods and secrets engines. We're going to first configure an authentication method, which is exactly what it sounds like: a way for you to authenticate to Vault. A token by itself is not a great form of authentication; sharing tokens is just not very convenient for anybody. If you have hundreds of people using Vault, hundreds of applications using Vault, you don't want to be distributing tokens left and right. So Vault plugins have this layer of abstraction called an authentication method that handles authentication of some identity to Vault, and that allows the identity to read secrets; that sequence is an authentication method. Today's is the Kubernetes authentication method, which uses service account JSON Web Tokens (JWTs), so you can imagine it's very much built into the specific target you're looking for: we're using a service account to help an application authenticate to Vault. So does it create that JWT on the fly, or is it per service account? Does it manage that as well, I guess, is the question. It does not manage the JWT, and that's a great question. It doesn't manage the JWT, it doesn't issue the JWT; it links the JWT into, effectively, the OIDC-style authentication flow within Vault. Yeah, exactly. So the first thing you do is enable the auth method. Enabling an authentication method is always "vault auth enable" something; in this case I'm enabling an authentication method at the Kubernetes path. Then I will actually copy this, because it's a very long one, and create the config for the authentication method. Oh geez, all right, we'll go through this; it was a really long string. By the way, everything is in the repository, I just dropped the link in the chat. Excellent. Okay, so the first thing is that I need to get the service account token. You can do this outside of Vault, you could use Kubernetes; in the case of Kubernetes you would have to find the right service account
in Vault and then retrieve the token; this is just an easier way to retrieve the service account token that the Vault server itself is using. You also need the Kubernetes host, and you also need the certificate authority. All of these things need to be added to this Kubernetes config in order for something, in this case Vault, to authenticate from Kubernetes to the Vault server; some service account must authenticate. The other thing to keep in mind is that depending on the Kubernetes cluster you're using, the issuer might change, so definitely check the issuer. Yeah, I have made that mistake a few times. Okay, so we're going to exit. I've already configured the Vault Kubernetes authentication, so this will allow me to link any kind of service account to Vault. It needs a couple of permissions; the Helm chart sets them for you, so that's good, you don't have to do it. I believe there's a ClusterRoleBinding related to token review, but I don't entirely remember. So there are things you will need to give Vault access to in order for this to work, and it should all be built into the Helm chart. Okay, so that is an authentication method. Should I pause and see if there's anything? There was a question: does Vault rotate secrets only for databases and AWS secrets? What if I need to rotate a token? That's a good question. I guess it depends which specific token, but since it's a generic management system, you can rotate tokens; there would need to be a plugin for that. So yes, it would need a plugin, and it depends on the token. For JWTs, there's one called the identity secrets engine, so if you have JWTs it would be the identity secrets engine; for other tokens, say Consul tokens, there's a plugin for that. So there must be a plugin in existence in order for Vault to rotate the token. If you can't find the secrets engine, it's likely that there's a generic protocol, like OIDC, etc., that it may or may not be supporting, so you can look into that too; but it depends on the token and on the target API. Cool, let's see here, I don't think there's anything else. Yeah, for those who missed it, I did post the link to the repo that she's following along with, so if you want to take a look, it's in the chat. Yeah, I know this is very long and a lot of explanation. This is where we started with Vault: we configured the authentication method, so that a service account in Kubernetes that we say is okay... like, we have to tell Vault, this Kubernetes service account is okay to authenticate to me. As long as we configure that in Vault, we don't have every service account in Kubernetes authenticating to it, right? So we have the authentication method configured. The next thing we're going to do is configure a secret. The first secret we're going to configure is the root database token, and this you may accidentally see, so I will be very careful, but I can't guarantee that I won't accidentally print it out. Vault has a second type of plugin called a secrets engine. The secrets engine holds the logic for the renewal and revocation of certain credentials; someone, usually either a HashiCorp developer or the community, will have built this code to handle the renewal of a secret or its revocation, right? So they delete the secret or create a new one. All right, so the first kind of secrets engine we're going to enable is a key-value store; this stores a set of keys and the values that go with them. Oh yes, that would be... that would be me. Can you switch to the other terminal? I think... yeah, there we go. Okay, so I started a Vault port-forward. You could put Vault behind a load balancer; I'm just running this locally, so that's not really needed here. Oh yes, one moment please, there's an export I need to run... yes, I
have to export the Vault token. So what I don't show you is that you have to export the Vault token, and you also have to set the Vault address for the CLI to connect, so just keep that in mind. Oh no, here we go... all right, a 403, what does it want now? Well, maybe we can check the logs. Everybody, sorry, close your eyes, we're destroying this... okay, all right, there we go. I told you, one time or another we were going to have it. Yeah, it was bound to happen. All right, so the next thing: we've enabled the secrets engine at expense/static. Vault works on API paths, so what you do is enable engines at certain API paths. In this case I have a key-value store at expense/static, and I will store a MySQL database login password at expense/static/mysql, so that's the key and the value. You'll notice the created time, so I've created it, and now I'll get the secret. Yeah, here we are. So you'll notice now I'm not just saying that I stored the secret; I have the secret stored, and there's the database login password. Anytime someone needs to get it, they go to this path, expense/static/mysql. The other thing you want to do when you start configuring something like this is configure a policy. As I mentioned before, you don't want everybody accessing every secret, right? You want some kind of access control to make sure no one is going to go and retrieve secrets they're not supposed to. It's like Kubernetes RBAC: there's an independent ACL system in Vault that works similarly to Kubernetes RBAC. It effectively does API authorization, checking whether or not someone has the policy to, let's say, read, update, create, or delete a secret or a secret path, and you can specify that. So here's a policy. What it will do is allow me read access to the root password that I put into Vault, and then I'll create a Vault policy for that, so it's "vault policy write". And now I've created a Vault policy. What this policy does is restrict any identity attached to it to just the database root password. I won't be able to, let's say, get someone else's password or some other team's password; I can only access this database password, for example. Okay, and finally, the last thing we're going to do: we're going to create a database on Kubernetes, gasp. We have this static secret, and we're going to use the root password to create the database on Kubernetes. So I'll run a very, very, very large command, and what it does is bind a service account, expense-db-mysql; this is the service account my database will use in Kubernetes. It binds the namespace, in this case expenses, and it attaches the policies. If you're familiar with most of the cloud providers, you're attaching the policy to an identity; in this case my identity is the Kubernetes service account plus the namespace, and I'm attaching this policy that only allows reading the root password. And the TTL option, what is that? How long the access is available for, is that what that is? Yeah, that's correct. The authentication flow for Kubernetes is that once a service account authenticates, Vault issues a token, not the root token that I accidentally printed out, but a Vault token, and there's a certain period of time in which that Vault token is valid. You can say Vault leases that token out to this identity, the expense-db-mysql service account, for that period of time, and then you have to have the service account reauthenticate. If the service account isn't using that token, right, it's not retrieving any kind of passwords or credentials, then it just sort of expires
and then you have to reauthenticate again. But that's usually handled on the side of the Vault client in Kubernetes; for the most part it does this itself, it's very much automated. What you'll see as a result is that if someone gets the Vault token, like they exec into a pod and grab the Vault token from the service account, for example, it would expire within the hour. Got you. Yeah, so it's all about ephemeral secrets, which is where the immutability tension starts. Yeah, exactly. And in GitOps everything is about immutability, right? So this is kind of where the contention is. I've talked to a lot of people about this, one of them Christian Posta, I don't know if you know him, from Solo.io; we were talking and it's really the point of demarcation, right? And there's also a point about security practices as well. There is that contention with GitOps, and I always say it's a point of demarcation: you're not storing pod definitions in a Git repo, you're storing a Deployment, right? You're not actually storing every single little piece. As long as you're storing some reference that gives you the functionality that you want, I think you're on the right path, at least heading toward that nirvana of GitOps. The way I think about it myself is that the idea behind GitOps was to modularize certain pieces of automation in your infrastructure system into different controllers. The idea is that you want to separate concerns: for something like a secret, which is sensitive information, should you be managing it with the same controllers that you're using for deploying an application? Maybe the answer is yes, but most likely the answer is no. It's managing a different kind of lifecycle for that particular piece of infrastructure; secrets are effectively a different part of the infrastructure, in which case, from a separation-of-concerns view, it's not really breaking GitOps, it's perhaps just a different controller. Yeah, exactly, you're delegating that, right? I have the same conversation because at Red Hat we're all about Operators, and Operators have a lot of automation in them. And really, as long as your cluster is functionally the same as it was before, say in a disaster recovery, does it really matter if it's a hundred percent inside your Git repo? Alexis from Weaveworks actually said that the idea is for the cluster to be functionally the same, not necessarily exactly the same. So as long as the database is back up and running, why do you care if the password is the same or not? Right, that's what I say: do you care that the password is the same, or do you care that the cluster is up and running? I think it's the latter, right? Yeah, exactly. And it's all about perspective: it should be invisible, and then when it breaks it becomes visible and everybody gets a little concerned. Right. So what I did behind the scenes, just because it takes a little bit of time to create, is I created a MySQL database on Kubernetes. The reason I wanted to show this is so you at least get the example of a static secret: this is something that no one's rotating. I mean, I guess I could just keep it in Vault and never rotate it, and you could do that, that's static secrets; not every secret has to be rotated. As you pointed out, Christian, you
know, you really do care whether it is or isn't, but from a security perspective you'll eventually have to rotate it. In this case, what I've done is create the expense database, and it's retrieved the database root password from Vault. Let me see if I can show this a little more easily: you'll notice two pods. One pod is the database, and the other is the Vault agent. Let me get the logs from the agent. The agent injector created this lovely little snippet and patched it in, and what it does is create a sidecar container for Vault. This Vault agent just runs and renders out the database password. What Vault agent will do, and this is a bit of the workflow I do want to show (I don't know if we'll be able to), is: the sidecar requests the database password from Vault, and Vault goes to the database and says, hey database, can you create me a new username and password, then comes back and says, here you go. That's the more complicated flow, and I'll show it next. In this case it's really just the database coming up and saying, hi Vault, I need a static password so I can start myself up, and then Vault agent writes it to a file, and you can use that file for the application, in this case MySQL. It's a lot of back-and-forth workflow, but it's easier to see in the application itself than in the database. All right, as I'm reading this question, we actually already answered it: you can't really save ConfigMaps, though there may be plugins upstream that do that. But my question is, you talk a lot about a sidecar; is this kind of like a service mesh, where there's a sidecar in every pod, or only on the ones that need it? You can configure this. As part of the Vault Helm chart, you can configure the agent injector to inject by default, no matter what. By default, however, it will only do it if you have an annotation. So yes, you're following, excellent: the important annotation to include is the agent-inject annotation set to true, and that will inject the sidecar. By default it won't do it unless you add this annotation, and it works across namespaces as well. The other important set of annotations, and thank you for reminding me to show them, has to do with Vault config. Remember, we created that role that said anyone who is expense-db-mysql can connect; well, this is the role, expense-db-mysql. You also need to set the service account name to match the one that you bound in Vault's authentication method. Remember, previously we said: if your service account is expense-db-mysql and your namespace is expenses, you can authenticate from Kubernetes to Vault and retrieve the static database password. You also need to specify a couple of annotations about where you're getting the secret; in this case I'm getting it from the path expense/static/mysql, and this is where Vault agent and configuration mix a bit. Someone asked whether this replaces OpenShift ConfigMaps and secrets; this in itself is kind of like configuration, right? What I've done before is render a bunch of different static configs, using the Vault agent template to build them, inject them as a file, and pass them to the application. In this case all I'm doing is exporting the MySQL root password, using some templating commands to retrieve it from Vault. So while this is a little confusing, the idea is that you can template configuration, whether it be for an application or for some other target that needs to use this password, and
then you can use that to capture whatever config you need. I'm glad you showed these annotations, because it makes me think of two things. One is that since this is the Deployment, you're actually storing this in Git, and that's the important part to store in Git: how this application retrieves the secret is part of GitOps. As long as Vault is configured correctly, and the access is configured correctly, when I apply this it'll have access to that secret. And you could technically, instead of exporting this, have written it to a file, because this is just a Go template, and that's effectively your ConfigMap settings, even though it's not directly a ConfigMap; you can write out a configuration if you want. This is actually really cool. I will show you, Christian, you're exactly right, because that's the example I have for the application side. This export is because it's a database, so it can't really set the secret itself; the workaround, not a hack, is to pass some args and source this file. This is actually the file it's writing to: it writes a file that says export, the container sources it, and then runs the entrypoint for MySQL. What you mentioned is actually what we're going to show, though we won't deploy it for the sake of time: similarly, you could write out any kind of configuration you need, like a ConfigMap. You'll notice this is a Spring application properties file: it's got Zipkin enabled, a data source that connects to expense-db-mysql, the port; and the one thing I've done is render out the database username and password for the application to use to connect to the MySQL database. It writes it out to application properties; any kind of config file you want, you can write to, and the application can pick up the information it needs from that configuration file. We've got a question here, this is actually pretty cool, and it was a question I had too: when passwords are updated or rotated, when your secrets are rotated, can the deployment be triggered automatically? And if not, can we do it? I wish. It does not get triggered automatically; that's one of the nuances of dynamic secrets, and it doesn't matter whether you're on Kubernetes or another platform. Your application has to support some kind of hot reload: once the file is written out, it has to detect that the file has changed and reload itself, which some frameworks will do and some won't. Unfortunately, there's not really a way to trigger the application to restart itself after the secret has been rotated and rewritten. Vault agent, the sidecar, will detect changes to the secret: if Vault says you have to rotate now, Vault agent will detect that, retrieve the new value, and automatically render a new config file for you. But Vault agent isn't able to poke the application and say, hey, you need to restart, I have a new config file for you. You could build that, it requires a couple of other pieces of automation, but out of the box, with the agent injector approach, it will not reload the application for you. Gotcha. It'd be nice to have something like a sync hook: hey, run this job after you rotate the secrets, and you can hit your endpoint or whatever to reload the application. Someone asked, and I'd like to answer it: is Vault better, worse, or equivalent to built-in OCP secrets? Well, since OpenShift doesn't have a secrets management system, I would say Vault is better, just because of the fact that OpenShift doesn't have that. You could encrypt your secrets at rest, or, let me rephrase that: you
could encrypt etcd at rest with OpenShift, but a management system OpenShift doesn't have, which is why something like Vault is important, especially if you don't want to do all this stuff manually. The biggest thing, when people ask me about GitOps, I always say: crawl, walk, run. The immediate answer is Sealed Secrets, because the point of entry is so low, it's not so steep. But you'll find as you grow that it's really hard to manage; it doesn't scale well, and when you're at a certain size you'll need a management system. Luckily, Red Hat has partners like HashiCorp, so you can actually get support on both ends: support for your container platform from Red Hat, and your secrets management support from HashiCorp. If you're running a large enterprise, having that end to end is really important. So hopefully that answers the question; whatever works for you. And that crawl, walk, run applies to Vault too: if you haven't noticed, Vault's workflow is quite complex. It's not great when you're first starting and you just need to get a secret and get it running; you have to take the time to go in and secure it. It's worth it, but it does have some complexities you have to figure out. All right, for the sake of time I'm going to expedite the next configuration a bit. We configured a static database root password; now I'm going to configure dynamic credentials. Dynamic credentials work differently from storing something in key-value. Key-value is: here's a key, here's a value, and you, the operator or developer, are the one who figures out rotation. With certain secrets engines, the engine will handle the rotation for you. What I mentioned before was that you could rotate any token you wanted, if it's supported by a plugin: these plugins are built for the purpose of revoking and creating new kinds of credentials, and they have a lot of logic in them, because you don't want to just rotate something and have things break. So what I've done, and you'll see a very similar workflow: I enabled a database secrets engine at the expense/database/mysql path. The database secrets engine has a couple of different configs you'll need. You have to tell it which plugin you're going to use; yes, there are multiple plugins, but they're all bundled under the database type, so there's a plugin for MySQL, PostgreSQL, and lots of other databases, but you do have to specify the name. Most of them ask you for a connection URL: Vault needs a way to access the database, because it uses a database creation statement to create the user, so it needs a connection. You also need a username and password with the ability to create, renew, and revoke users, depending on the kind of workflow you want, so this must be an administrative username and password. Vault can rotate this for you, fun fact; we will not do that today, but if you pass it in and tell Vault to rotate the root credential, it can handle that internal rotation. There are also the roles: I'm only allowing the expense role, so this is tied to any secrets related to the expense application that I'll deploy. And you'll notice, once again, here we are: the root username and the root password. I think it's kind of cool that it can revoke itself, that it rotates its own access. It's pretty fun, it's unexpected; it's a very nice workflow, because you can just say, I'll rotate my root password because I accidentally printed it out in the connection string. That's pretty cool, it's almost hands-off management. The other important piece to take a look at, if you can scroll up in the terminal, actually, that would help; sorry, not in the code, this terminal here.
There we are. So you'll notice there's something called a creation statement; it's kind of truncated by the terminal, but the creation statement is what Vault uses to create a new user. You'll configure something called a role, and that role tells the database whether it should create the user, revoke the user, renew credentials, everything there. The creation statement is a SQL statement, as you can recognize, so just be sure to get the right SQL statement. Once again, we set our time-to-live for the database username and password to one hour, with a maximum of 24 hours, so it gives a little leniency. Then we write, once again, a policy; the policy restricts the expense credentials to only the expense path, so I can only read from this path, which contains the database username and password, not the root. This is the dynamic one, so Vault will issue it for me. And then, I'll paste this so you can see the command, once again we bind the service account. This says only the expense service account in the expenses namespace can access whatever is defined in this policy. Remember, the policy was read on expense/database/mysql/creds/expense; that's the only one they can access. They can't read the root password that we stored in key-value. This means the expense application can only access one secret, and that's the username and password for the application to access the database, and it's a dynamic secret. It's not, how do I put this, it's not the root database username or password. Vault actually goes to the MySQL database and says, hey, can you give me a new username and password, and what you'll see is that it comes out with a unique username and password. I could run it again and get a different username and password. So what Vault is doing is saying: okay, anytime you read from this API endpoint, I'll issue you a new username and password, I'll lease it to you for a certain period of time, which is an hour; you can opt to renew it, but if you don't, I'll just revoke those credentials from the database and you can't use them to log in anymore. That's pretty cool. So essentially, it's like: I need access to this database, so as an operator I can make a policy for anyone who wants to access this database, say for read, if you want to run a report or something, but you only have access for an hour, and then it's revoked for you. Yep, exactly. And if you still need access, you have to reauthenticate and renew your access. So it's a very useful way to limit the time-to-live, the lifecycle, of a given secret. And why this is important to understand: each pod in Kubernetes gets its own username and password, so if a pod restarts, stops, or gets redeployed, Vault issues a new username and password. If you log into one pod, it's not the same database username and password that you'll find in another pod. So if you have 10 instances, it'll be 10 different usernames and passwords. Exactly. All right, I know we're coming up on a very long stretch, so I'll move on. You can deploy this with Vault Agent, which is what I originally showed: the agent sidecar was injected as a patch to the deployment, and you saw Vault Agent running as a sidecar. There's a second way that I mentioned, the CSI driver. It's a project that uses the container storage interface, but it's an interface specific to secrets, to storing secrets. The main difference between this and Vault Agent is that the CSI driver does not inject an agent, which means you don't need hundreds of agents running everywhere. Instead, all you have to do is deploy a CRD as well as two sets of controllers: one is the Secrets Store CSI driver
which is what I'm updating right now and then the as well as the Vault integration the Vault driver so the Vault provider for it so there's two pieces to this which I'll show I'll get pods then Vault so the big difference if this will come up today one of the problems with this is that you when you run this on OpenShift CSI is not recommended you can but it's not recommended and so the result is you actually have to add additional privilege policies and that's always in for those who are watching anytime you see the word privileged in your SEC that's like a big warning make sure you know what you're doing there's a difference between a pod running as privileged and like you should avoid privileged unless you absolutely have to right so like things for like storage you kind of have to right because you're like actually accessing the underlying hardware of the of the node that's currently be running on so yeah and that's where it'll trip you up you know you don't really want to it's not ideal we can all acknowledge that it's not ideal but yeah when you run it as privileged nothing will get created until you give that access which I might actually have to uninstall this unfortunately in order for it to run so if anybody has any secrets as to how to get this to maybe I should just remove all of them yeah just delete the Damon set yeah just recreate rerun the helm so it's a little bit faster because otherwise it will sit here and wait until it has the opportunity okay so let me just get a side deploy so deploy the driver and then you have to deploy the vault provider both have to happen both have to have elevated access your application even needs to have access as well but I'll show you why the CSI is a little bit different while this deploys if this doesn't work we'll be able to show you the manifest so the CSI driver does not inject an agent and the reason it doesn't is because it's using a CRD instead and the CRD that you need to deploy and have available is the 
SecretProviderClass. What the SecretProviderClass does is use the Secrets Store driver as well as the Vault provider: it reaches out to Vault and retrieves specific secrets. In this case, like I set in the annotation, it goes out and gets the expense database MySQL creds, the ones that are dynamically issued by Vault, the funky username and the funky password. So it's a similar kind of configuration: I tell the SecretProviderClass to retrieve the secret from the API path and then put it into a Kubernetes Secret. And this is where it gets a little bit... the Secret object itself is kind of a decision you want to make, because it will put the values into Kubernetes Secrets, which someone can access.

And it's still not secret, right? A Secret in Kubernetes just describes what it is, not what it does; it isn't hiding anything from you. So if you have something sealed in Vault but you're injecting it into a Secret, well, then that's effectively clear text.

Exactly, you just have to keep that in mind when you're using the CSI driver. (I think you're on mute. Is that just me, am I muted? Sorry about that.) If you're running a lot of applications and you don't want sidecars, then this could work. You do have to have elevated access for the application, elevated access for the provider and the driver, and it does write the values as a Kubernetes Secret. So let's hope this works.

And it may not be a big deal; it just depends on the environment, because sometimes production is so separated from development that only a few people can log into it anyway. So it might work for you in that case, when not a lot of people are accessing the cluster.

Yeah, exactly. All right, so, sorry: you'll notice that there's a new Secret that came up, and it's opaque. I'll just show you. There's a new Secret because the SecretProviderClass is synchronizing the information from Vault into a Kubernetes Secret.
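A SecretProviderClass along the lines described might look like the following. The names and Vault paths here (`expense-db`, `database/creds/expense-read`, the role `expense`, the Vault address) are stand-ins for whatever the demo actually uses, and the `secretObjects` section is the part that mirrors the retrieved values into a regular Kubernetes Secret, with exactly the clear-text caveat just discussed:

```shell
# A hypothetical SecretProviderClass; adjust names, role, and paths to
# match your Vault setup before applying.
kubectl apply -f - <<EOF
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: expense-db
spec:
  provider: vault
  parameters:
    vaultAddress: "https://vault.example.com:8200"
    roleName: "expense"
    objects: |
      - objectName: "username"
        secretPath: "database/creds/expense-read"
        secretKey: "username"
      - objectName: "password"
        secretPath: "database/creds/expense-read"
        secretKey: "password"
  # This section syncs the values into a plain Kubernetes Secret, which
  # anyone with read access to Secrets in the namespace can decode.
  secretObjects:
    - secretName: expense-db
      type: Opaque
      data:
        - objectName: username
          key: username
        - objectName: password
          key: password
EOF
```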
Kubernetes Secret objects like this one will show you the username and password if you bring up the terminal a bit. But the idea behind this is that if your application just needs to read the secrets, you pretty much have that ability out of the box. If you go check out this example, you'll notice that I can mount the SecretProviderClass as a volume, and that also allows me to set the values as environment variables. So you can still use the same pattern, the same approach you would expect from Kubernetes Secrets, just with some additional custom resources.

So the question is: if I have three different applications in the same namespace and I want to insert different secrets for each app, do we need separate service accounts in order to authenticate?

Ah, that's a great question. No, you don't. You can use the same service account as long as you configure, and I'll show you the line, as long as you configure your spec to include the service account name that you need. You only need to create the one service account to be able to access Vault. Now, if you have three different secrets, one for each application, then from a risk-mitigation view it's probably better to create a service account for each secret and bind a policy specific to that. Say you have expense, a report, and a database, and they all use different kinds of secrets: it's better to bind different policies to different service accounts and have each application use its own. There's nothing stopping you from using one service account for all three applications, you just write the policy to allow read access to the secrets you need, but keep in mind that this would allow expense, for example, to access the root password if you bound all three policies to the one service account.

Yeah, there's always risk mitigation, right? From a technical standpoint you can probably do a lot of things, but you need to do that risk analysis.
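That per-application separation can be expressed on the Vault side roughly as follows, using Vault's Kubernetes auth method. The policy, role, service account, and secret path names (`expense-db`, `expense`, `database/creds/expense-read`) are hypothetical:

```shell
# A sketch of binding a narrowly-scoped policy to one service account.

# A policy that only grants read on the expense database credentials,
# so it cannot reach, say, a root password stored elsewhere.
vault policy write expense-db - <<EOF
path "database/creds/expense-read" {
  capabilities = ["read"]
}
EOF

# Bind that policy to a single service account in a single namespace;
# the report application's service account gets its own role and policy.
vault write auth/kubernetes/role/expense \
    bound_service_account_names=expense \
    bound_service_account_namespaces=default \
    policies=expense-db \
    ttl=1h
```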
Risk analysis, right, and have your security team sign off on it as well; it's always best to err on the side of caution.

Exactly. Well, that is all I had to show. The example is in the repository, and I'll build on it for the next part, but hopefully this gives a sense of an introduction to Vault: a couple of ways you can use it on Kubernetes, the different patterns that are offered, and some of the caveats and tradeoffs of both. It is a little complicated when you first start out, but once you set it up and get the pattern together, it's an easier way to manage and issue the credentials your application needs.

Yeah. And actually, Florian, I think this is a great question, because it segues into the next episode. This is a multi-part series, and this was the first part. When I first asked Rosemary to come on, she said, well, I'm actually going to need multiple sessions, because it's kind of a complex topic. So we decided to do a baseline Vault introduction for this episode, and she's coming back next time. Part two, stay tuned: we'll talk about specific implementations with GitOps, and specifically with Argo CD. So hold on to your question, Florian; not next week, but the week after, we'll be talking specifically about Argo CD and Vault. That's going to be cool.

Thank you, everyone, for staying on. This one ran a little long, but I told Rosemary, hey, we don't have a hard stop, so take as long as you need to at least cover the baseline information. I know a lot of people are excited about having HashiCorp on talking about this with GitOps.

Yeah, you can find me on Twitter; you'll see it down here somewhere.

Please like and subscribe and all those cool things. So, everyone,
take care, and remember, as I always say: if it's not in Git, it's just a rumor. Thank you, everyone, for joining!