All right, everyone, we're going to go ahead and get started. I'd like to thank everyone for joining us today. Welcome to the CNCF's webinar, Securing Service Mesh with Kubernetes, Consul, and Vault. I'm with Google and a CNCF ambassador, and I'll be moderating today's webinar. We'd like to welcome our presenters today, Nicole Hubbard, developer advocate at HashiCorp, and Justin Wisseg, technical product marketing manager at HashiCorp. Before we get started, there are a few housekeeping items. During the webinar, you are not able to talk as an attendee, but there is a Q&A box at the bottom of your screen. Please feel free to drop questions in there, and we will get to as many as we can in the time that we have. This is an official webinar of the CNCF, and as such, it is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct; basically, please be respectful of all of your fellow participants and the presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. And with that, I'll hand it over to Nicole and Justin to kick off today's presentation.

Awesome, thank you for that introduction. Let's go ahead and start sharing. So today we're going to be talking about securing your Kubernetes applications using Consul and Vault. Some of the questions I get asked most often when I'm talking to customers are: what's a service mesh, and why should I use a service mesh? So let's start there. To really understand that, let's look at how we've evolved to the state we're in today. When we started off years ago, we deployed monoliths to web servers, with firewalls that limited traffic to the databases.
All of your traffic coming in would flow through a simple load balancer that would then hit all your web servers. This made securing your applications a lot simpler, because you only had a handful of servers, and they ran your single application and nothing else. So we had monoliths. As we've moved into microservices and Kubernetes, we now have a new look for our data center. We might still have some of our old applications running in that same setup, with web servers and a database server. But now we've potentially got additional data centers or cloud providers where we're running Kubernetes, and those workloads need database access or key-value stores, as well as the ability to talk to the old applications. How we secure those has to look different from what we used to use for securing web applications running on virtual machines or physical hosts. As we've started to move into Kubernetes, there are a few challenges you have to solve: service discovery, service segmentation, and service configuration. Kubernetes helps a lot with this. Service discovery is solved with the services you define in Kubernetes, which provide discovery through DNS. You can use network policies for service segmentation within your cluster to limit which applications can communicate, and config maps and secrets within Kubernetes for your configuration. But those are limited to your single Kubernetes cluster. So what happens as you need to scale, or you need service discovery between services running outside of your Kubernetes cluster? That's where a service mesh can come in. So let's take a look at Consul service mesh and how it can help solve some of these problems. Before we dive too deep, though, let's take a look at how the service mesh works.
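For reference, the Kubernetes-native segmentation just mentioned looks something like this. This is a minimal sketch, assuming hypothetical `app: web` and `app: db` labels and a Postgres port:

```yaml
# Hypothetical NetworkPolicy: only pods labeled app=web may reach
# pods labeled app=db on the Postgres port -- and only within this
# one cluster, which is the limitation a service mesh addresses.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-web
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web
      ports:
        - protocol: TCP
          port: 5432
```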
One of the core components is the certificate authority. By default, Consul can run its own certificate authority, or you can integrate it with Vault so that Vault acts as your certificate authority and all of your certificates come from your Vault instance. What happens here is that when your Consul servers start, every single client or application running in your service mesh has to have a certificate to communicate with other services. These are standard X.509 certificates, the same kind you use for securing your web traffic. They provide TLS for encryption, and since they all come from a single certificate authority, we can also use them to provide identity information, so we know that service A is service A and service B is service B based on the information in the certificate, because we trust the CA. We're also able to automatically generate and rotate these certificates for our clients, using sidecars and other methods to get them in place; we'll go into that a little later. The next piece is that there are two components to the Consul service mesh. You have the control plane, which is Consul: here we've got Consul servers running, and we've got a Consul client running on each of the two nodes we're showing in this cluster. And then you have your data plane. The data plane will be represented by Envoy for the demo today; Envoy acts as a sidecar proxy for us and is responsible for actually routing the traffic between our services. It talks to the control plane to get its information, but it caches that information and stores it locally, so the control plane is not in the path for your communication once your connections are established. Another piece is the service access graph. This is all about intentions and which services can communicate with one another.
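As a quick preview, intentions can be managed from the Consul CLI. A sketch, assuming services named `web` and `db` (these commands need a running Consul cluster):

```shell
# Deny all of web's outbound connections by default,
# then explicitly allow web -> db.
consul intention create -deny web '*'
consul intention create -allow web db

# Check whether a web -> db connection would be authorized.
consul intention check web db
```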
It's very similar to network policies, where you create policies that say pods with these labels can talk to services with those labels, so you can limit communication between pods within Kubernetes. In Consul, you can create an intention that allows or denies traffic based on the service itself rather than a set of labels. So here we're able to say that our web service is not able to communicate with any other services, and then we create a specific rule to allow our web service to communicate with our database. These intentions take effect for those services no matter where they're running. If they're running in your Kubernetes cluster, they're applied; if they're running on virtual machines or in a separate Kubernetes cluster, the intentions follow your applications and services anywhere within your service mesh. You can also manage these through the Consul UI, where you can see all of your intentions in one place if you prefer user interfaces. Now, how do we actually integrate this with our applications? For Kubernetes, we use a sidecar proxy approach. You deploy your application with no real changes needed in the vast majority of cases. The only thing you have to change is the endpoint you communicate with: instead of talking directly to the database, in this case, you talk to a port on localhost, which is the proxy running alongside your web service, and your traffic flows into there. The proxy then routes all of your traffic and eventually gets it to the database securely; we'll walk through what that looks like. To do this, the only thing you have to add is an annotation. Once you add the consul.hashicorp.com/connect-inject annotation, the sidecar proxy is injected for you.
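In YAML form, that annotation sits in the pod template metadata. A sketch against a hypothetical web deployment (names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        # This one annotation triggers the mutating webhook that
        # injects the Envoy sidecar and wires it to Consul.
        "consul.hashicorp.com/connect-inject": "true"
    spec:
      containers:
        - name: web
          image: example/web:1.0  # placeholder image
```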
There's a mutating webhook deployed with the Consul Helm chart that adds everything needed to wire this up. So how does this traffic look? Here we've got our web server running with its proxy, and on that same node there's a Consul client; the Consul clients are deployed into the Kubernetes cluster as a DaemonSet. On our other node, we have our database with its sidecar proxy and the Consul client for that node. The Consul server can be running inside the Kubernetes cluster or outside of it; it doesn't matter for the purpose of getting this working. When the web server wants to communicate with the database, it opens a connection to the proxy instead of the database. The proxy asks the local Consul client, hey, where's the database, using the service discovery component we talked about earlier. The Consul client returns that, and the proxy opens a TLS connection to the proxy for the database. From there, that proxy validates that the web server is allowed to communicate with the database. Since we created that intention earlier, it's authorized, so the TLS connection is established. In the case where the connection isn't authorized, it's immediately terminated and reset. So once you deploy this, what benefits do you get beyond the built-in Kubernetes components, services, config maps, and network policies? One of those is mesh gateways. When you're deploying in a single Kubernetes cluster, some of these won't be as beneficial for you; you can leverage services, you can leverage network policies.
But as you start to go into multiple Kubernetes clusters, or you still have VMs running applications, or you're on multiple cloud providers, any scenario where your applications run in more than a single Kubernetes cluster, that's where these things become very valuable. Another thing the mesh gateway can help you with is when you've got two Kubernetes clusters, or sets of VMs, or any network segments with the same overlapping IP space. In this case, on the left, we've got data center one running our API and web services using the 10.8.1.1 IP address, and we've got our database running in a completely separate data center using 10.8.1.2. Connecting these networks using VPC peering or VPN gets very complicated, because you have to do special address translation so the services in one subnet can actually reach the IPs in the other subnet. You have to map IPs, and it gets really complicated and hard to solve. Instead, you can deploy a Connect mesh gateway. These sit at the edge of your data center and route the traffic over the public internet or over a private connection, however you want. That connection is secured, and all of the traffic gets NATed as it leaves your mesh gateway and enters the next one, so those services can communicate without you having to re-IP your networks or deal with the overlapping IPs; you don't even have to think about them in this case. These mesh gateways are built using Envoy. They sit at the edge, route all of that traffic, and use those same certificates we talked about earlier, so you get TLS encryption and identity between the mesh gateways even though that traffic potentially crosses the open internet.
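For reference, a mesh gateway is itself launched as an Envoy instance through the Consul CLI. A sketch, where the WAN address is a placeholder for the gateway's publicly reachable endpoint:

```shell
# Run Envoy as a mesh gateway at the edge of this data center.
# 203.0.113.10 is a placeholder public address; traffic between
# gateways rides the mesh's mutual-TLS certificates.
consul connect envoy -mesh-gateway -register \
  -address ':8443' \
  -wan-address '203.0.113.10:8443'
```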
Another advantage you can get is layer 7 traffic management. By default, when traffic comes into your Kubernetes cluster, or even between services, a request to web.service.consul gets forwarded to one of your three instances of the web service. With layer 7 traffic management, you can now add HTTP routing, traffic splitting, or custom resolution. So we can do things like this: a request comes in to web.service.consul, but it ends with /api. We're able to change that routing and say that anything for /api, instead of going to our web service, gets routed to our API service. For traffic splitting, we can define subsets, which are different versions or groupings of our applications, and then split so that 80% of our traffic goes to version one of our API and 20% goes to version two. And with custom resolution, we define how to figure out what that subset, v2 or canary, actually is. We define a selector, kind of like Kubernetes labels, based on the metadata of your Consul service; in this case, metadata version equals two is how we define version two of our API. So in addition to all of these layer 7 and networking components, what about secrets? How do we manage those? Kubernetes has built-in secrets, but what else can we do, and how can we automate and store the certificates we need to generate, all in a single place? That's where Vault comes into play. Vault focuses on secrets management and data encryption, as well as controlling access to all of it. One of the main things Vault focuses on is identity brokering.
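The three pieces just described map to three Consul configuration entries, each normally written to its own file and applied with `consul config write`. A sketch using the example's service names (combined into one listing here only for illustration):

```hcl
# service-router: send /api requests from web to the api service.
Kind = "service-router"
Name = "web"
Routes = [
  {
    Match       = { HTTP = { PathPrefix = "/api" } }
    Destination = { Service = "api" }
  },
]

# service-splitter (separate file): 80/20 split across api subsets.
Kind   = "service-splitter"
Name   = "api"
Splits = [
  { Weight = 80, ServiceSubset = "v1" },
  { Weight = 20, ServiceSubset = "v2" },
]

# service-resolver (separate file): define subsets by service metadata.
Kind = "service-resolver"
Name = "api"
Subsets = {
  v1 = { Filter = "Service.Meta.version == 1" }
  v2 = { Filter = "Service.Meta.version == 2" }
}
```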
It identifies the client talking to the Vault server and controls what access that client is allowed and what secrets they can get to. We can integrate with a lot of different services for how we actually do that identity validation. Vault also provides a single control plane for all of your cloud security, with numerous integrations for providers and tools you're already using. You can see some examples here: Kubernetes, OpenStack, databases, the cloud providers you're using, or Okta if you have SSO. So how does Vault work? You've got clients, you've got secrets, and there's the authentication piece. Let's walk through this real quick. As a client, I make a request to the Vault API, and then I have to authenticate. That gets passed to a backend: it could be that I passed credentials, or that Vault redirects me to Okta or Active Directory or whatever you're using for identity, and I authenticate there. There are also integrations for Kubernetes, so service account tokens within Kubernetes can authenticate against the Vault API, and the identity from Kubernetes comes along with them. Then there's the secrets piece: now that I've authenticated, I want to get secrets. These can range from one-time passwords, to key-value secrets where we're just storing that the password is some value, all the way out to integrations with, for example, AWS, where you can request AWS credentials and they'll be created specifically for you with the permissions you have and returned to you. You can put timeouts on those so they're only good for, say, six hours, or a work day, or whatever period of time you decide.
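The AWS example might look like this from the Vault CLI. A sketch, where the `deploy` role name and the IAM policy file are assumptions:

```shell
# One-time setup: enable the AWS secrets engine and define a role
# whose attached IAM policy scopes what the credentials may do.
vault secrets enable aws
vault write aws/roles/deploy \
    credential_type=iam_user \
    policy_document=@deploy-policy.json

# Each read mints fresh credentials just for the caller; Vault
# revokes them when the lease expires, so they rotate automatically.
vault read aws/creds/deploy
```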
That way you're able to ensure credentials are being rotated properly. So let's take a look at what all of this actually looks like in practice. I'll hand it off to Justin now.

Hey, awesome, thanks Nicole. So I'm just sharing my screen here. For the demos today I want to go hands-on, so we'll jump over to the command line and walk through what this looks like in real life. To give you a high level of what we're going to look at, I wanted to chat about the slide here on the sidecar proxy. We're going to set up a web app that's basically just a web server, and then a database server; all it's going to do is increment a counter. For the demos today I thought we'd run this in Kubernetes, because I wanted to show you how easy it is to get up and running with Vault and Consul. So we're going to have a web app, and it's going to connect over to the database. Behind the scenes, we're going to wire it up so that Consul is intercepting the traffic with the proxies. Then we can set these intentions that say, hey, this web app is only allowed to talk to this database, or, what happens if we interrupt that traffic and say this web app can't talk to that database? We'll cover some of the behind-the-scenes kind of cool stuff you can do. Another thing we're going to demonstrate is injecting a secret from Vault into the web app. As Nicole mentioned, Vault is a service that sits on your network and acts as a central place where you can store all your secrets. Vault goes way beyond just secrets: we have tons of integrations with all the cloud providers, OpenLDAP, all sorts of databases. Take MySQL: if you wanted to do credential rotation, you can have Vault connect to the database and rotate a credential.
So Vault's sort of a Swiss Army knife for secrets. But I'm just going to show you a very brief summary of it today. I wanted to give you a high-level architecture overview of what we're going to build out, and then we'll go and build it piece by piece. Today I'm going to demo this on GKE, so we have a Kubernetes cluster here, and we're going to go through and install everything. I'm just going to flip over to the console so you can see this, and then kubectl: you can see I have my three-node cluster. Full disclosure, I'm just going to copy and paste the commands over because it goes quicker and you don't have to watch me type. In the top panel here, I have my kube cluster, and in the bottom panel I have a while loop going that watches what pods are running in this cluster. This will give you a view of how things are changing as I go through the demo. First thing I'm going to do is change into my namespace; I might have already done this. Yeah, I did. Perfect. Cool. So one cool thing we did: we have two Helm charts, one for Vault and one for Consul. What you had to do before was clone the repo and run Helm against your local file system. But we had great feedback from the community saying, hey, why aren't you integrating with the official Helm registry? So we've gone ahead and done that, and I'll show you where you can find more information. If we just add the repo here, and then search the repo, you should see both our Helm charts, one for Consul and one for Vault, and then you can just helm install the chart you want. For the Consul piece here, I wanted to show you where we're running the sidecar injection aspect. So what I have here is my app back end; this is the database counter.
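The repo steps Justin runs here look roughly like this with Helm 3:

```shell
# Add HashiCorp's official chart repository, then list its charts.
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm search repo hashicorp
```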
The front end is the app that's going to connect over to the database. I have a patch here; I'm going to show you something cool you can do with Vault, where you can inject secrets without the app knowing anything about Vault. And then we have our Consul Helm config file. Let's take a look at it. The piece I wanted to highlight is this Consul Connect piece. This is what lets us wire up how our web app can chat over to our database, and how we can interrupt that connection if we want to. Actually, let me just cancel this here. So we're doing helm install, we've got the config file, and we're using our official registry, which should greatly simplify things if you want to get fired up with this. We'll get that going, and down in the bottom panel you should start to see, hey, Consul's firing up. While that's going, I'm going to get Vault installed too. So I'm saying helm install vault, and I'm setting an additional flag that says, hey, I want to run the server in dev mode. I should probably mention that if you're not familiar with Consul or Vault, they're open source, and there are also enterprise offerings. Enterprise will obviously get you support, and there are a lot of added benefits to the enterprise features, but the open source offerings are very feature-complete, and there are tons of people using them. Everything I'm going to show you today is available in the open source versions. Dev mode basically sets up a quick server without a lot of the additional security checks you'd run in production, so if you just want to kick the tires with Vault, that's why I'm enabling it. Now you can see Vault's up and running, and we have this sidecar injector piece. This is specific to Kubernetes.
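Put together, the two installs are along these lines. The values filename is a placeholder for the Consul config file shown on screen:

```shell
# Consul with Connect injection enabled via a local values file,
# then Vault in dev mode (relaxed security -- demos only).
helm install consul hashicorp/consul -f helm-consul-values.yaml
helm install vault hashicorp/vault --set "server.dev.enabled=true"
```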
What we found in chatting with both open source and enterprise users is they say, hey, I have a Vault cluster, and we've centralized all our secrets in it so we get auditing, logging, credential rotation, all this cool workflow stuff. But we don't necessarily want to modify our apps to take advantage of Vault; is there some way you could inject a secret right into the file system? That's what this injector piece does, and we're using the mutating admission webhook component of Kubernetes to do it. I'll highlight that in a second. You might be thinking, hey, why aren't you just using Kubernetes secrets? That's a really good question, and the reality is, for a simple web app like this, you probably could be, or maybe should be; there's overhead to using Vault. Vault is really meant for many apps, so if you just have one web app, it depends on the use case. If you're just injecting a simple secret, you could use the Kubernetes piece. But Vault really shines when you have additional use cases or you're running lots of different apps. Vault can do encryption and all this advanced dynamic secret rotation stuff, so once you get into those more advanced use cases and you see secret sprawl spreading, that's where Vault really shines, and where it gets very cumbersome to rely on just static secrets. Also, one cool thing I'll show you with Vault is policy: you can say, hey, this particular pod is only allowed to grab these particular secrets. Once you get into complex use cases like that, or anywhere you want more granular access control, that's where Vault is really cool. All right, so let's log into Vault.
What I'm going to do is set up Vault to do the sidecar injection. I'd rather show you what's happening under the hood so it doesn't seem like magic, especially if you want to get going on your own. Vault has this concept of secrets, and then policies that are allowed to access those secrets. So I'm setting up a super generic policy that says, hey, I'm allowing access to any secrets with the read capability. Vault operates on the idea of paths, where you say secrets/app-name/whatever-your-secret-is; it's totally up to you to define the path structure you want, but I'm just showing you this. All right, let's add this policy; I'm writing it under the name app, and I'll show you it in a sec. Then I want to enable Vault to communicate with Kubernetes. Right now Vault is running on Kubernetes, but it doesn't have a way to wire into Kubernetes to say, hey, this particular pod is asking for a secret, are they allowed to, what's the policy? You don't want a pod to just fire up and be able to pull down all your secrets, right? So we're going to enable the Kubernetes plugin and wire up authentication between Vault and Kubernetes. And finally, maybe I'll just walk through this: I'm setting up that my web app, app-frontend, gets access to this particular policy. So when this app fires up, it can only access that policy, where it can download those particular secrets. This gives you super fine-grained control over which apps can access which secrets, and you can go back through Vault logging and auditing to say, hey, one app is using one secret, how often are they doing it, what's the rotation schedule?
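The wiring just described might look like this end to end. A sketch, where the service account name and namespace are assumptions:

```shell
# A generic read-only policy named "app".
vault policy write app - <<'EOF'
path "secret/*" {
  capabilities = ["read"]
}
EOF

# Let Vault validate Kubernetes service account tokens.
vault auth enable kubernetes
vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_SERVICE_HOST:443"

# Bind the web app's service account to the "app" policy.
vault write auth/kubernetes/role/app \
    bound_service_account_names=app-frontend \
    bound_service_account_namespaces=demo \
    policies=app \
    ttl=1h
```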
All that kind of cool stuff, more advanced features you're going to need once your apps take off. So we've got all the wiring set up behind the scenes; we just need to configure a secret. I have a little secret here called hello world, with the username CNCF and the password CNCF demo. The goal here is we're going to fire up a web app, and I want to inject this secret into the web app without it knowing anything about Vault. So that piece is done. Now let's take a look at our web app. You'll notice there's nothing special in here; the pod definition doesn't know anything about Consul or Vault. There is an annotation in here that says, hey, inject the Consul sidecar, but that's it; you don't have to do anything complex. So let me just apply this, and down here you should see our app front end fire up. Once that's up and running, the way Vault works is it'll inject secrets into a particular path in the file system. I should mention that sidecar injection is unique to Kubernetes. Typically, how you interact with Vault is you'd modify your application to include, say, our SDK, or interact with our API, where you're saying, hey Vault, before I connect to this database, what's the username and password? But this way, if you're using, say, a 12-factor app, we can inject the secret, and then you can use something like consul-template to take those secrets and push them into environment variables or something like that. All right, so I'm just going to connect into this app: I'm going to exec into the pod and say, hey, list the contents of this directory. This is obviously going to fail because we haven't injected any secrets yet.
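Creating that secret is a one-liner. A sketch, where the exact key names and quoting are assumptions based on what's spoken in the demo:

```shell
# Store the demo secret under the KV engine mounted at secret/.
vault kv put secret/helloworld username=CNCF password='CNCF demo'

# Confirm it round-trips.
vault kv get secret/helloworld
```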
Yeah, you can see: hey, no such file. But what I wanted to show you is this patch. What this patch does is add some annotations to that pod definition that say, hey, I want to inject a secret, and this is the secret I want to pull down: secret hello world. Behind the scenes, we've wired it up so this app front end is only allowed to download those particular secrets, based on the role of this particular application. This really protects you from, hey, you fire up a pod and it goes and downloads all your secrets; obviously you don't want that, right? And on the Vault side, all of this is audited and logged: hey, my app front end pulled down this secret at this particular time. One cool thing we're doing here is saying, hey, obviously I want to pull down the hello world secret, but I want to render it into a particular string. So I'm giving a Postgres connection string here, and saying, hey, give me the username and give me the password, and output that to the file system. Then my app is going to go pick up this connection string and connect over to our database. All right, so now that we've seen that, let's go ahead and deploy the patch. Once I run this command, we're going to see an additional pod appear, called app front end, and it's going to have an init container from Vault. Vault will inject the secret before your app fires up; this protects you from your app firing up, not seeing the secrets, and crashing or something like that. Obviously you don't want that, right? So your app's going to fire up.
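The patch is just more annotations, this time for the Vault injector. A sketch, where the database host and name inside the template are placeholders:

```yaml
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "app"
        # Pull secret/helloworld and render it through a template
        # into /vault/secrets/helloworld inside the pod.
        vault.hashicorp.com/agent-inject-secret-helloworld: "secret/helloworld"
        vault.hashicorp.com/agent-inject-template-helloworld: |
          {{- with secret "secret/helloworld" -}}
          postgresql://{{ .Data.data.username }}:{{ .Data.data.password }}@postgres:5432/appdb
          {{- end }}
```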
Vault injects the secret before your app starts, and then Vault has a little sidecar container sitting there that periodically checks to make sure the secret's up to date and injects a new one if, say, it changes. This really comes into play for those more advanced use cases I was chatting about, in that Vault supports dynamic secrets: you can have a secret rotate, say, every 30 minutes, and Vault will sync that password or secret data into your container. Cool, so let's patch this and see what happens. You can see our new pod firing up, and the container count is different because we have our Vault sidecar container in there now, right? Our old app is terminating. Then we'll go ahead and cat the secret file: I'm going to exec into the front end container again and cat the contents of the file. And if everything works, which it just did, you can see our Postgres connection string. You can see the username and password rendered properly with our template, and it pushed that to the file system. Now our app can go check that file, pull that in, and use it to connect to a database. One thing I should mention is I want to show you what this looks like in a web browser. I have a couple of tabs here, and I'm going to wire up port forwarding so I can connect to the Consul UI and show you what that looks like, and on the app side, I also want to show you the web app. You'll notice here that I haven't configured the back end yet, so our web app is basically connecting to nothing. But once we fire up our back end, we'll see what that looks like. Now, with the port forwarding going, I'm going to flip over to my web browser. Hopefully you can see that.
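The port forwarding just mentioned is along these lines. The service names and ports are assumptions about how the demo cluster is laid out:

```shell
# Forward local ports to the Consul UI and the demo web app.
kubectl port-forward svc/consul-ui 8500:80 &
kubectl port-forward svc/app-frontend 8080:8080 &
```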
I have these links set up. Cool, so you can see we have our services: we have Consul running, we have our app front end, and we have the Consul sidecar proxy. Everything's looking good so far. I'll just zip through the various tabs. You can see our nodes: we have our three healthy GKE nodes. With Consul, you also have a key-value store, so say you had some particular data that you wanted in your cluster, you can throw that in there. You can set up ACLs. And then we can also set up intentions. Intentions are what I'm going to use to demonstrate: hey, can we block traffic, or can we allow traffic, between our web app and our database back end? Cool, and then let's jump over to this tab; I'm just going to show you the web app. The web app is super simple: it's just a counter. When it connects to the back end, it increments the counter, and that's it. You can see, hey, the counting service is unavailable right now. Once we fire up the back end, this should flip over to green, and we should see an incrementing counter. This is just a WebSocket, so it should come alive as soon as we do that. Let's switch back to the console and fire up that back end. Maybe I'll just show you what the back end looks like. You'll notice there's nothing special in here, right? We're injecting the Consul sidecar, but that's it. In the YAML here, yeah, we do mention the annotations around Consul and Vault, but these apps have no idea about Vault or Consul; it's happening transparently behind the scenes. This is one of the really cool benefits: you don't have to go and refactor or rebuild your apps.
Obviously you can, if you want to include our SDKs, maybe take advantage of advanced features and stuff like that, but yeah, it's pretty cool. All right, so let's go ahead and fire off our back end. You can see the back end coming up; we'll just wait till it's ready. And then I'm gonna flip over to the web browser here and reload this. Awesome, so you can see, hey, it's connected now and it's using our back end. If I hit refresh a bunch of times, you can see the counter goes up, which is awesome. That's pretty much it for my demo. So I think we have a good chunk of time for the Q&A period. What I'm gonna do is stop my sharing, and we also have Cody from our technical marketing group here; we're colleagues and we're happy to answer any questions. Also, we have a slide that shows our resources, basically how you can get going with Consul and Kubernetes: if you wanna go and install the Helm chart, here are the step-by-steps on what you need to do. One hidden gem that I really wanted to share with you is our learn site, learn.hashicorp.com. It's just packed with cool hands-on labs. So, hey, I don't know anything about Consul, how do I just get started? Or, hey, I'm a more advanced user, how do I go deeper? We have a whole bunch of content loaded in there, and you click in and you can see the hands-on lab stuff. The same thing goes for Vault: we have the Helm chart documentation, and the same thing with the Vault learn site, learn.hashicorp.com/vault. I work here and I go into these labs all the time to find stuff out, so it's just an amazing resource whether you're totally new or you use the product all the time.
So I'm going to shut this off and we'll do the Q&A period. Hey, Justin, just a heads up, there was a question that came in that I think you'd be the one to answer, and the question was: what use cases would you use Vault for over public cloud secrets management? I know you've run into this a few times, so if you don't mind answering that one, that's cool. Yeah, 100%. So the main thing here is, if you're totally dedicated to a particular cloud provider, you're heavily embedded with them, and you have simple use cases, I'd say, why wouldn't you use that cloud provider's offering? On the flip side, where it gets tricky is that a lot of companies aren't that simple, especially enterprise companies, right? You have on-prem, maybe you have lots of data centers. What if you're using one cloud and then you want to back up to another cloud just in case, for DR purposes or something like that? That's where, all of a sudden, boom, complexity goes off the charts, in that your developers have a workflow for interacting with secrets on one cloud provider, and now they have to add a bunch of exceptions for working on this other cloud provider, never mind if they have to do it on-prem. The hidden gem of Vault is the seamless workflow across all of this: if you use Vault, you can have a common developer workflow that goes across all these various clouds, in that Vault acts as an abstraction layer, right? You can put all your secrets into Vault, and Vault has plugins that will go and interact with all these various cloud providers. So the benefit isn't only the developer workflow; from a security practice standpoint, I don't have a DevOps team that has to go and secure this particular cloud's secrets manager.
I don't have to go and secure on-prem separately. Otherwise you have secrets littered all throughout source code, config files, developer laptops, all this kind of stuff, right? So, I know it's a roundabout answer and it really depends on your use case, but as soon as you go beyond one cloud and simple use cases, that's where Vault really shines. So just to summarize there, I think what you're saying is it can be a workflow problem, right? If you're looking for consistency in how you use these secrets between multiple environments, Vault shines really well for that. If, as an example, you're an AWS user and you're deep in AWS management already, it doesn't necessarily make sense to rip that out and drop Vault in. But if you wanna have Azure, AWS, GCP, and vSphere share a consistent secrets platform, Vault becomes a really good way to go. Is that accurate? Honestly, I wouldn't oversell the complexity side. If you're a small startup and you just have a few apps running in AWS, why wouldn't you use the AWS secrets manager? Or, to be 100% realistic, you're probably just hard-coding it; you're probably not even using the secrets manager. Vault really shines when you want auditing, logging, and more advanced stuff like dynamic credentials that are short-lived with a TTL. Vault just handles it all for you behind the scenes. Let us handle the secrets side and you can focus on your app; that's the pitch I'd give there. But a lot of companies aren't using 10 clouds, realistically, right? They're maybe using on-prem plus a cloud provider, or maybe they're totally born in the cloud and they want another cloud provider as a backup, right?
So Vault is gonna save you so much time even in simple use cases like that, where I have one cloud and another cloud provider, in that you're not duplicating workflows, you're not duplicating "how do I secure it over here and over here." You just use Vault as an abstraction layer and it'll handle it all for you. Awesome. Another question that came through: is there a way to increment the usage limit count for wrapping tokens? It looks like when you set a limit for how tokens are wrapped, can you increase that limit? I'm not sure I actually understand the use case there, so I'm gonna have to skip that question because I don't know enough about it. Actually, I guess I can open up the Q&A panel here too. Sorry, I know there's a bit of a lag because I'm reading the questions here. There are a few on Consul there, Cody. Yeah, I believe Blake is actually answering one of those in the chat. Okay, perfect. So there's a question in here about, hey, my developers have changing environments and I have to give them access to those instances; how can I manage that with Vault? There are probably multiple things happening here, and this is probably specific to your particular environment, but I can see two things. One is, hey, I want to provision infrastructure. That's probably an infrastructure automation thing that you wouldn't necessarily handle with Vault, but maybe through Terraform or something like that. For the secrets side of things, hey, I have a bunch of secrets and I want to share them with a particular set of developers. Vault has a user interface, and Vault also has a command line and an API; you can interact with it basically however you want.
If you put your passwords inside of it, you can set up policies to say, hey, these particular users can access these particular sets of secrets. So from the secrets aspect, that's easy to do with Vault, but there's probably an infrastructure component to the question that I'd need to understand more of. I would say that's similar for me. I know there's a question from Daniel around the data plane and the complexity of managing a service mesh. I need to understand a little more about what's being asked there. There are certainly aspects that can be complex initially, at least getting started, but especially in Kubernetes, a lot of the interaction is taken care of through the annotations that Justin showed in his demo. As far as that goes, when you move outside of Kubernetes there's a little bit of a different workflow, creating the service definition files and registering services, but oftentimes that can be automated through the API or through the way services register as they come up on virtual machines. It's honestly much easier on Kubernetes to get started with the registration of services because the sidecar model takes care of all of that for you. From a Consul service mesh complexity standpoint, on Kubernetes it's actually very simple to consume and get started with; I don't see a lot of complexity there. I don't want to say it's more complex outside of Kubernetes, it's just a little bit of a different workflow when you start interacting with virtual machines versus what's in Kubernetes. So, Daniel, if I don't answer that very well, feel free to follow up with more questions in the chat; I just need a little more unpacking to understand. So there's a question here about, hey, I have four different environments, prod, QA, dev, et cetera, and a few different data centers, and how does this basically work?
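On the policy mechanism just described, a minimal Vault policy granting one set of developers read access to one set of secrets might look like this; the path and naming are assumptions for the sketch, not from the webinar:

```hcl
# Illustrative Vault policy: read-only access to one team's secrets.
# Attach it to users or groups via an auth method
# (for example a Kubernetes role, LDAP group, or OIDC claim).
path "secret/data/team-a/*" {
  capabilities = ["read", "list"]
}
```

Anyone logging in through an auth method mapped to this policy can read under `secret/data/team-a/` and nothing else, which is exactly the "these users can access these secrets" pattern mentioned above.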
So I'll speak to the Vault side because that's obviously the one I'm most familiar with. You can have multiple clusters for each of these environments, but maybe I'll just focus on the prod use case. I should probably have mentioned in the demos that Consul and Vault are just binaries. You can install them without using Kubernetes; they're not Kubernetes specific or anything, but obviously we have the Helm charts that make this super easy if you want to go that route. On the Vault side, if you're running Vault in production across multiple data centers, that's where Vault Enterprise would come into play, in that you set up a Vault cluster and then you can have replication, where you basically replicate those secrets over to another cluster. If you just type Vault HA DR into Google, you'll find it right away. Awesome. Answering the other two that are in here, the question around routing based on Envoy, and I saw you come on webcam, Blake, so feel free to jump in if you have feedback. With some of the later stuff that Nicole presented, we can use Layer 7 routing to route based on a header value, and we can do some interesting traffic splitting. If you wanted to do canary testing, you could send specific traffic to a new instance based on an incoming header, or even based on device type, just as random examples. Those are all things that are possible, and a lot of that's because we're integrating so tightly with Envoy. So that's definitely possible, definitely stuff you can do, and things that we would really encourage. You'll start to see a lot more about that coming out around Consul, especially now that we've created our own ingress gateway to speed up getting started. Those are things that we really want people to be able to do.
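The header-based canary routing described above can be sketched as a Consul service-router config entry. The header name, service name, and subset below are made-up examples, not from the demo:

```hcl
# Illustrative service-router config entry: requests carrying a canary
# header go to the canary subset of the api service; everything else
# falls through to the default instances.
Kind = "service-router"
Name = "api"
Routes = [
  {
    Match {
      HTTP {
        Header = [
          {
            Name  = "x-canary"  # assumed header name
            Exact = "true"
          }
        ]
      }
    }
    Destination {
      ServiceSubset = "canary"  # subset defined in a service-resolver
    }
  }
]
```

The "canary" subset itself would be defined in a companion service-resolver entry, typically keyed off a service metadata tag or version label.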
So it's definitely in the product now; you can do that today. Absolutely. Anything to add, Blake? Yeah, we just added support for advanced routing for both HTTP and gRPC traffic. So if you head over to the Consul website and look at the L7 routing section, there's a whole set of documentation on what we support. Answering the next one, from Christopher: intentions can span multiple data centers. In Consul, we refer to this as federation. So if you take multiple Consul clusters and federate them together into that one consistent service mesh between environments, you can have those intentions span across it. The intention looks like "front end can talk to API," but Consul itself understands, okay, I'm going to send this over a mesh gateway, I'm going to send this traffic to that endpoint, but the intention is still kept in place: that security, that mTLS encryption, is maintained, and the intention is enforced. So the intention itself is actually fairly simple. For the upstream creation, like Justin showed in the Kubernetes YAML file, you do end up specifying that secondary data center, so you say API, data center two for that upstream, and then the port, but that's pretty straightforward. The intention is just "here can talk to there," right? Consul understands how to do that translation downstream. Just to add to that a little bit, I think the question was around whether it can span multiple service instances. In Consul, when you register a service, you're registering a service that could have multiple instances. So maybe you have an API tier with multiple API workers; those would all be represented in Consul as a single API service, and the intention would apply to that group. You don't have to individually reference the individual instances. I'll take the next one here: how can I ensure a secret fetched from Vault by one application doesn't get leaked to any other application?
So I just wanted to share my screen for a second. In this blog post, we basically talk about the sidecar injection demo that I walked through. There's a video; you can easily Google this and read through it yourself, but I just wanted to show you the piece that makes this happen. In that annotation piece, I'm saying, hey, I want to use this role when I'm downloading the secret. And this role links back to a Kubernetes service account, which links back to a Vault policy. So there's a chain of secret custody, if you will, as you're asking from the pod into Kubernetes into Vault, hey, how do I actually grab this secret? And so we're hitting that policy that says, yeah, this service account is allowed to read these secrets. That's how we enforce that this particular pod can't download all the secrets. But there's a larger thing happening here in that, yeah, we're injecting this secret into the file system. We're making some assumptions that you know what's running inside your pods and what's accessing this particular secret. There's a higher-level point here, in that we want to give you a spectrum of options you can use with Vault. We know that, hey, this maybe isn't the best option for your particular use case, right? So we want to give you access to the API, we want to give you secret injection, we want to give you a variety of options that fit both your security requirements and your use cases. Obviously the most secure way of doing it is connecting directly to the Vault API, in that you have basically a direct path, but we want to give you a variety of options. So if you check out that blog, it walks you through the chain of how things are connected.
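That chain, service account to Vault role to policy, is typically wired up with a couple of Vault CLI calls along these lines; the role, service account, and policy names are assumptions for the sketch:

```shell
# Enable the Kubernetes auth method (once per cluster).
vault auth enable kubernetes

# Create a role binding one service account to one policy.
# Only pods running as the 'frontend' service account in the
# 'default' namespace can log in with this role and get a token
# scoped to the 'frontend-secrets' policy.
vault write auth/kubernetes/role/frontend \
    bound_service_account_names=frontend \
    bound_service_account_namespaces=default \
    policies=frontend-secrets \
    ttl=1h
```

The `vault.hashicorp.com/role` annotation on the pod names this role, which is how the injector ends up with a token that can read only that pod's secrets.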
But this ensures that a particular app can't pull down all your secrets or something like that. Just a heads up to everyone: I don't see any more questions in there, so if you have anything else, definitely pop it in before we finish off. Justin and Nicole both said this, but something to call out that I think is really interesting and cool about all of this is that most of it you can go and do right now, right? You can go pull this stuff down and give it a try; you don't have to buy anything. It's all open source software. Are there some really advanced features around scaling and enterprise that are behind the license? Sure, but significant portions, all of the stuff that was demoed today, are fully open source. So you could literally go pull down these binaries and run them on your laptop right now and do most of the stuff that you've seen here, whether it's in Kubernetes or not. A lot of that functionality is already there, so I would highly encourage people to go and do that. HashiCorp tools have a great way of doing what we call dev mode, where you just run it locally on your system. So I'd highly encourage anybody who wants to check the stuff out to check out the UI, click around, pull down the binaries, and run them. It's really, really easy, and a lot of the learning guides are based on being able to just run locally on the fly. Yeah, 100%. And there's a massive community behind it too, right? If you run into a problem and you go to Google with the error message, chances are you're gonna find one of our learn guides or the forums or GitHub, where someone's already run into the same thing. So it's not like you're going at this on your own. Yeah, so there was a question about whether there are plans to allow Consul to be configured via Kubernetes resources or an operator.
It's definitely something that we're looking to do within the next release: to have CRDs to allow you to manage both Consul intentions and Layer 7 routing configuration. I just want to be cognizant of time. I'm not sure if there are any other questions. Someone mentioned, hey, what's the pricing of Vault and Consul, say OSS versus Enterprise? If you just Google Vault OSS versus Enterprise, it'll come up right away, and there'll be a matrix of what's on the OSS side versus Enterprise. So, super easy. And there are a couple more there, but I think that's all we have time for today. Reminder to everyone that this is going to be up on the CNCF website after this, cncf.io/webinars, so if you want to refer back to it, or if you want to ask the speakers questions, I'm sure that they're open to that. Any contact information you all want to share for future questions? For mine, you can just find me on Twitter. Yes, I think all of us are on there; Blake's probably more active than any of us. Awesome, Blake's over on Twitter too. So thanks everyone for joining. And I already mentioned the recording. So thanks for the great presentation.
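To try the dev mode mentioned above, one command per tool is enough to get a local playground. Both run in the foreground with in-memory state, so they're for experimenting only, never for real secrets:

```shell
# Consul dev agent: single in-memory node, UI at http://localhost:8500
consul agent -dev

# Vault dev server: starts unsealed, prints a root token to stdout,
# UI at http://localhost:8200
vault server -dev
```

Run each in its own terminal, then point a browser at the UIs to click around, which is also how many of the learn-site labs start.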