Thanks everybody. Thanks for joining. I'm going to be talking about building apps in Kubernetes. And we've got a few demos to run through, so I'm going to try not to spend too much time on slides. But I think it's important to acknowledge the importance of keeping credentials safe. The recent Twitter breach is a great example, where some folks got access to an internal Slack channel where tool credentials were posted, actually pinned in the channel. And that gave them the credentials they needed to do the bit of mischief that they did last week. We've seen similar with Tesla, where it was actually a Kubernetes console that had been configured to not require any credentials to access. Someone had put AWS access keys in Kubernetes, and the hacker was able to see those secrets through the browser, through the Kubernetes console, and then copy those secrets and use them. So the point is, how do we keep credentials safe? We can't actually build applications without secrets. But how do we keep those secrets out of the hands of people who would do mischief with them? We often see some tension between the application development teams, the DevOps groups, the cloud teams that are trying to move quickly to deliver business value, and the security teams who are trying to keep some controls in place. And this friction sometimes bears out in either development going off and doing their own thing, a kind of rehash of the shadow IT that we've seen recur in different forms over the years, or the security team owning everything and becoming a bottleneck. In neither case is that a win-win scenario. 
What we really want to do is enable that DevOps collaboration between the security teams and development teams: enable developers to be secure as transparently as possible, relieve them of the reporting burdens that security is used to dealing with in terms of audit, risk, and compliance, and empower the security team to deliver a service or capability that developers won't find cumbersome and, more importantly, won't find a bottleneck or an impairment to their workflows. So the good old shift-left mentality is alive and well here, and this is what we're enabling folks to do. We want to make development secure, but we want to do it with the oversight of the security team. And so it is a shared responsibility. We have itemized these best practices. The first step is absolutely getting hard-coded secrets out of your applications. Most people have gotten that memo and have at least taken steps to do that. But of course, as soon as you start removing secrets, you need a place to put them. And so there is a tendency to put secrets into the most at-hand place. If you're in AWS, you might look at AWS Secrets Manager. If you're in Azure, you might look at Azure Key Vault. If you're in Kubernetes, obviously you're going to look at Kubernetes secrets. We call these security islands. They're little pockets of security, and they may be okay in and of themselves. But this audit, risk, and compliance practice really requires some insight. The security team needs to understand how credentials are being used and how they're being secured, and that's very difficult when you have multiples of those islands. Now, in this time of COVID and of very porous network boundaries, it's never been more obvious that perimeters don't work anymore. So identities really are the only way of securing privilege. 
So we need to create identities for absolutely everything, especially anything that is going to be accessing a sensitive system. We need to authenticate it strongly and limit its scope, so it's authorized to access only what it needs to: the principle of least privilege. And we want to eliminate this problem that I'll speak more to, around secret zero: how do we secure that bootstrap credential that applications need to kick this all off? Credential rotation is a fundamental best practice for ensuring that if a secret gets compromised, and you have to assume that credentials will at some point get compromised, rotation effectively nukes that secret in such a way that it's no longer of any use to whoever has it. So aggressive, regular rotation of secrets, especially secrets being used by applications, is critical. But that also raises the question: how do we do that in such a way that applications aren't disrupted, that applications' connectivity and ability to connect to back-end systems isn't impaired? Because the last thing we want is for application downtime to be triggered by good credential management. So applications always have to be able to get their secrets. They always have to be able to connect to back-end systems. But we want to do all that securely. I've spoken to this issue of security islands, but this is the current state we see most organizations in, where they have homegrown solutions, or they're using capabilities of the platforms that they're running in to store secrets. We'll talk a fair bit today about Kubernetes secrets, some of the challenges with that, and some of the things we can do to help mitigate those risks. But this is the current state that we're seeing most organizations in. And we're coming to market as a security company, putting security first, but providing that. So I like to say we're a security company that gets DevOps. 
And so we're empowering the security team to do that governance, risk, and compliance reporting, but not getting in the way of the development workflows. So we want everything to have an identity, whether it's a person or a process. We want strong authentication for all of that. We want to authorize with least privilege, so we're only granting access to what the identity needs and no more than that. And then we obviously want to audit everything. All activity needs to be audited so that if there is an issue, we are able to detect it as quickly as possible, and certainly do that sort of postmortem analysis to understand what happened or what identity went rogue. The secret zero problem is unique to applications. Humans have a built-in vault where they can mostly remember their own passwords, or at least answers to security questions. But for the nonhumans, where does that bootstrap password go, or that token, or that cert, or whatever that credential is? Where do you store it in such a way that the application can get it but nobody else can? How do you secure it but still leave it accessible to the application? We call this the secret zero problem, and we've devised ways around it. This is often one of the first questions I get when I'm talking about this space. People have wrestled with this, and it is a difficult problem. There are basically two ways to do authentication. One is credential-based. In the human world, this is your passport or your driver's license. It's a thing that you have that vouches for your identity, that says you are who you say you are. In the application world, we have API keys, we have tokens, we have certs. These are all things that have to be stored somewhere, and they create that secret zero problem. These credentials can be stolen and used to impersonate an identity. And so a stronger way of doing authentication is to use attributes that can be validated with a trusted authority. 
If we think about biometrics in the human world: if I have my fingerprints on record, or if my retina scan is on record, then when I go to the airport and I go through the CLEAR kiosk and I present my fingerprints, they can be compared to my fingerprints on record. And that's a much stronger way of authenticating myself. It's much harder to steal fingerprints. And so we want to use this same type of approach with applications. But we need a way to verify these attributes. So the idea is that we are going to allow-list. We are going to pre-enroll or pre-define identities, along with the attributes that will be used to validate them. And then at runtime, when that request comes in, we can say, is this an identity we know? If it's not even on the list of allowed identities, we can reject that request outright. If it is on the allowed list, then we can call back to the platform to validate that identity. And this is the approach that we take in Kubernetes, as well as on the cloud platforms, and even with some tools, where we can look at each of these as a trusted authority to understand and know what's running in it. And we can use the attributes of a Jenkins job, of an IAM role in AWS, or of the metadata tokens in Azure as platform attributes to validate these actors, these pods or these applications. Now, I'm going to be specifically talking about the open source solution Conjur. CyberArk Conjur is an open source vault for storing and retrieving secrets. It is available at conjur.org. There's lots of content there, lots of good blog content, talking about the secret zero problem and various aspects of secrets management for applications. The APIs are well documented there, and we've got just a ton of content. We'll be referring back to that. So we're going to be talking about secrets management in the context of open source Conjur. The workflow here is that you authenticate using some strategy. 
So we support multiple different strategies for different platforms and different use cases. However authentication happens, successful authentication results in us issuing a short-lived access token, a JWT. This is a token that has an eight minute time to live. Basically, it's a bearer token that can be used to retrieve secrets. The secrets are retrieved based on authorization per policies. So we authenticate to validate the identity of the application. That identity is constrained to access only the things that it's been allowed to access. And assuming it makes a request for a secret that it has access to, it can retrieve that secret and use it. That secret could be a certificate, it could be an SSH key, it could be a password, it could be a token. Basically any binary value that we want to use for credentials can be used to connect to these target systems, these back-end systems. At the end of eight minutes, though, that token will expire and the application has to re-authenticate. And this will play into some of the use cases that we'll be demoing here shortly. Because when that access token expires, you've basically lost access to secrets. And given that we want applications to always have access to their secrets, there are certain things that have to be done to address that. So let's dig into Kubernetes authentication in the Conjur environment a little bit more. This is elaborating on that workflow that I talked through a minute ago. Basically, the application identity is allow-listed, and it's defined, or its attributes are defined, in terms of the cluster and the namespace that it's running in. So we effectively give an identity to the cluster, and of course namespaces are native in Kubernetes. And so these would be ways of validating an identity. Now, this means that applications running in the same namespace would share the same identity. And sometimes you want to go more granular than that. 
So we also give you the ability to add a Kubernetes service account as an attribute that can be validated for that identity. So the identity is just a friendly name, but these attributes are annotations on that identity that we can use to validate it at runtime. So the identity gets defined via policy, gets loaded into Conjur, and that defines the identity along with its attributes. At runtime, a helper container, running either as a sidecar or as an init container, will do what is effectively a SPIFFE workflow. This is where the authenticator is going to format a certificate signing request. The ultimate goal is to create a mutual TLS connection with the Conjur server. The authenticator submits that certificate signing request with metadata attributes from the pod that can be used to validate that pod with Kubernetes. So when that request comes in, Conjur will parse that CSR and call back to the Kubernetes API to validate those attributes. If those attributes are for an identity that's known and they check out with Kubernetes, then we will issue, well, actually we will issue a cert and a private key that can be used as credentials for authentication over that mutual TLS protocol. And then that authentication gives us the access token. If you're familiar with SPIFFE, this is basically that same workflow. And in fact, the certificate that is issued, the credentials that are issued here, contain a SPIFFE SVID. So we're very bullish on SPIFFE and the whole idea of defining identities for workloads, not for infrastructure. We want to authenticate workloads, not the infrastructure that they're running on. SPIFFE is a project under the umbrella of the CNCF, and they're doing really great work around how you establish strong identities and strong authentication for applications. 
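To make that concrete, here's a minimal sketch of what such a policy definition might look like. This is an illustration, not a copy of anything from the demo: the host id, namespace, service account, and container name are all hypothetical, and the annotation keys follow the authn-k8s convention, so check the Conjur documentation for the exact form your version expects.

```yaml
# Hypothetical Conjur policy sketch: a host identity for a pod,
# validated at runtime by its namespace and service account.
- !host
  id: test-app
  annotations:
    authn-k8s/namespace: test-apps
    authn-k8s/service-account: test-app-sa
    authn-k8s/authentication-container-name: authenticator
```

The host id is just the friendly name; the annotations are the attributes Conjur calls back to the Kubernetes API to verify before issuing credentials.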
So we're basically using that workflow, where the authenticator is the client and the other party is the Conjur server, and using that SPIFFE workflow to create a SPIFFE SVID, a SPIFFE Verifiable Identity Document, which is that X.509 cert. So that's a bit about that. So we're on to the demos now, which I think is the more interesting part of any presentation. Feel free to ask any questions if anything wasn't clear or if anything I went over too quickly. We're basically going to go through some examples of how authentication works and various ways of retrieving secrets. We've got several different demos here; call them labs. This is actually set up to be a multi-user lab, if anyone ever wanted to run a clinic or attend one of our workshops. And we're going to walk through several different ways of retrieving secrets that are supported by Conjur. So sometimes people just want an API, and a lot of times developers are just saying, where's the documentation for your APIs? Well, it's here. If you go to the developer section, here are our REST APIs and all the stuff for how to retrieve secrets and how to authenticate. So it's all right there. There's no gate on it, and you can go look at it at your leisure. So what we're going to show is how to pull database credentials via the REST API for the app to connect to a database. Now, I don't actually have a database to connect to except for the last example. So we're just going to show retrieval of the secrets and echoing of those secrets in these first three labs. But to get on with that, this is my demo environment. We do a kubectl here. I'm just running with Docker Desktop Kubernetes, which is hugely convenient. I used to use Minikube a lot, but now that Kubernetes is in Docker Desktop, I seem to only use that anymore. I do a get pods here in my test-apps namespace, and what I'm going to do is just alias that so I don't have to keep typing it, and so you don't have to watch me type it. 
Now I can say kgp, and that's much simpler. So you can see these applications have been running for a while. I'm going to first walk through the case where that helper container, the authenticator client that initiates that SPIFFE-based authentication workflow, is running as a sidecar. I can exec into the application container using this handy little script. Now I'm in the application container, and I can run the script which simulates what an application would do using our REST API. So here's the REST call, basically this call here to get secrets. This notation doesn't include the URL, but you can see that here's our URL and the endpoint for getting a password. Basically we're doing that here, using some environment variables. The authenticator will drop that JWT in a shared-memory volume, so this application container has access to the token at this location. If we want to look at it, it's actually in /run/conjur/access-token, and so there is my JWT. This is running as a sidecar, and this token will be refreshed every six minutes. So the authenticator stays running, and it's continually refreshing this token every six minutes, so it never goes stale. I always have the ability to run my application to retrieve secrets. So when I run this application, it picks up the JWT, base64 encodes it, and trims the control characters out of it. It URL-encodes the name of the variable, because the variable name has slashes in it, basically converting those slashes to %2F. And then we make our call to retrieve the secret, get the value, and echo the value. So that'll happen there. I can go back in and edit my application. I'll make air quotes when I say application, because it really is just a bash script. But I can say username here and retrieve the username just as easily. So now I've got the username. So we'll see this in the next couple of examples. 
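To give a feel for what that script is doing, here's a self-contained sketch of the token and variable-name handling. The token contents, temp file, account name, and server URL are all made up so the snippet runs standalone; only the encoding steps mirror what the demo script does.

```shell
#!/bin/sh
# Fake an access token so this sketch is self-contained; in the demo the
# authenticator sidecar writes the real JWT to a shared-memory volume.
TOKEN_FILE=$(mktemp)
printf '{"protected":"...","payload":"...","signature":"..."}' > "$TOKEN_FILE"

# Base64-encode the token and strip control characters, ready for the
# Authorization header.
TOKEN=$(base64 < "$TOKEN_FILE" | tr -d '\r\n')

# URL-encode the variable id: the slashes become %2F.
VAR_ID=$(printf '%s' 'db/password' | sed 's|/|%2F|g')
echo "$VAR_ID"

# The retrieval call would then look roughly like this (URL illustrative):
echo "curl -H 'Authorization: Token token=\"$TOKEN\"' \\
  https://conjur.example.com/secrets/myorg/variable/$VAR_ID"
```

The first echo prints `db%2Fpassword`, which is the encoded form the REST endpoint expects in the variable path.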
The Oracle DB user is the username. Here's a good strong password with upper and lower case letters, numerals, and special characters. This would be the thing that we would want to rotate, but now we're dynamically retrieving it. It's not part of the application; it's being dynamically retrieved from the service. The identity of this pod is being very strongly authenticated using that SPIFFE-based authentication protocol that we walked through. We have the access token here, and the application can pick up that access token and use the REST API, or use any of the client libraries that we have, because there are other ways. We have Java, Go, Ruby, and .NET wrappers for the REST API, effectively. Under the covers, everything's a REST call; these are higher-level bindings for the languages you may be using. But that means that your applications can always pull secrets. And given that the sidecar is running there, that token is always going to be there, and fresh, and usable to retrieve secrets. So that's our first example, where we've got an application using the API to retrieve secrets. And it would simply use that Oracle database username and password to connect to the database. The second example now uses another open source project that CyberArk sponsors, called Summon. Summon is a hugely useful tool. It solves just a ton of problems. It is that level of indirection that solves so many problems in computer science. Summon will retrieve secrets and then call an application with those secrets populated in environment variables or in memory-mapped files. The goal is to keep the secrets ephemeral but not require the application to know how to authenticate or how to retrieve secrets. In other words, the application is kept blissfully unaware of where these secrets are coming from. 
And so that means that you may be pulling secrets from different places in different environments, so the application can stay immutable. The application's configuration doesn't have to know anything about where it's running. The secrets are simply injected into its environment by Summon. Summon will call a provider, and it's a plug-in architecture. We have providers for keyrings, for S3 buckets, for lots of different back-end systems. So this creates a level of abstraction where you can pull secrets from different back-end systems and provide them to an application. The application doesn't have to know how to retrieve them, and it doesn't know where they're coming from. That way, for the application in dev, maybe you're pulling secrets from a keyring. In test, maybe you're pulling them from an Azure Key Vault. And in production, you could be pulling secrets from a production vault. So just hugely valuable. So we're going to use Summon in a Kubernetes application, where the authenticator is running as an init container now. Summon starts up the application, and typically Summon would be your entry point for the pod, where Summon would pull the secrets and call the application. Then the application is off and running with its secrets, so there's never an opportunity for the application to retrieve secrets once it's started. So this lends itself to that init container pattern. And if we go over here to my environment and look, we've got the init container here. Now look, it's been running 79 minutes. And given that the init container ran the authenticator, we may have an issue with our JWT, because we've already established that it only lives for eight minutes. So if I go into this environment, we can see that I've got a JWT over here. But that JWT is suspect. 
And so when I run Summon, just to give you a little bit more of an example of how Summon works: Summon will look by default for a local file called secrets.yml. And this describes the names of the secrets to retrieve. It doesn't say what provider to use, and it doesn't say what back-end system they're coming from. The contract of a Summon provider is that it takes the name of a secret in and returns the value of that secret. In this case, I'm using the Conjur Summon provider, which is going to use that access token to retrieve the secret with this name and place it in an environment variable with this name, for the username, and likewise for the password. We can see this work if I say summon env and then grep for DB_. But it's not returning anything. If I say summon env without grepping, we can see why: I've got an invalid access token. So what I need to do is just go bounce that. This is the upside and the downside of using an init container in this scenario. There is a potential issue if the application should ever want to go re-retrieve secrets. First off, we've kind of built in the fact that it doesn't know how to retrieve secrets. But if the application is ever going to get secrets again, it has to be restarted. And we can see now we've got a new init container running here. I'm going to exec into that. And now if I say summon env and grep for things beginning with DB_, we've got a little bit happier path. We see that same Oracle database user and that same strong password here. And now I can use that to call a very simple application, which could then connect to a database. Whoops, that's not the one I wanted: webapp-summon. So here now, what could be simpler? I'm simply echoing these environment variables. If I run this by itself, it doesn't have anything to show, because there is nothing in the environment that has DB_ in it unless I run Summon first. 
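For reference, a secrets.yml along those lines might look like the following sketch; the Conjur variable paths and environment variable names here are illustrative, not the ones from the demo.

```yaml
# Hypothetical secrets.yml for Summon. Each key becomes an environment
# variable; the provider resolves each !var path to a secret value.
DB_USERNAME: !var db/username
DB_PASSWORD: !var db/password
```

You'd then run something like `summon -p summon-conjur ./webapp`, and the variables exist only in the environment of the child process, for its lifetime.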
So I can say summon webapp-summon, and now the application has access to those credentials. But as soon as the application exits, those credentials disappear. They are completely ephemeral. And the cool thing is, Summon can also pull secrets into memory-mapped files. So if you have SSH keys or certificates or even configuration files, you can store and retrieve those as dynamic, in other words non-persistent, files. What Summon will do is put the secret in a memory-mapped file, and the environment variable holds the path to that file. So you still retain file system semantics, and that's a very cool thing. Summon is actually our most active open source project; I was told that by Jerry, who runs our integrations and open source team. And it's for good reason: it's just enormously useful. It's especially useful for doing integrations with tools that can consume environment variables, and for which it would be very hard to add REST calls to pull secrets. So we use this a lot where we don't have native integrations with the myriad CI/CD tools that are out there. Many of them can read environment variables or files, and we can use Summon to populate those and still keep secrets ephemeral. So a big, big advertisement there for Summon. But of course, Summon has to be baked into the application image. I was doing a POC a while ago and someone said, why don't you just push them to Kubernetes secrets? We've got all these applications that are already using Kubernetes secrets. Why don't you give us the option of using Kubernetes secrets, but address some of the issues around Kubernetes secrets? So this is, again, where the authenticator is going to run as an init container. But what we're going to do is dynamically populate a Kubernetes secret. And this is kind of the best of both worlds, and it has proven to be pretty popular. 
It addresses some of the acknowledged risks that Kubernetes secrets have. And hopefully this is all first-hand knowledge to you all on the phone, but there are real security issues here. First off, Kubernetes secrets are encrypted at rest in etcd only if you set it up that way. You have to enable encryption at rest for the secrets in etcd to be encrypted. Second, and this is the thing that is probably the most egregious: version managing is mandatory. You always want to version manage your stuff, right? Version manage everything is kind of DevOps 101. But now you've got a manifest that only base64 encodes the username and password, and you check that into GitHub. So now somebody has very easy access to those credentials. Anybody that can read your GitHub repo can now go through and easily base64 decode your Oracle database username and password. This is the problem that we're most able to address: applications protecting the value of the secrets. Now, this is a little bit of foreshadowing for the Secretless solution that we're going to show in our fourth example. Because once applications get the secret, you don't know what they're going to do with it. They could leak it in a log. They could exfiltrate it for nefarious purposes. And then there are the users that can access a secret. For applications, we can address this. Users and anyone with root permissions, that is something that your own native security discipline has to address: keeping people from being root. We are fond of saying that once they're root, it's game over. There's really nothing you can do once somebody is root, because they can do memory scans, they can access keychains. This is a big part of our core business: keeping people from being root on any system they're not supposed to be root on. 
And if they are root, we know who they are and we know what they're doing. But the user creating a pod also has the ability to look at that secret. So, foreshadowing a little bit: we'll come back to this when we talk about Secretless. But I want to show how we address this concern, because I think the most common experience developers have is that they do the right thing. And I'm going to bet $100 there's at least one person listening to this webinar that has experienced this, where they did the right thing. They put their credentials in a file. They version managed their file. Suddenly somebody had access to those secrets. And, you know, it's just the way things happen these days. Fortunately, GitHub has started adding hooks where they will alert you to the fact that you may have just checked in some credentials. But as far as I know, Kubernetes secret manifests are not flagged, because the values are just base64 encoded; they're not obviously credentials. And so this is something that we want to fix. What we want to do is get those base64 encoded values out of the secret. We want to dynamically bind the Kubernetes secrets. We want to keep the Kubernetes secrets, and we want the application to use Kubernetes secrets natively. But we don't want that manifest to be checked in with those credentials intact. The way we do this, and I'll have to go find my manifest, is by giving you the ability to define a secrets manifest without the credentials. Okay, secret template here. So this is the manifest that we're using, and this is what gets checked into GitHub. We can see we've got our Oracle database username and password, the name of the secret here, and we've got this annotation here. Basically, this is a YAML array, and it looks kind of like that secrets.yml file that Summon used. So the idea is very similar. 
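As a sketch, such a checked-in manifest might look like the following. The secret name and Conjur variable paths are illustrative, and the exact key the Secrets Provider looks for may differ by version, so treat this as the shape of the idea rather than a verbatim config.

```yaml
# Hypothetical manifest, safe to commit: no secret values, just a map
# from secret keys to Conjur variable paths for the provider to fill in.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  conjur-map: |
    username: db/username
    password: db/password
```

Nothing here is sensitive; the init container patches the actual base64-encoded values in at runtime.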
The secrets provider container is an init container that will do that initial authentication in order to retrieve secrets. But it will have a directive pointing to this Kubernetes secret, and it will look for this annotation, iterate over it, retrieve the value of each secret, and patch the Kubernetes secret with the base64 encoded value of that username and that password. So if I go up to my environment here: now, this is a great use case for the init container pattern, because it's going to instantiate that Kubernetes secret and then exit. The application has access to the Kubernetes secrets, just like native Kubernetes secrets, but they're dynamically instantiated. When that pod exits, or when you delete that deployment, those secrets are gone. So the point is, we're never checking in base64 encoded secrets. This value here, the name of the database, is not a secret, presumably. If it were, we could also store it as a secret. But in this case, we're just saying that's not a secret; it's really those access credentials that are. So I'm going to exec into my injector. That's the way I did it. Yep. And I can walk through the manifest if anybody wants to see how this is done. But basically, I have mounted the Kubernetes secrets as both. Actually, let me do this. Let me do a kubectl edit secret db-credentials, in the test-apps namespace. This will just show you the effect. So remember our manifest: the username and password, and here is our map down here as an annotation. But now we've got the username and password here as base64 encoded values. If I take that, echo it, and pipe it to base64 --decode, and let me just add an echo to get a line feed in there, I've got my username back. So that was my base64 encoded username, but it was dynamically patched. That secret didn't exist. 
It existed initially only without the base64 encoded values. The secrets provider iterated over that conjur-map and instantiated them. So when I go into my environment now, they can be mounted as either environment variables or as volumes. If I do an env and grep for, actually, I mount them for consistency, I think, so grep for DB. There are my Oracle DB username and password mounted as environment variables, with the same environment variable names that we've been using in the other examples. But they're also mounted as volumes. And we would always recommend mounting them as volumes. Environment variables are much easier to discover from outside, so we would always recommend that you mount secrets as files and access them as files. And that's basically what this example does. Let's see, where did I do that? I guess I don't have a great example here. Oh, yeah. So my webapp-summon now. So the simple application that simply uses those environment variables can simply run, but now we're using Kubernetes secrets. We don't have to use Summon to retrieve them, and we don't have to bake Summon into the application image. And in another demo, I've got one that actually reads the file and uses the file versions. But in this case, these environment variables are populated by mounting them from that Kubernetes secret. So this really gets at this aspect of it: we're dynamically binding values retrieved from Conjur into those Kubernetes secrets, patching those Kubernetes secrets. From the application's perspective, it's just a Kubernetes secret and can be used as a Kubernetes secret. And then when that pod exits and you delete that Kubernetes secret, it's gone. The real point is nothing's being checked into GitHub. There are no secrets being checked into GitHub in any form, whether plain text or base64 encoded. These other issues remain, so this is just good discipline in setting up your cluster. 
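Part of that cluster-setup discipline is the encryption-at-rest configuration mentioned earlier. A minimal sketch, passed to the API server via the --encryption-provider-config flag, looks like this; the key material below is a placeholder you would generate yourself.

```yaml
# Sketch of a Kubernetes EncryptionConfiguration that encrypts Secrets
# at rest in etcd. The aescbc provider encrypts newly written Secrets;
# identity is the fallback so existing plaintext Secrets remain readable.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}
```

Without something like this, anyone who can read etcd can read every Secret in the cluster, base64 notwithstanding.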
This is just good security discipline. But let's talk about these couple of things, because we use this example here. You can vault things in storage, you can encrypt things on the wire, but as soon as the application gets that plain-text secret, you really don't know what it's going to do with it. And so we see this as a general issue: all our efforts may be for naught if the application is irresponsible. And so what we have devised is a solution called Secretless. It's basically using a proxy connection so that the application never gets the secret. The application wants a connection to the database, or it wants a connection to a web service, or it needs to run a script on a remote server over SSH. We want to give the application the ability to do that without giving it the keys necessary to do it. So we do that with a proxy, where the proxy is running as a sidecar, and the proxy is the thing that actually retrieves the secret, establishes the connection, and brokers that connection for the application. So the application never gets the secret. The application still has to do its own authentication for users and things like that. But as far as connecting to the backend systems, the application simply gets the connection that it's authorized to get. And so if the identity that this pod is running as successfully authenticates and is authorized to connect to a database, it will get the connection to the database, but the application never sees those database credentials. They stay within the broker and therefore can't be leaked. So you're still suspect, as we said: once you're root, you can do anything. So keep people off root, but barring that, we've addressed a lot of these issues, where the applications don't have access to the secret and can't inadvertently leak it in a potentially irresponsible manner. 
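As a conceptual sketch of that brokering idea (this is not the Secretless API or code, just an illustration with made-up names): the application builds a bare request with no credentials at all, and only the broker-side function, which fetched the credential, attaches it on the way out:

```python
import base64

# Held broker-side only; in Secretless this would be fetched from the
# vault at connection time. Values here are fabricated for the sketch.
BROKER_SECRET = {"user": "svc_app", "password": "s3cr3t"}

def app_request(path):
    # The application builds a request with no credentials at all.
    return {"path": path, "headers": {}}

def broker_forward(request):
    # The broker injects basic auth before forwarding to the backend;
    # the app never sees the username or password.
    token = base64.b64encode(
        f"{BROKER_SECRET['user']}:{BROKER_SECRET['password']}".encode()
    ).decode()
    request["headers"]["Authorization"] = f"Basic {token}"
    return request

req = broker_forward(app_request("/index.html"))
print("Authorization" in req["headers"])  # True
```

The design point is the trust boundary: compromising the application process yields a usable connection, but not a reusable credential.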
So I'm going to start up my whole environment here, I should have done this while I was talking, because it deploys multiple backend systems. The cool thing about Secretless is it's multi-protocol: it supports HTTP, HTTPS, SSH, and then a growing list of backend databases. We support Postgres, MySQL, and SQL Server now. I'm told Oracle is on its way, and we get a lot of questions around that. Oracle and SQL Server are, you know, the most deployed databases. So what I'm going to do is exec into this. I've set up an environment here where this window is going to be my application, so I'm going to exec into my Secretless app. And we'll do a few things here. In here I've got some predefined connection strings, because I can't remember all the syntax for all these things. So I've got connection strings for HTTP, SQL Server, MySQL, Postgres, and SSH connections. And what I'm going to do is walk through some of these. This window over here on the left is basically my pod; this is my application. What I'm going to do here is watch the Secretless broker log. This is just the log for that container. We can see that it started up listeners on different ports. The way the broker knows what to connect to is it has service connectors listening on different ports. So we've got listeners, we've got service connectors configured such that when we do this connection, and we're going to watch the Conjur audit log over here, we'll see the broker authenticate and see it retrieve secrets. And then what I'm going to do up here, and this is one of the only ones that really echoes its activity, is watch the log of my NGINX server. So the first one we'll do is this HTTP connection. I'm just going to say curl, and I'm going to paste this in, because environment variables don't always work; it doesn't work for SQL Server. 
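The listener-to-connector mapping I'm describing looks conceptually like this. Only 8081 for HTTP and 2222 for SSH are the actual ports from my demo; the database ports shown are just the standard defaults, assumed here for illustration:

```python
# Each local port maps to a service connector; the client always talks
# to localhost, and the broker picks the backend and the credentials.
LISTENERS = {
    8081: "http (nginx, basic auth)",
    2222: "ssh (EC2 instance)",
    5432: "postgres",
    3306: "mysql",
    1433: "sql-server",
}

def connector_for(port):
    # What the broker conceptually does when a client connects.
    return LISTENERS.get(port, "no connector listening")

print(connector_for(2222))  # ssh (EC2 instance)
```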
So I'm going to connect to NGINX on 8081. Now, what's really listening there is my broker. This is basically going through an HTTP proxy on localhost, and that proxy connection is going to the port where the broker is listening. This happens very quickly, so I'm going to talk through it first and then do it. I'm going to hit return. We'll see the broker wake up and authenticate. We'll see it hit Conjur to retrieve the secrets for the HTTP connection. This is just using basic auth. Back over here, we'll see a 200 message come up in the NGINX log, and then we'll see the client echo. It's just doing a basic index GET on the top-level entry point in NGINX. So the flow goes like this, and it happens quickly. So we'll go there. It happened. Oh, wait, for some reason I'm not seeing NGINX over here. So we saw it successfully authenticate. We saw it return the value over here. For some reason, I'm not tracing the NGINX log here. We saw it authenticate over here, and we saw it retrieve the secrets it needed to do its work. So this is the workflow we're looking at: authenticate, dynamically retrieve secrets, and then use those secrets to connect to a backend system. Now we have other things we can connect to, so let's look at SSH. What I've got here are the credentials, the SSH keys, to one of my EC2 instances in AWS. You can see my connection string is just going to say foo@localhost. That part is garbage; it's just there so that the SSH client works. So I'm directing it to port 2222, where the broker is listening. That is the service connector for SSH. So when I hit return here, it asks, hey, you haven't connected to this host before, are you sure you want to connect? We saw it hit over here. Now I'm in AWS. 
So I have connected to AWS without having access to that SSH key. The broker had access to it because it retrieved that SSH key from Conjur and used it to connect to my backend system. Now I can do stuff up here. I can curl to check the status of something I leave running for doing demos up in Amazon. We're in AWS, and I can check the status of my Conjur instance running there. So that's SSH. We can do similar things for MySQL. So if I look at my MySQL connection here, I've got a test app running over there. I can connect with my native MySQL client to localhost, but now it's connected to the MySQL database, and I can say show databases. The databases don't do a really good job of showing you the work; their logs aren't very interesting from a connection-monitoring standpoint, so you have to jump through hoops to make them do that. For the last trick, we'll just show SQL Server, because a lot of people are really interested in SQL Server. This will just do a real quick SQL addition. sqlcmd is the client for that, so I paste that in, and when I run it, I've got my SQL Server answer here. So if I do my kgp... actually, kgp isn't defined here. If we look at all the things that are running in here now, we can see there are quite a few more pods running in my namespace. I've got my Postgres database, my pet store app, my NGINX server, the MySQL server, the SQL Server, and then of course the SSH is going through the SSH protocol to AWS. The point is, though, in none of these cases did the application in this namespace get access to those secrets. It's able to connect to all these backend systems without using them. And if you look at the way this works, it's very similar. 
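The SSH case makes the separation especially clear. A toy model of it (purely illustrative, with a fake key and a hypothetical hostname, not the real connector code):

```python
# The app's target ("foo@localhost") is a throwaway that just satisfies
# the ssh client; the broker substitutes the real host and the real key.
def fetch_key_from_conjur():
    return "FAKE-PRIVATE-KEY"  # stand-in; never handed to the app

def broker_ssh(app_target, real_host):
    key = fetch_key_from_conjur()  # lives only on the broker side
    assert key  # the broker authenticates with it, then discards it
    return {
        "requested": app_target,
        "connected_to": real_host,
        "app_saw_key": False,  # the whole point of the pattern
    }

session = broker_ssh("foo@localhost", "ec2.example.internal")
print(session["connected_to"], session["app_saw_key"])
```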
In fact, Secretless could very easily be positioned as a broker, an access control broker, for the control plane, if you take the service mesh view of things. Hopefully everybody's familiar with the terms control plane and data plane, but the control plane is basically where all the complex stuff happens. We want applications to stay in the data plane; in other words, we want them working at the business logic level. We don't want them directly involved with the mess of running the services. And secrets management kind of has that aspect to it. We want to keep applications as blissfully unaware as possible of the mechanics of authentication, of retrieving secrets, of the effects of secrets rotation. We want to keep them away from the secrets entirely, and Secretless gives us the architecture for doing that. So it is that proxy for the control plane that applications can avail themselves of. It also gives us a point where we could add telemetry. We could start monitoring how applications are consuming secrets, and that then starts informing a lot of the workflows security can build in terms of reacting to anomalous situations and other forward-looking things. So this is very much a work in progress, but Secretless is a big part of the open-source initiative that CyberArk is sponsoring around Conjur. It's all documented; you can see how it works, and you can see the currently supported service connectors, most of which I exercised here: our HTTPS connector, our database connectors, our SSH connector, et cetera. It also has an SDK, which is very cool if, for some reason, you have a backend system we don't support; we get questions about things like MongoDB. You can build your own, and that's the beauty of open source: we've given you all the tools to build your own. You have to assume that a breach is going to happen. 
And so risk is often defined as probability times impact, and you can focus on those two separately. How do you reduce the probability that something's going to happen? You limit access; that's good segregation of duties. But the impact is also part of it, because the fewer secrets an application has access to, the smaller the blast radius, as we call it. So segregation of duties is something you hear about a lot: being able to very precisely define the credentials that something has access to. Now, in terms of identifying an offender, that's where your audit logs come in. But in many ways, audit logs are backward-looking. In other words, they record what happened, but they don't give you the proactive ability to do something about it. That's what I think is exciting about Secretless: it gives you a monitoring point where you could, if you wanted to, and of course there'd be some overhead in this, monitor the actual real-time usage of secrets and see what was happening in a much more immediate fashion. But your audit logs, we keep audit logs for non-repudiation. You want to be able to prove that something did or did not happen, and if something happened, you want to say what identity was responsible for it. Now, that identity may move around, so an IP address may or may not be useful in that context. But fundamentally it comes down to what the identity in question was when we're doing that kind of forensic analysis. The Conjur solution is in the Google marketplace, but you can always go to conjur.org. I was showing a lot of the content that's at conjur.org. There is community-based support for the Conjur open-source solution as well as for Summon and Secretless. You can go to discuss.cyberarkcommons.org and see some of the back and forth there. We do regular workshops. 
We do regular DevOps workshops, walking through how to secure Jenkins workflows and pipelines, as well as Kubernetes examples. The Secretless Broker we saw a good bit of today, as well as Summon. So there are lots of places to go and lots of content at conjur.org for you to consume.