Thank you for coming to the demo. I'm Anousha Iyer, and I'm with Corsha, a cybersecurity company based in the Washington, D.C. area. We're using Hyperledger Fabric to develop a cybersecurity application. We'll talk through a little bit about the problem space and how we've architected Corsha, obviously do a demo, and hopefully save some time for a question or two at the end. So here's the problem space. Essentially, what we're trying to solve for is stopping unauthorized access into systems and services through the use of stolen API credentials. I'd guess that most of you leverage APIs in the applications, frameworks, and platforms you're building. Show of hands. Pretty much everyone. Today, API authentication is done primarily in a handful of ways. One is a static API key: if you've used cloud platforms, you've probably generated an AWS API key, for example. Another is a token-based format, something like OAuth 2.0 tokens or JSON Web Tokens. The challenge here is that these still end up rooted in a static secret; there's a client secret you have to present to request a token. And then PKI certificates for doing mutual TLS are another way to do authentication. The challenge there, and with a lot of these approaches, is that the credentials are often long-lived, sometimes set to rarely if ever expire, and can be reused. They're prone to the bearer model of authentication: if I have the credential, I can use it from pretty much anywhere. And adversaries are really wising up to this and shifting their attacks from human usernames and passwords to API secrets, because those are often larger pipes with broader access to information. So increasingly we're seeing organizations put API security strategies in place, and at Corsha we're focusing on authentication, machine identity, and so forth.
And so as we were putting the platform together, it was very natural for us to leverage something like Hyperledger Fabric as a core part of our platform to manage machine identity, and that's essentially what we do. We've built this API security platform, and we've been working with Fabric since about 2018. The platform builds out dynamic machine identities for API clients. I'm sure you're all familiar with multi-factor authentication on the human side: a Google Authenticator, an RSA token, some form of TOTP that layers on top of basic username and password authentication. What we've developed at Corsha is a way to do fully automated MFA using these dynamic machine identities on the ledger. That means every API request can carry a one-time-use credential. To walk you through the flow a little: we're very agnostic to the flavor or form factor of the API client. It could be anything from a Kubernetes pod to a Docker container to a virtual machine, even an industrial IoT device. We're actually doing some work right now with the U.S. Air Force Sustainment Center, where we're protecting API access from industrial IoT equipment sitting on a shop floor, anywhere there isn't necessarily a human identity to back, validate, and verify the access. What we do is hook in at the time of deployment. Just like a Google Authenticator, we have a Corsha Authenticator that deploys with the API client. It's uniquely seeded at deployment time and then starts establishing a dynamic identity against the ledger. Essentially, it sends a cryptographic heartbeat off to the ledger on a configurable interval, think on the order of hours. Each beat builds off the previous one and forms a chained identity on the ledger.
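The chained heartbeat described above can be sketched in a few lines. This is a toy illustration under assumed primitives: the HMAC-of-previous-beat construction, the deployment-time seed, and the function names are all my assumptions, not Corsha's actual scheme.

```python
import hashlib
import hmac


def next_beat(seed: bytes, prev_beat: bytes) -> bytes:
    # Hypothetical construction: each beat is an HMAC of the previous
    # beat under the machine's unique deployment-time seed, so the
    # sequence forms a hash chain only the seeded authenticator can extend.
    return hmac.new(seed, prev_beat, hashlib.sha256).digest()


def build_chain(seed: bytes, length: int) -> list:
    # Simulate a machine emitting `length` chained heartbeats.
    chain = [hashlib.sha256(seed).digest()]  # genesis beat derived from the seed
    for _ in range(length - 1):
        chain.append(next_beat(seed, chain[-1]))
    return chain
```

Because each beat depends on the one before it, a verifier that has recorded the chain so far can check that a new beat really extends this machine's identity and not some other machine's.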
Now, in front of the API services where you want to enforce MFA, what you do is place our proxy. That proxy looks for these MFA credentials on API requests; they just come in as custom headers. The client, based on its chained, moving identity, can produce these one-time-use credentials. A request comes into the proxy, the proxy picks off the MFA credential, which is built from that identity, and checks it against the ledger. If there's a match with that machine's identity, you let the call through; otherwise you block it. So it's very similar to how an out-of-band TOTP check would work, and here we're using Hyperledger Fabric for that out-of-band element. It's the ledger that allows us to do the automation, maintain these identities going forward, and even do things like provide the ability to monitor, halt, and resume API access. We'll show this during the demo, in fact. Let's say, for example, these are workloads pushed up to AWS. They're doing some analytic processing, maybe on some data, and sending results back home. All of a sudden there's a security event in AWS that comes in on machine M2. With this kind of mechanism in place, you have a fine-grained ability to control the machines: through the management console, you can remotely halt API access for M2, investigate, and see if there's actually an issue. If there is, take the machine down; otherwise resume it. It really allows you to minimize and mitigate the impact of any kind of API authentication incident. The platform itself is based on Kubernetes. We deploy Fabric to Kubernetes and really try to stay platform agnostic so that we can deploy to all of the major platform providers: AWS and Google Cloud, on both the public and the government clouds, as well as on ESXi infrastructure and even air-gapped, on-prem environments.
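The proxy's decision logic amounts to a header lookup against ledger state. Here is a minimal sketch; the header names, the credential derivation, and the plain dict standing in for the Fabric query are all assumptions for illustration.

```python
import hashlib


def derive_credential(beat: bytes) -> str:
    # Hypothetical derivation: the one-time credential is a hash over the
    # machine's latest chained beat (the real scheme isn't public).
    return hashlib.sha256(b"cred:" + beat).hexdigest()


def proxy_check(headers: dict, ledger: dict) -> int:
    # Return 200 to let the call through, 403 to block it. `ledger` maps
    # machine id -> latest heartbeat, standing in for the out-of-band
    # lookup against Hyperledger Fabric. Header names are made up.
    machine = headers.get("X-Machine-Id", "")
    presented = headers.get("X-MFA-Credential", "")
    beat = ledger.get(machine)
    if beat is None:
        return 403  # no chained identity on the ledger for this machine
    if presented != derive_credential(beat):
        return 403  # credential doesn't match the machine's moving identity
    return 200
```

The point of the out-of-band check is that the service behind the proxy never changes: the proxy alone consults the ledger and forwards or blocks.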
As you can imagine, some of our customers on the U.S. government side are sensitive about where they put things, so we even run completely air-gapped deployments. And it's very much a drop-in solution. That's been an important part of our usage of Fabric: making it seamless, not requiring code change on either the client or the service side, and really abstracting away a lot of that interaction with the ledger. With that, we'll go into the demo and then hop over and dive a little deeper into the architecture at the end. Let me see if I can... Hello. Okay, so I'm going to show it in action. What we have here today is a demo of pinning API access to trusted machines. In the cloud, we've set up a mock API service that represents a service we'd be calling with Corsha protection. Also in the cloud, I've set up a mock API consumer, here in the bottom left. That's a machine deployed with a tightly coupled Corsha Authenticator. Since it has a Corsha Authenticator, it's able to call this API service, which has two endpoints. The first endpoint, "without Corsha," just requires a simple API key. We've added a second endpoint, "with Corsha," that requires that API key but also your dynamic, one-time-use Corsha credential. Then, as an outsider, I'm going to show an attack where I steal these credentials. When I hit the "without Corsha" endpoint, I'm able to make that privileged call. But when I hit the "with Corsha" endpoint, even stealing that second factor, the Corsha credential, is insufficient to make the call. Switching over to the browser: we have a little web front-end and back-end. The front-end is what's making the request; it's looking up results, let's say, from a sensitive database of personal fingerprints.
On the back-end, we've just configured it to log the requests it receives. For this first attempt, I'm going to turn off Corsha protection and not use a valid Corsha credential. Let's see what happens. We send our request over and get back a little JSON response. In the back-end, we see our request; for the purposes of the demo, we're logging the API key and the request object that was sent. Well, now I'm an adversary, and I see, hey, these guys log their API key in the clear. I'm going to steal it. Here you go, I'll copy it. I come over here to a tool called Postman, which, if you're familiar, lets you mock up and test HTTP REST APIs. I'm hitting the same back-end endpoint, and now I can fill in this API key, make a request of my own, and send it off. Sure enough, using the same static API key lets the request through. When I go back, I can see two different requests in the back-end from two different IP addresses. So here's my adversary attacking with its own request. Now let's flip back to the client and turn on Corsha protection, using a valid Corsha credential. My trusted user is now using Corsha protection. We're hitting a separate endpoint here, getting back the same data, and we can see it going through in the back-end. Once again we've logged our API key, but also, for the purposes of the demonstration, our one-time-use dynamic Corsha credential. This is a long base64-encoded string. I'll copy it. Now my life as an adversary is more difficult, because I have to go into the headers and try to paste that in cleanly, and once again steal the API key, which is still static, and send it off. And all the attacker really sees is 403 Forbidden; we're not going to tell them why. Sure enough, we don't even see the traffic reach the API service on the back-end.
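The contrast the demo shows, a static key that replays versus a one-time credential that doesn't, can be modeled with a toy service. Everything here (class, method, and credential names) is invented for illustration and mirrors the demo's two endpoints, not Corsha's actual API.

```python
import hashlib
import itertools


class MockApiService:
    # Toy back-end with the demo's two endpoints; all names hypothetical.
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.spent = set()                  # consumed one-time credentials
        self._counter = itertools.count()
        self.current_cred = self._fresh()

    def _fresh(self) -> str:
        return hashlib.sha256(f"cred-{next(self._counter)}".encode()).hexdigest()

    def without_corsha(self, key: str) -> int:
        # Static key only: a stolen key replays just fine.
        return 200 if key == self.api_key else 403

    def with_corsha(self, key: str, cred: str) -> int:
        # Requires the static key plus the current one-time credential.
        if key != self.api_key or cred in self.spent or cred != self.current_cred:
            return 403
        self.spent.add(cred)                # the credential is now spent...
        self.current_cred = self._fresh()   # ...and the next call needs a new one
        return 200
```

Replaying the stolen pair against `with_corsha` fails because the credential was consumed the moment the legitimate client used it.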
I want to show a few extra requests through the mock client, just to show that the credentials really are different every single time. Sure thing. So every time I send one, we're logging it on the back-end, and if you look at the content, it's changing every single time. There you go: once you get past the headers, you see the part that changes. Now let's say we find a reason that I no longer trust this client. As Anousha mentioned, Corsha gives us the ability to turn this API traffic on and off and monitor it. So I, as an administrator, log into the Corsha administrative console and look at my set of machines. This little proxy that I've been egressing through, I don't trust it anymore, so I'm going to halt it. It asks me to confirm, and now it has a status of halted. This updates the status of the machine's identity in the Fabric blockchain. So now, even though it was a trusted client, everything it sends gets blocked. As the administrator, let's say I go back and find that what I thought was malicious behavior was just a little hiccup. I've root-caused it, I'm no longer worried, so I resume access. Once I resume access, you can see, it's a little low in the window, that it enters the status of "needs rotation." We rotate the underlying PKI every time we halt or resume a machine, or on demand if you want to periodically refresh the PKI that represents the identity of the machine. So now that it's been rotated and resumed, we can get through with a brand-new, fresh credential stream, and everybody's happy.
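The halt, resume, and rotate lifecycle just described behaves like a small state machine over the machine's ledger record. A minimal sketch, assuming hypothetical field and method names; the real record on Fabric surely carries more than this:

```python
from enum import Enum


class Status(Enum):
    ACTIVE = "active"
    HALTED = "halted"
    NEEDS_ROTATION = "needs-rotation"


class MachineRecord:
    # Sketch of the per-machine record the console updates on the ledger;
    # names and fields are assumptions for illustration.
    def __init__(self, machine_id: str):
        self.machine_id = machine_id
        self.status = Status.ACTIVE
        self.pki_generation = 1

    def halt(self) -> None:
        self.status = Status.HALTED          # every request now gets blocked

    def resume(self) -> None:
        self.status = Status.NEEDS_ROTATION  # blocked until the PKI is rotated

    def rotate(self) -> None:
        self.pki_generation += 1             # fresh PKI for the identity
        self.status = Status.ACTIVE

    def allowed(self) -> bool:
        return self.status is Status.ACTIVE
```

The design choice worth noting is that resume does not go straight back to active: it forces a rotation first, so a machine that was halted over a suspected compromise always comes back with fresh PKI.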
So the idea here is that we provide a lot of the abstraction and security hygiene around identity management for purely machine-to-machine communication, so you don't have to worry about PKI certificates, or even, within Fabric itself, about the certificates between nodes and components expiring, anywhere you'd otherwise be relying on good manual hygiene. I think we're out of time, but I'm happy to talk through this later. We've figured out a way to deploy Fabric and have put a fair amount into orchestration that we're happy to share; we actually have a talk tomorrow on integrating with cert-manager. Standing up one of these environments takes us about 10 to 15 minutes, and we've done it on all of the major cloud platforms, public and gov, air-gapped, and fully on-prem. And since we're dealing with a real-time application like authentication, we're able to support loads of thousands of transactions per second while keeping latency down to a handful of milliseconds per request. Thank you for your time.