Hello, everyone, and thank you for joining us today. Welcome to today's CNCF webinar, Building Zero Trust-Based Authentication in Healthcare with SPIRE. I'm Christian Jans, Cloud Strategist at Level 25 and CNCF Ambassador. I'll be moderating today's webinar, and we'd like to welcome our presenters: Bobby Samuels, Vice President of AI Engineering at Anthem, Inc.; Frederick Kautz, Head of Edge Infrastructure at doc.ai; Emiliano Berenbaum, Chief Technologist at HPE; and Madhu, also at HPE. A few housekeeping items before we get started. As an attendee you're not able to talk during the webinar, but there is a Q&A button at the bottom of your screen — please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything in the chat or questions that would be in violation of that Code of Conduct; basically, please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today on the CNCF webinar page at cncf.io/webinars. With that, I'll hand it over to today's presenters to kick off today's presentation.

Great, thank you so much, Christian. Hi, folks, my name is Ameri Khan. I just wanted to give you a quick overview of the presentation before I pass it over to Bobby. First, we'll have Bobby from Anthem talking about the new operating and threat models redefining healthcare. Then we'll have Frederick providing a deeper dive on why organizations in the healthcare industry in particular are adopting zero trust models as they adopt hybrid and cloud-native architectures.
Then we'll have Emiliano from HPE providing a deep dive into the CNCF SPIFFE and SPIRE projects, and at the end Madhu from HPE will do a quick demo for everyone as well. With that, Bobby, I'll pass it over to you and move the slides for you.

Thank you. Hello, everybody. Thanks for making time today, and thanks for attending. We're humbled to be part of this partnership. I'm going to walk you through the context — the business case for why we're doing this and why we're working in this partnership. Back in 2015 we had a cyber attack, a data breach; you can read about it, you can Google it and check out information about it. But fundamentally, we had a security model which put a perimeter, a border, around our data center and our applications, and as a result of that we continued to put more of the traditional security around it. We bolstered it, and it's become fairly robust, among the best in class. We can go to the next slide, thank you. So we've got security as a perimeter model, and that's the model we've followed — a very traditional model that we've used here within Anthem. Coupled with that, we've got rising healthcare costs, and we're in an interesting time right now with our healthcare costs and what's going on. Many of you know the hospital systems right now are treating patients that have COVID or other issues; there's fragility in the cash flow, there are issues with getting resources to the specific areas that need them — there's fragility throughout the ecosystem. Some staggering numbers: US healthcare spend is about 18% of our gross domestic product, the highest of any nation, while Europe hovers around 9% or so. In the US, life expectancy is lower than in the European Union, we have higher infant mortality rates, and we have higher rates of lab errors and related issues.
And for the first time in a long time, US citizens are showing declining life expectancy. So what we know is that all of this points to healthcare being unsustainable. One out of every three Americans is now drowning in some kind of unpaid medical debt; we're bankrupting our future. There's lots of ownership, responsibility, and blame to be passed around, but instead of looking at it that way, we want to be part of the solution. As part of the solution, we're working with technologies in the zero trust space — and you'll hear a bit more about that — because that's the framework and the foundation for what we're doing. The idea is that, since we're in the medical policy, reimbursement, and financial intermediary space, we sit at one of the junction points, if you will, where patients and providers all come together, so those things flow through us. What we're looking to do is create opportunities for cloud native engineers and cloud native developers, in the security space as well as in the application space. I'm going to talk just a little bit about that, but you'll hear about SPIFFE and SPIRE, and about NSM and OPA here in a little bit, and how these projects are all coming together to lay a strong foundation. Let's go to the next slide. This is an overview of the project initiative — there are several initiatives built on this zero trust infrastructure, but this is the project initiative, the business context. What we believe is that we want to be in the place where the patient and the healthcare professional actually meet, and we want to be part of a team, part of a group. It's not a one-person show: many, many parties are coming together to transform how the patient and the clinician actually interact, what's happening in that space, and to change the medium in which they talk and work together.
So the patient walks away with the same medium that the clinician walks away with — there's no loss in communication, no loss of what the diagnosis was, what the next steps were, and the follow-up. There are many opportunities in that space, and as you can see, if you search the app stores you'll find lots of tools that people are developing. What we want to do is make our data available — obviously in a secure manner — and privilege developers to come in and engineer and work in this space. We're in the process of validating internal use cases, so we're building this actively, and once that's done we'll make it available to third-party engineers and third-party groups to come in and create applications based on what the needs are — and then drive and create these, what we refer to as doc bots for now, to be spread out across this healthcare ecosystem. To give you an idea of how the pieces look, let's go to the next slide. This is the general approach of what we're thinking, and I'm going to start from right to left and move us through it. Once again, laying the context: this is all built on top of the SPIFFE/SPIRE foundation and a zero trust network. We have users coming in — clinicians, or consumers and users like me — into one of many front-end interfaces. That can be everything from views showing your patient history and your patient record over time, to telehealth — several of you have used telehealth; I know in the last couple of weeks we've used telehealth for my daughter — and other sorts of doc bots and tools that are available for users, but also for clinicians that are trying to make decisions. And then that comes into our health OS platform.
That's the purple box: the idea is that doc bots are running to pull information and surface raw machine learning insights; there are AI engines running in the background that we're training off of data today, to help with recommendations, next steps, and the most effective treatment outcomes. There's also a whole set of developer tools: everything from sensor toolkits and sensor data that we're able to pull in, to pharmaceutical information — some people, as you know, have as many as 30 different prescriptions that they're dealing with — as well as applications for people to come in and say, "I want to partner with you, and here's what we think we can do." So we're pulling in developers — now to the left side of the diagram — developer and AI teams, and just individuals with good ideas about how we can impact the system. Then, across the bottom, many different data sources are coming together for this. As you'll hear later, doc.ai is partnering with us through this, and you'll hear from Emiliano about SPIFFE and SPIRE and how that's coming together. We don't believe we can pull this together on our own, nor do we believe that one single person has the answer. Our fundamental belief is that this will take a partnership across payers, across providers, and across consumers to make this all work together. With that — and there's more to come on this — I'm going to transition over to Frederick, and he's going to take you through the technologies and how these pieces all work together for a zero trust infrastructure.

Hello — thanks for the great set of slides and information, Bobby. So, we're starting to see an emergence of zero trust, but let's start with a little bit of history.
If you take a look at how systems are defended, historically we would defend our systems using what I like to call 11th-century techniques: we defend the perimeter, we build moats, we make it very difficult to come in from the outside to the inside except in very controlled ways. You end up with a very hard outside but a very squishy inside. We can see this in practice: you have a firewall, and then you look at common attacks that occur. Perhaps you have some form of application gateway — say, a compromised or old version of something like Apache Struts or similar — that ends up getting compromised because the security updates weren't kept up with. Once attackers gain access to that system sitting on the edge, which has some connectivity out and some connectivity internal, those systems are then used as staging points to conduct attacks on the inside, because defense on the inside is more difficult to pull off in traditional systems. And internal defenses tend to be based on things that are not cryptographic in nature — things like "let's go find what server I need to connect to using DNS." Well, you can poison DNS; you can spoof and reroute IP addresses, so now an attacker can redirect your traffic to something else, depending on the type of capabilities your system has. So the direction we're trying to push is to move away from these perimeter defenses and towards something that solves the problem with cryptographic primitives at the bottom. If you can move to the next slide, please. The question that I pose to everyone is: what if the attack starts here? Start with that particular mindset. Next slide, please.
This is one example of many of perimeter defense. We have our "trusted network" — and note the word trust in this one; that's the key word: what is the thing that you're trusting in order to establish the rest of your trust? So we have a trusted network, and we want to connect to another trusted network. Now we have two perimeters, so how do we open the drawbridge to allow those two things to communicate with each other? That tends to be through things like VPNs, and then we have these workloads that sit inside. Next slide, please. What we want to move towards is a zero trust environment — and I want to be very careful with my wording here. When we say zero trust environment, we don't necessarily mean that the untrusted network is open to the world; you can still have your layers, you can still have your moats and so on. What we're saying is that just because a workload is on the same network as another workload doesn't mean it has implicit connectivity to that particular system. What we want to do is establish secure connections between workload and workload, regardless of what network they're on. Move to the next slide, please. Now it becomes much more difficult for an attacker: even if they gain access to the untrusted network, there's still a lot more work involved to compromise the system. To give you an idea of the mindset when you're dealing with security: one of the things you want to do is harden your system so that you look like a hardened target, and you want to limit the reward. You don't want to be the organization with a very sensitive database that's fully approachable and accessible once someone has gained access to one of these systems.
You want to be the one that has all of the auditing and the policy. Assume that your system is going to be breached, and then ask the question: what do we do next, and how do we mitigate it in that scenario? A large part of what this particular environment is about is exactly that: an attacker has already breached your network, has already breached another workload — how do we defend against that attacker? Next slide, please. So the question is how we achieve this. This is not the only set of things you have to do, but it's where I believe you should start: you should establish a set of trust domains. Think of a trust domain as a cryptographic set of systems or workloads that are all part of the same domain, rooted at the top with some form of CA. That CA can attest your organization; an organization CA can attest a sub-organization, which could attest maybe a cluster. At the very bottom, you want that attestation to say "this is a workload," so you can say: I have a payment API, and you're a payment database, and we know each other's identities. That's done through attestation, which gives you a cryptographic primitive that you can use to prove your identity — in this scenario an X.509 certificate, which is the exact same thing used when you connect to your bank with your web browser, except it's not just you validating the bank but the bank validating you through your certificate: your API gateway validating the database, and vice versa, the database validating your cryptographic identity. Once you have identity established, then we're able to build policies — and the policy is not based on what network access controls to set up or what IP addresses to open or block.
The policy is more: what identities are allowed to communicate with what identities, how are they allowed to communicate with each other, what type of messages should they exchange, and what constraints should those messages have — for instance, should I only be allowed to ask for things related to my identity, or should I be allowed to ask for things related to someone else because we have some form of relationship? And finally, the last part, where we really extend this out, is that we want to be able to establish trust between organizations. So, my organization and your organization: if we're in two different groups, we can establish that trust at the CA level, and if I trust you to attest properly and you trust me to attest properly, and we have some business to conduct with each other, then my workloads can validate your workloads and vice versa, simply by establishing trust at the very top. And so this gives us zero trust across multiple organizations. Next slide, please. Here's one application pattern where we have an identity that covers an app, we've identified a second app, and they both communicate with each other, rooted in that identity. This is the very basic pattern that most people think of when they think of zero trust identity and zero trust connectivity, and there can be policy that controls that particular flow of communication between the two apps. Next slide, please. In this scenario they could actually be in two different organizations — the same pattern, just with the root of that trust being a separate organization. Next slide, please. This next one is something new that we're also pushing forward, and that's an infrastructure pattern.
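The identity-based policy described above can be sketched as a lookup keyed on workload identities rather than IP addresses. This is a minimal illustration only — the SPIFFE IDs and the `is_allowed` helper are invented for this example, not taken from any of the projects discussed:

```python
# Toy policy table: communication is authorized by workload identity,
# not by network location. All SPIFFE IDs here are illustrative.
ALLOWED = {
    ("spiffe://example.org/payment-api", "spiffe://example.org/payment-db"),
    # Cross-organization trust, once federated at the CA level:
    ("spiffe://partner.org/claims-app", "spiffe://example.org/payment-api"),
}

def is_allowed(client_id, server_id):
    """Return True if the client identity may talk to the server identity."""
    return (client_id, server_id) in ALLOWED
```

Note that the second entry spans two trust domains, which only works once the organizations have federated trust at the top, as described above.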
In other words, by infrastructure we mean that not only should your applications be cloud native and zero trust — your infrastructure should be cloud native and zero trust as well. There's a new effort going on with the CNCF through the Telecom User Group, Linux Foundation Networking, and a variety of other industry bodies that are starting to look at how we move our workloads so that our firewalls, our intrusion detection systems, our VPNs — our telecom — are running and established using things like Kubernetes and cloud native best practices, using what they call cloud native network functions. And what we're pushing for is for identity to also be the root of those, because they'll start to look like workloads: your firewall as a service, or your firewall inside of your system. So we can say, by policy: this pod should connect to this firewall; this firewall is allowed to connect to this intrusion detection system; this intrusion detection system is allowed to connect to your VPN — each level of the chain established and validated through identity, with policy that describes the communication. This allows us to treat our infrastructure as cloud native, as something horizontally scalable, and not as a monolith. Move to the next slide, please. What we want to do after that is drive this cross-cutting identity up and down the entire stack: your app shares an identity with the service mesh, which shares the same identity with your pod, and we also drive that identity as a primitive down to your server.
Your server hardware is coming out with a TPM, which is effectively like a crypto two-factor auth device baked into the hardware itself. So if I ship you a box and I'm the vendor of that particular system, I can give you a cryptographic identity that says not only that the software came from us but that the hardware came from us as well, and that the hardware was provisioned specifically for use in this particular set of applications or this specific set of clusters — rooting the identity in those cryptographic primitives literally in the hardware itself, which is then tied together through the pod connecting to your infrastructure, your service mesh communicating with that identity, and your applications connecting with each other with those identities. Next slide, please. In short, if we say zero trust, you need to start off with identity — some cryptographic identity that you're able to use as the foundational piece that then establishes all the rest of the chain. Next slide, please. So, the interactions we're using in this scenario: we're starting with SPIFFE and SPIRE. SPIFFE is the spec; SPIRE — and you'll hear more about this in a few moments — is a server that is the reference implementation of SPIFFE. For the policy that I was describing, there's Open Policy Agent. OPA can take the X.509 certificate as an input, and can also take a JWT as an input, and we can pull parameters out of those and then make declarative decisions, like "this API is allowed to connect with this one" and "this JWT parameter should match this," based on the policy.
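The kind of declarative decision just described — pull the caller's identity out of its certificate or JWT, then check both an allowlist and claim constraints — can be sketched like this. The policy shape and all values here are invented for illustration; OPA's actual input is a Rego policy, not this dictionary:

```python
def decide(identity, claims, policy):
    """Toy OPA-style decision: the caller's identity (taken from its X.509
    SVID or JWT 'sub') must be on the allowlist, and every constrained
    JWT claim must match the required value."""
    if identity not in policy["allowed_identities"]:
        return False
    return all(claims.get(k) == v
               for k, v in policy["claim_constraints"].items())
```

For example, a policy might allow only the payment API to call, and only with an `aud` claim naming the payment database.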
Then we're driving the infrastructure communication part through Network Service Mesh, which provides cluster connectivity and network policy. NSM can say: you must traverse through this firewall, through this intrusion detection system, to this VPN/firewall combo in order to connect to your corporate infrastructure. So we have policy on what to connect through, but it's all established with identity that is provided by SPIFFE as a first-class citizen. Those are the types of things that we're looking at. Next slide, please. And with that, we'll hand it off to Emiliano.

Thank you very much. Hi, can everybody hear me? Okay, great. Hi, I'm Emiliano Berenbaum; I work at HPE, and I'm going to go over and explain what SPIFFE and SPIRE are. First of all, I want to thank Bobby and Frederick — it's great working with the Anthem team on this, because it's really taking the vision of what we did with SPIFFE and SPIRE and applying it at a greater level. Just to explain what SPIFFE and SPIRE are for people who aren't familiar: SPIFFE is the actual spec of this identity format. It's made up of four things that I'll get into, but it is just a spec, not an implementation. SPIRE is an actual implementation of the spec that exposes the Workload API and will deliver these cryptographic identities to a workload — a piece of code — that's asking for it. Now, the SPIFFE spec has been adopted by a couple of different projects: Istio, Consul, and Vault are SPIFFE-aware, and NGINX and Envoy can also receive SPIFFE identities.
And some other companies as well — I think in general we've built this really rich community, where we've had a lot of contributions from the open source community into the project; big contributors are the Uber, Square, Bloomberg, and TikTok teams. To go back over a little bit of history: we joined the CNCF as, I think, the first sandbox project, in 2018. Right now we're going through the motions of pushing SPIFFE and SPIRE into the incubation stage. We've gone through the security review phase and should be going into community review next week, so that should be happening pretty soon, and then we'll go to the next level of CNCF project. And right now we're looking at releasing our 1.0 for SPIRE in mid-June; we're doing a couple of refactors of some APIs and wanted to get those in before we make our 1.0 version. Next slide. As I was saying, SPIFFE is the standard, and it's really made up of four things. First, it's just a URI string — a URI that identifies that workload, that piece of code. Think of it as the workload's driver's license number: it's the identifier for that workload. It has the trust domain name in there, and the spiffe:// prefix, which has been registered with IANA — but it is just a URI string. After that, there is a document that carries that identity, and we have two document types that we support right now: a JWT, and an X.509 certificate. A lot of the work that happened early on in the community was to figure out what exactly went into that X.509 certificate.
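The structure just described — a plain URI with a `spiffe://` scheme, a trust domain, and a workload path — can be illustrated with a few lines of parsing. This is a sketch for intuition only (the real SPIFFE spec imposes additional rules on allowed characters and lengths that this skips):

```python
from urllib.parse import urlparse

def parse_spiffe_id(uri):
    """Split a SPIFFE ID into its trust domain and workload path, e.g.
    spiffe://example.org/billing/payments ->
    ('example.org', '/billing/payments')."""
    parts = urlparse(uri)
    if parts.scheme != "spiffe" or not parts.netloc:
        raise ValueError("not a SPIFFE ID: " + uri)
    return parts.netloc, parts.path
```

The point is that the ID itself is just a string; the cryptographic weight comes from the document (JWT or X.509 certificate) that carries it.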
There are a lot of simple mistakes, or things that people forget to do, that compromise certificates, and a lot of great work was done in the beginning to figure out what should go in there, and also to make sure that the certificates could be consumed by most TLS stacks and other stacks. The Workload API is how we deliver this document to a workload. It is a node-local API that is exposed, and in the spec we describe the protobuf definition of what that API should look like. This is what delivers the identity to the workload. And the last part is the Federation API. When Frederick was talking about different organizations trusting each other — we accomplish this through the Federation API. What the Federation API allows you to do is have two different organizations create a trust relationship between themselves, and then determine which workloads within each trust domain should talk to each other. The only thing we're doing is delivering all the certificates and bundles to those two different workloads, and then we step out of the way. And to be clear — I should have said this in the beginning — SPIFFE and SPIRE are really about authentication. We don't authorize; we just authenticate and give a piece of code its identity, which it can then use to do other things, and I'll touch on that a little later. Next slide, please. So the other project is SPIRE. What SPIRE really does is take the SPIFFE spec — the API specs — and actually implement it. It runs as an agent that shares a kernel with that piece of code, and a server that holds all the policies and upstream CAs for signing.
What we're really exposing here is that "who am I?" API. A workload can talk to this API either directly, using a gRPC push library, or you could have a proxy in front of it — it could talk to Envoy or NGINX, and there's also a nice little proxy from the Square team called Ghostunnel that speaks the Workload API directly. What ends up happening is that either the proxy or the workload contacts the API, and that sets off attestation. Attestation is the process by which we introspect that piece of code and get its characteristics, and then the agent determines whether this is the piece of code it was expecting. If it is, it delivers the identity to that piece of code. I'm going to walk everyone through that process to explain how attestation works, and after we're done with that, I'll show everyone a demo. Next slide. Like I was saying, there are two pieces to this: a SPIRE server and a SPIRE agent. The SPIRE server is what holds the policy. Here we're saying that we will give the identity billing/payments to a workload that meets this criteria. Can we go to the next slide, please? What we're saying is that if this piece of code is running in an EC2 instance that has these security groups, and it's in this pod namespace, and it's running under this service account, and it also has this binary image, then it is the payments workload. What I want to emphasize is that a lot of this logic — the things doing this attestation at the node level and the workload level — are plugins. We have a catalog of different plugins for different environments, so we can take the solution and have it run on EC2 or the different cloud providers; we can also run on Azure or Google Cloud.
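The registration policy just described — issue an identity only to a workload whose attested attributes match every condition in the entry — boils down to a subset check. This is a toy sketch; the selector strings are illustrative and their exact format differs in real SPIRE:

```python
# Illustrative selectors in the style just described; real SPIRE
# selector strings differ in exact format.
ENTRY_SELECTORS = {
    "aws:sg:sg-0a1b2c",        # EC2 security group
    "k8s:ns:payments",         # pod namespace
    "k8s:sa:payments-svc",     # Kubernetes service account
    "unix:sha256:1f2e3d",      # hash of the binary image
}

def identity_granted(discovered_selectors):
    """The payments identity is issued only if attestation discovered
    every selector listed in the registration entry on the workload."""
    return ENTRY_SELECTORS.issubset(discovered_selectors)
```

A workload that matches all four conditions gets the identity; one that matches only some does not.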
We can also run on-prem and on different orchestrators, so the architecture of using these plugins has allowed us to lift and carry the solution to different places. Next slide, please. Let's walk through what this looks like. You're running a container on an instance, and you have that SPIRE agent exposing that node-local API — right now, the way we do this, the agent has to share a kernel with that workload for us to do the introspection and attestation on it. And then there's a SPIRE server that's running somewhere else; you could run these in HA mode or in different ways. This is the baseline. Can we go to the next slide? Okay, so the first thing the SPIRE agent does when it wakes up is talk to the underlying infrastructure. Again, this is pluggable, so for this type of attestation the Amazon plugin would go and look at the Amazon infrastructure and deliver the metadata back to the SPIRE server, basically advertising "hey, this is who I am." Next slide, please. Then the SPIRE server goes back and verifies; it asks, "was I expecting this agent to be running here?" What we're doing here is actually attestation of the node itself, and we give an identity to the node as well — so there are two things we identify: we identify the agent running on that node, and then the agent on that node will identify the workload. Can we go to the next slide, please? Then, after the SPIRE server verifies it, it talks to the SPIRE agent. Next slide. Now we're ready, and the container can talk to the Workload API. Just so everybody understands, there's going to be some signing going back and forth.
A lot of this happens in a different order for optimizations, but we're just going to walk through it logically. The container — the workload — talks to the Workload API and tries to get its identity; it queries to see who it is. Can we go to the next slide? Now the SPIRE agent goes out and introspects the container. It will also look at the node-local kubelet, and that is also pluggable — like I was saying, not only is node attestation pluggable, workload attestation is pluggable and extendable too, so if we don't have anything in our catalog, it can always be added. Go to the next slide. This is a part that happens beforehand, but one of the big things we do is generate keys on that EC2 instance: private keys never leave the box. What we do then is create a CSR and send it to the SPIRE server for signing. Now, the SPIRE server could have its own self-signed cert, or you could tie it to an upstream CA, so you can chain these things together. The SPIRE server here, logically, would be the trust domain — the root of trust — for these workloads. But again, you can chain these together, and you can federate the SPIRE servers. So the SPIRE agent sends the CSR to the SPIRE server, which signs it — and can we go to the next slide, please? Then the SPIRE server returns the cert chain back to the SPIRE agent. Next slide. And the Workload API returns the keys back to the container. So now the container — the workload — has its identity, and it can use it to create mTLS connections across its fabric. If you imagine other EC2 instances with SPIRE agents delivering these identities, the workloads can create an mTLS tunnel and then talk to each other directly, process to process.
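The issuance flow just walked through can be sketched as a crypto-free toy simulation: the server checks that an agent was expected on the node, and the agent keeps the private key local, sending only a CSR to be signed. Every name and value below is an illustrative placeholder, not real SPIRE data:

```python
# Toy, crypto-free walk-through of the issuance flow: node attestation,
# then local key generation plus CSR signing. All values are stand-in strings.
EXPECTED_AGENTS = {"i-0123abcd": "spiffe://example.org/agent/i-0123abcd"}

def attest_node(instance_id):
    """Server-side check: was a SPIRE agent expected on this node?"""
    agent_id = EXPECTED_AGENTS.get(instance_id)
    if agent_id is None:
        raise PermissionError("unexpected node: " + instance_id)
    return agent_id

def issue_svid(workload_id):
    """The agent generates a key pair locally and sends only a CSR;
    the server signs it and returns the chain. The private key
    never leaves the node."""
    private_key = "key:" + workload_id      # stays on the node
    csr = "csr:" + workload_id              # only this goes to the server
    cert_chain = "signed(" + csr + ")"      # comes back from the server
    return private_key, cert_chain
```

The two functions mirror the two layers Emiliano describes: the server identifies the agent on the node, and the agent identifies the workload.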
Another thing is that this API is a push API, and it's tunable, so you can start putting tight rotations on these certificates. You can rotate everything up to your root, because everything is delivered through this API, and we have the ability to rotate pretty quickly and push these things out at a higher cadence than would otherwise be normal. Another thing I want to emphasize is that we've really inverted the relationship. A lot of times, what you do is push certificates or identities to things. What we do here is have an attestation policy, the pattern we saw in the beginning, and when something meets that criteria, we deliver its key material to that piece of code. Then there's what we've done beyond the mTLS communication, and that's what I'm going to show next. Next slide. Okay, just to finish this thought: one is that we have taught other systems to understand these certificates. A lot of databases and systems can do X.509 authentication, so we now have the ability to deliver these certs, which are ephemeral and rotate, and avoid using usernames and passwords. You have a strongly attested identity for this piece of code. And again, one of the things we can do, as Frederick was showing, is go down to the hardware level, to the TPMs, verify that whole stack from chassis all the way up, deliver that identity, and rotate it; now the workload can take that identity and talk to databases or to other clouds. We're also going to show the JWT OIDC federation with AWS, where you don't have to distribute those heavy tokens but instead use our tokens to talk to these other systems. I'll stop right there.
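A tight rotation policy like the one just described can be sketched as a simple half-life check. The fraction and timestamps here are illustrative choices, not SPIRE's actual defaults.

```python
# Sketch of a tunable rotation policy for push-based SVID delivery: renew
# once a certificate has lived past a configurable fraction of its lifetime,
# so a fresh SVID is pushed out well before the old one expires.

def should_rotate(not_before, not_after, now, fraction=0.5):
    """Return True once `now` is past `fraction` of the validity window."""
    lifetime = not_after - not_before
    return now >= not_before + lifetime * fraction

# An SVID valid from t=0 to t=3600 seconds gets rotated at the half-life
# (t=1800), long before its actual expiry.
```

Because everything, from leaf SVIDs up toward the root, flows through the same push API, the same policy can drive rotation at every level of the chain.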
Thank you so much, and I'm going to hand it off to Madhu; give him a second as he sets up. Thank you so much for attending. Hey folks, how's it going? This is Madhu. Today I'm going to walk you through a couple of scenarios. The core of SPIRE is a framework to deliver and manage cryptographically verifiable identities. So why do we need this? Traditionally, we use some form of secret material. The first scenario I'm going to show you is about authenticating to a Postgres database. We have a Kubernetes workload, customer-service, which wants to connect to the customer database. Traditionally, you embed some form of secret material, either in configuration, or baked into your image through your CI/CD pipelines, or delivered through secret stores to your workloads. The idea behind SPIRE is not just to provide identities, but to use those identities to authenticate without requiring any secret or sensitive material to be embedded in the workloads themselves. The scenario I'm going to show you uses the SPIRE server and SPIRE agents, which deliver SVIDs to your customer-service; using those SVIDs, we can authenticate, validate, and create an mTLS connection between the workload and the database itself. We have a SPIRE agent running on both sides of the connection, connected to a centralized SPIRE server, and each of the workloads receives its SVIDs through the Workload API that Emiliano was mentioning earlier. Once we have that, we can establish those mTLS connections.
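The way an agent decides which identity to hand a calling workload can be sketched as selector matching against registration entries. This is illustrative only; real SPIRE selectors and the Workload API look different, and the entry below is a made-up example.

```python
# Sketch of workload attestation: the agent inspects the calling workload,
# derives its selectors, and only returns an identity when a registered
# entry matches. No secret is ever embedded in the workload itself.

REGISTRATION_ENTRIES = [
    {
        "spiffe_id": "spiffe://example.org/customer-service",
        "selectors": {"k8s:container-name": "customer-service"},
    },
]

def attest_workload(observed_selectors):
    """Agent side: return the SPIFFE ID to issue, or None if nothing matches."""
    for entry in REGISTRATION_ENTRIES:
        if all(observed_selectors.get(k) == v
               for k, v in entry["selectors"].items()):
            return entry["spiffe_id"]
    return None
```

A workload that does not satisfy the selectors simply gets nothing back, which is how the demo later breaks authentication by editing the entry rather than revoking any distributed secret.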
The first thing I want to show you is something we talked about earlier called registration entries. So what are registration entries? In this scenario, we have the customer-service, and this is its entry. What this is really saying is: I want to register this SPIFFE ID to a workload that is running on the Kubernetes cluster and has this specific container name. That is the property of the workload which the SPIRE agent attests. Whenever the workload tries to connect to the Workload API, the agent attests that the workload has this property, and only if it is satisfied does it provision an X.509 SVID and deliver it to the workload with this SPIFFE ID. There are other corresponding registration entries in our application, but the main thing I want to focus on is this additional field here. What it says is: in addition to providing the SPIFFE ID in your X.509 SVID, also set the subject CN to a value that corresponds to the database user. The idea is to use the existing authentication primitives that are native to your Postgres database and mint that into your X.509 SVID. I'll show you what the SVID really looks like; here is an example of the SVID we're using for the customer-service that is trying to authenticate to the database. If you look at the subject CN, we see the database user embedded in the SVID itself. Since that is embedded in there, we can now authenticate to our database using the authentication construct that is native to PostgreSQL. So the same SVID that you could use anywhere else can now be used to authenticate to a database. That's the first use case; let me show you the actual application. So here we have
this application. This part here is coming from the customer database in Postgres, and the web application is able to show it because it is able to authenticate to the database. Now let's go ahead and change this registration entry and disable this. What I did was change the DNS name in the registration entry to something else, so it is no longer the database user, and if we go back and refresh the page, we no longer see the customer data coming in. That's the first use case: authenticating to a Postgres database using X.509 SVIDs. The second thing, which personally to me is more interesting, is authenticating to an external public cloud like AWS without requiring any credentials. Typically, if you have a workload running on, say, a GKE cluster on Google Cloud and you want to authenticate it to Amazon RDS on AWS, you would need to somehow deliver AWS credentials to your transaction-service, which is the workload on GKE. How would you do this? Again, through your CI/CD pipeline, embedding them, or through a secret store, but you still need that secret. The whole notion of using SPIRE is to get away from storing secrets anywhere. The idea here is to set up a web PKI between your SPIRE server and AWS. In AWS, there is something called an OpenID Connect identity provider, so you can set up a federation between your SPIRE server and your AWS account and create an identity provider within AWS. Once we set up this web PKI, SPIRE transmits the public keys to your AWS setup, and those public keys can be used to validate the JWT-SVID that the transaction-service receives through its SPIRE agent. Further, this can be embedded into your roles, and the role itself can
verify and authenticate against the IAM role, which is native to AWS, and establish that secure connection between your transaction-service and AWS. The other thing is about setting up that web PKI: we are using Let's Encrypt through the ACME protocol, and that's how we do that. Just to show you what that looks like, we have a JWT token here; this is the JWT token the transaction-service uses. If we look into it, it has the subject set to the transaction-service SPIFFE ID, and an audience set for the RDS database. And if we look at the setup on the AWS console, we have a role, and this role is federated with an identity provider. This is the identity provider that receives your public keys, which it uses to authenticate the JWT-SVID. Further, there are conditions for this particular role to be assumed: one is the audience, which corresponds to the JWT token, and the other is the SPIFFE ID in the subject, which corresponds to the transaction-service SPIFFE ID. So if I go ahead and change this to something invalid and update the policy (I still need to revoke the existing active sessions), and if we look at the policy, it shows that this particular role can connect to the RDS database, which happens to be a MySQL database. If I go back and look at our application and refresh, this information, which is coming from the transaction database, should no longer come in. It takes a little while before the session expires, just a few seconds. But the idea here is to establish a secure connection between a workload running on your GKE cluster and an RDS database without requiring any sort of credentials.
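The conditions evaluated before the federated role can be assumed look roughly like the following sketch. This is illustrative: AWS itself first verifies the token's signature against the keys published at the JWKS URI and then evaluates conditions of this shape; the audience value and SPIFFE ID below are placeholders, not the demo's actual values.

```python
import time

# Sketch of role-assumption checks on a JWT-SVID: the audience and subject
# claims must match the role's conditions, and the token must not be expired,
# which is why the demo's access dies a few seconds after the policy changes.

ROLE_CONDITIONS = {
    "aud": "myrds",                                        # placeholder audience
    "sub": "spiffe://example.org/transaction-service",     # placeholder SPIFFE ID
}

def may_assume_role(claims, now=None):
    """Accept only an unexpired token with the expected audience and subject."""
    now = time.time() if now is None else now
    return (claims.get("aud") == ROLE_CONDITIONS["aud"]
            and claims.get("sub") == ROLE_CONDITIONS["sub"]
            and claims.get("exp", 0) > now)
```

Because the `exp` claim is enforced on every evaluation, short-lived JWT-SVIDs bound access tightly in time without any credential ever being stored on the GKE side.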
So this is the main use case here. While we're waiting for that to expire, I want to show you this: as part of the OIDC federation, and part of the standard, we have the well-known endpoint with the OpenID configuration, and this is the JWKS URI on which we deliver the public keys used to validate and authenticate the incoming service. Now we can see that the JWT-SVID has expired, and we no longer see the information for the transaction-service on the web app. That's pretty much the demo; at this point, I want to hand it back to Ameri for some closing remarks. Yeah, if you can just keep the slide open, it might be easier; could you bring this slide back up? Sure. Thank you, everyone, for joining. Before we go into the Q&A, I just wanted to remind folks that if you want more information on SPIFFE and SPIRE, please visit spiffe.io. We also have a very active Slack channel; it's a great place to connect with like-minded security engineers and platform engineers, all of whom are trying to adopt SPIFFE and SPIRE within their infrastructure. We're looking forward to seeing a lot of you folks on the SPIFFE Slack channel. Looking at the questions now: Emiliano, I'm going to ask this one to you, I think. Thank you for the presentation. I've also seen something related to this, Google workload identity. What are your thoughts on it? How is that different from SPIFFE and SPIRE? Emiliano, do you want to take that one, or Madhu?
Yeah, I think one of the big things about SPIFFE is that you can take it and run it across multiple different clouds or environments; it's not really tied down, thanks to the plug-in model in our solution. A lot of what we're trying to do is span these different walled gardens, where each cloud or environment has its own way of doing things. If you're on-prem, you might be using Kerberos; if you're in the cloud, each of these systems has its own way of identifying things, and what we do is sit on top of that. I don't know the particular details about that workload identity feature, but our solution works in and out of Kubernetes, and it works on plain instances. I'll take the next one too; it's from Ravish, about whether we can start going away from secrets and secrets management. It really depends on the systems. Emiliano, can you repeat the question? Sorry, the question is: does this mean we could make two totally different systems talk to each other without having any secret credentials? It depends on what systems they are. A lot of databases and queues will do X.509 authentication; if they have support for X.509 certificates, then we can tie into those systems. If a system only allows usernames and passwords, then you have to use a username and password. We are looking at adding other protocols in the open source; it's something we're exploring. But if it's X.509 or JWTs, then we can do it. One of the things you didn't see in what Madhu was doing is that when we send our JWT to AWS, we get a token back, and then we exchange that token to talk to the RDS system; that all happens underneath. But if something does OIDC federation,
you could teach it to talk to that endpoint that we expose, and then it could consume our token. So it really depends on the systems you're talking about. Do you want to add something on the GKE question? GKE workload identity, as I understand it, ties your Google IAM identity to your Kubernetes service account, and then you can use your service account to access Google-based services or anything rooted in that IAM identity. That is a fantastic way to do things; we're not saying that because you're using this, you can't use that. If the thing you're working with integrates very well with that environment or security system, that is something you can definitely bring in and tie into your infrastructure. When you start looking at how to get one cloud to talk to another cloud, or you have things crossing to on-prem and so on, then SPIFFE and SPIRE provide a much broader solution. Because it's rooted in the X.509 certificate, anything that can work with an X.509 certificate, even if it doesn't get full mutual-TLS-style support, can still get some support: you can still tie it into things like your key rotations to help establish those connections and identities, and if you use something like mutual TLS with TLS 1.3, you'll get the full benefit of it. I don't have anything more to add at this point; it's a big topic. Christian, do you want to... I think we're just about to close, so before I pass it over to Christian: if you have more questions, Frederick, Emiliano, Madhu, myself, everyone is on the SPIFFE Slack as well, so we're happy to take questions there too. Cool, great. Thank you so much for the presentation; it was great. All right, that's all the time we have for questions today.
Thank you for joining. The webinar recording and slides will be online later today. We look forward to seeing you at a future CNCF webinar. Have a great day.