Hello. Can people hear me? Yes. All right. If you can hear me — yep. Thanks, Zara. I'm going to paste the meeting link again in the chat. Evan and Andres, can you guys hear us? Can we check the audio to see whether you're able to speak? This is Zara, and I'm here. Hey there. How are you? Good. Thanks for your response. Yeah, no worries. Yeah, I will definitely sync up with you. So I put the meeting document in the Zoom chat — please go in there and mark your name for attendance. We are looking for two scribes today, so if anyone can help scribe, please sign up as a scribe. Evan and Andres, are you able to test out your mics, just to make sure things are working? Hey. Okay, great. Perfect. Sorry about that. Awesome. No, just making sure that we're all good on the AV front. Okay. And we won't do check-ins because we're having a presentation today, right? Yep. Okay. I think quite a few people are at RSA today, so we may be missing some of the regulars. So I'll start off quickly with announcements and then we can jump right into the presentation — seems like we have a pretty lengthy one. Again, I'm going to paste in the document here; please sign in with your name. Announcements for today: if you haven't checked out Cloud Native Security Day, the day-zero event at KubeCon, check it out and get your tickets. And we are looking for meeting facilitators in March — I think that was probably Dan. Oh, okay — Dan is lead chair this month and next. So sign up if you're willing to facilitate a meeting; the process is in the governance docs. Also, it says we haven't had new members in the last month, and I don't think that's actually true. So a call-out: if you've been attending for a while and you're not on the member list, PR yourself in. We would love to have you officially join us.
Yeah, I'm going to paste the new member page as well. All right. So that's it for announcements — on to the SPIFFE/SPIRE assessment. I also just signed up as scribe. If somebody wants to help, it's mostly just chiming in and writing things down if I happen to ask questions or talk. We'd like to have two people so that the scribes can also feel free to participate. So yeah, I think Andres, Evan, and the other folks from SPIFFE and SPIRE are going to bring us through an overview of what SPIFFE and SPIRE are. We're going to spend about five to ten minutes on some of the results of the assessment, as well as where you can find the documents that we created from this assessment. And then the rest is going to be Q&A. So yeah, Andres and Evan, take it away. We can't hear you, by the way — or at least I can't. Are you able to hear me now? Yeah, perfect. Thanks, Brandon, and thanks, Sarah. As you said, the agenda here, given the time that we have available, is to provide a very high-level overview of both SPIFFE and SPIRE: talk about where it started, how it got to where it sits today, what the work has been throughout that time, and where we're going looking forward, and then wrap up with a summary of the security assessment that we just conducted — you being the lead reviewer, along with the help of Emily Fox, Justin Cappos, and other community members. With that said, just to set the stage: some of you may be familiar with SPIFFE and SPIRE, given they're both CNCF projects. The goal of these projects — the catalyst for them — really stems from providing an identity framework that makes it easy for workloads, for software components of a distributed system, to establish trust over an untrusted network. We're going to pull that thread and look into what that actually means, and why we would have even bothered to do it in the first place.
The projects were first accepted into the CNCF in March 2018. It is very important to think of these as solely authentication, not authorization — that framing certainly helps the conversation, because people start associating a number of things with it. One preface: SPIFFE is not authorization. Authorization is out of scope for the project. It does provide identities that can be surfaced to authorization frameworks so the two can interoperate. One great example of another CNCF project there is OPA, Open Policy Agent — SPIFFE and OPA are great complements to each other as you reason about authentication and authorization frameworks holistically. Also, it is not transport-level security. It can be used with SVIDs — we're going to talk a little bit about what those are — SPIFFE identity documents, which can be used to facilitate TLS or JWT signing. But again, that's not a component of SPIFFE nor SPIRE. Let me actually switch seats with Emily here, because I'm right in the sunbeam and starting to cook a little bit — I got to the presentation earlier than I should have. So, a trip down memory lane. It all started right around KubeCon North America 2017. The community got together as a whole to define a specification to tackle the problem SPIFFE is after. And that problem is how to establish trust between software components of complex distributed systems over untrusted networks — systems that have the properties of cloud native. A big part of that is the assumption that while a team or an organization may own the application, they may not necessarily own or manage the infrastructure it is running on. So how do we solve that? We looked around for technologies that existed at the time, TLS being one of them. It has certainly been available for a really long time, widely used for establishing trust from a browser to a server across the internet, wherever it may reside. However, we did not see a lot of TLS inside of the data center. And why would that be?
Not because it doesn't address the problem, but because getting certificates and key material for establishing TLS is quite a cumbersome problem when you look at systems that are elastic and dynamically scaled. It's very easy to put a certificate on a bare-metal server, but for a pod that may have just been instantiated, it's a harder problem to solve. So in coming together, one of the first questions was: how can we reason about automation that keeps the promises of PKI, but at high velocity and very large scale? Building around that, we said that within the SPIFFE specification we need to define a number of things. First, let's define what an identity looks like — and you can refer to the SPIFFE docs for that: it is URI-formatted, it's human-readable, and it's customizable from there. Second, the format of the identity document that proves that identity, describing a workload. For those formats, we initially supported X.509; we added bearer tokens not long after that, and there are motivations why you may want to use one over the other. And third, we defined a dead-simple API for workloads to retrieve identity documents and to validate the foreign party of the communication — validate its trust. Tying it back to TLS, the property it confers is not only establishing trust between two systems talking to each other, but establishing an encrypted channel, with all the benefits and goodness that you get out of asymmetric cryptography. We couldn't just leave it at the spec. Obviously questions arise: this all sounds great, but how can I put it into practice? So a lot of groundwork went into defining the toolchain that implements it in practice, which is SPIRE — a reference implementation, software components that people could run. If you're running Kubernetes, or any other x86 user-space system, you can start to implement the spec and have it running.
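To make the URI-formatted identity concrete: a SPIFFE ID names a trust domain and a workload path, for example `spiffe://example.org/ns/prod/sa/web`. Here is a minimal illustrative parser — this is not the official SPIFFE library, and the ID value is a made-up example:

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str):
    """Split a SPIFFE ID like spiffe://example.org/ns/prod/sa/web
    into its trust domain and workload path."""
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe":
        raise ValueError("SPIFFE IDs must use the spiffe:// scheme")
    if not parsed.netloc:
        raise ValueError("SPIFFE IDs must name a trust domain")
    return parsed.netloc, parsed.path

# The trust domain identifies the issuing authority; the path
# identifies the workload within it.
trust_domain, path = parse_spiffe_id("spiffe://example.org/ns/prod/sa/web")
```

The trust domain is what federation (discussed later) operates on; the path is entirely up to the operator's naming scheme.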
Now, fast-forwarding a year into that, with SPIFFE and SPIRE starting to grow and solve for base use cases, a lot of the work went into extending support to integrate more tightly with different cloud providers, different middleware, different orchestration systems. We designed the system to be extensible and to support future use cases. I haven't spent much time talking about attestation, so let me clarify what I mean by that. As you're describing an identity that is to be assigned to a system, in order to confer it, you need to go through a set of checks. You can define a policy and say: if this system is about this tall and looks this way, if it has these properties — this is the kernel metadata that is available, this is what the orchestration platform knows about it, this is the metadata we get from CI/CD provenance — then, tying all of this together, if those checks are met, the identity is conferred. Obviously, those checks vary from environment to environment, and there are a lot of different properties and metadata depending on the cloud providers or platforms you may be deploying SPIRE into. So we built a bunch of plugins for Azure, Google Cloud, AWS — the community contributed a big part of them. A lot of work went into integrating SPIFFE and Envoy, utilizing the Envoy SDS interface. One of the motivations for that: a lot of people were looking at the spec, writing their own apps, and pulling in libraries, teaching SPIFFE to the application. But as they were modernizing traditional apps, or things that had already been containerized, they could utilize Envoy as a proxy to deal with all the logic and just delegate SPIFFE and SPIRE to the infrastructure, without having to customize much. Having addressed interoperating across multiple platforms and environments within SPIRE, SPIFFE grew in parallel at the same time.
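The "if this system is about this tall" policy check can be sketched as selector matching: an identity is conferred only when every selector in the policy matches what the attestation plugins observed. This is an illustrative sketch — the selector names echo SPIRE's general style, but the values and the `attest` helper are made up:

```python
def attest(required_selectors: dict, observed: dict) -> bool:
    """Confer the identity only if every required selector matches
    what the attestation plugins observed about the node/workload."""
    return all(observed.get(k) == v for k, v in required_selectors.items())

# Policy: what the workload must look like to be granted this identity.
policy = {
    "k8s:ns": "payments",
    "k8s:sa": "billing-api",
    "docker:image-id": "sha256:abc123",
}

# What the plugins actually observed (extra facts are fine; missing or
# mismatched required facts are not).
observed = {
    "k8s:ns": "payments",
    "k8s:sa": "billing-api",
    "docker:image-id": "sha256:abc123",
    "unix:uid": "1000",
}

granted = attest(policy, observed)
```

Real SPIRE composes selectors from multiple plugins (cloud metadata, kubelet, kernel), which is what makes the checks portable across environments.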
We saw adoption and embracing of the spec by a lot of different open source projects, some of them CNCF. Noteworthy ones: Istio, HashiCorp Consul, Network Service Mesh, NGINX, Grey Matter. The list is not limited to those — if you know of others that we may not know of, let us know on Slack or at the end of the call. It's been really organic; it's been fascinating to see it grow. Now, some work came up that was not necessarily anticipated. As people addressed all those use cases I've alluded to thus far, establishing multiple SPIRE environments with different teams implementing them, it turned out that every system was modeled around a single root of trust. There is certainly a need to authenticate between different SPIRE implementations — or SPIFFE implementations generally; if you have a service mesh that needs to communicate with a SPIRE environment on a separate platform, the need to federate SPIRE to SPIFFE, and SPIFFE to SPIFFE, arose. At a very basic level, that is: how can a system validate the keys of a foreign system on a different root of trust without exchanging private key material — exchanging solely their public keys? That is something we've started to document. At the most recent KubeCon, Evan did a talk with a gentleman from Google about SPIFFE federation with Istio — please check that out; there's a wealth of information there on how to set up multiple SPIRE domains to federate with each other. And at the same time, as people went through the journey of reasoning about SPIFFE and SPIRE conceptually, adopting the technology, embarking on rolling it out to production, and having solved workload-to-workload — essentially service-to-service — another desire came up: if we get all of this from SPIFFE, can we extend it to connect to other types of systems that we may not have built ourselves but consume — MySQL, Postgres, name the database?
People wanted to do secure introduction without having to pre-share any key material, usernames, or passwords. Can we use SPIFFE IDs to directly authenticate to a database? And from there on: if we're able to do that, shouldn't we also be able to directly authenticate to a workload running in AWS from my on-prem environment? Can I authenticate to AWS directly? Can I get an STS token in exchange for my SVID and go to RDS, or Aurora, or any other cloud service? That is a somewhat recent use case. We have solved for it, and we did a number of demos at KubeCon San Diego of using SPIFFE alongside OIDC federation. Once again, this extends SPIFFE and SPIRE to talk to a database or a cloud service — and as an extension of that, you could also be talking about secret stores here, doing secure introduction to Vault, as another example. So with that, I've given a very quick run-through of SPIFFE and SPIRE. Taking a closer look at the governance of these projects: as I mentioned, this work hasn't just been Scytale — the corporate entity that we work for, which started the project. A lot of it has been driven by the community. Some of the community members that are project owners — I call out the names here in the slide: Joe Beda, Jon DeBonis, Mark Lakewood, Tyler Julian — these folks oversee and have ownership of different components, different parts of the two projects. As part of preparing for the assessment, we spent a lot of time on the CII Best Practices, making sure we have the right bus factor. Some of it we had to adjust and adapt ourselves to meet, but it was also very reassuring that the way we'd structured things from day one allowed us to pass a lot of it right away. Not much more to spend there. I've shared some community stats — these are available from the CNCF DevStats dashboards. We have seen a pretty steady incremental trend on every single one of these.
Number of contributors and number of contributions increase; stars have been increasing. As for SPIRE — which, again, is the actual implementation — we did the CII Best Practices badge for SPIRE. We're currently at passing, and as a recommendation from Brandon, Emily, and Justin, we're pursuing the silver badge, which is the next level up. One of the big items in there is code signing, which is actively being worked on and should be available sometime in the future. Where are we going from here? Based on where we find ourselves today: we talked about solving workload-to-workload; we have solved an initial set of use cases — authenticating to cloud providers, continuing to extend the list of plugins and platforms supported. Our guiding principle has always been the community and what is top of mind for our end users, and until very recently we reassessed: we want to make sure we enable existing end users to do very large-scale production rollouts. We're talking today on the scale of tens of thousands of nodes; aspirationally, people want to get to hundreds of thousands of nodes in the short term. So around enabling those topologies, we have some work to keep the lights on and address some technical debt: not just supporting X.509 in nested SPIRE topologies — when you have chains of SPIRE domains — but also supporting bearer tokens there. At the same time, the APIs have been stable for a while. However, there is the assumption today — the requirement today — that SPIRE agents share the same kernel as the workload. With the advent of serverless technologies, it is a little bit tricky to deploy an agent there. That, alongside some other motivations, is why we completed the design phase of API refactors for the node and registration APIs, so we can address future use cases such as agentless deployments — that being one of them, with some others as well.
Agentless deployments; custom code that you might write to snap into an orchestrator deployment system, for instance — making those things easier to do. Also, we've had the better part of a year and a half, two years' worth of organic growth on the current server APIs. They've sufficed for the time being, but as things evolve, and as we look toward cases that don't involve an agent — that just involve a workload talking to the server, perhaps — those APIs are a little bit difficult to consume, and on the back end they've been a little bit difficult to maintain as they evolved organically. So we've been doing that work, knocking out those additional use cases and making everything a lot more cohesive given all the learnings we've had in the last couple of years. And in working toward a coming release, we came to the realization: hey, we cannot neglect the user experience — making SPIRE easier to consume for newcomers or those yet to be initiated. Let's update our existing client libraries and also expand that list. Let's put a lot of attention into documentation and integration of critical use cases — conceptual how-to tutorials. Completing that release, from there on we can start to look deeper into the stack: attesting vertically from a hardware root of trust, or attesting throughout the software development life cycle. How can we integrate better with frameworks like in-toto? How can we integrate with code-signing technologies like Notary and TUF? Let's keep focusing on the experience and make sure we're not being complacent or overlooking that. Let's make sure those things are documented and understood; we can have interactive tutorials for a lot of these things. Looking further into the year — and again, these are forward-looking statements — take the work that we did in the refactors and enhance it for agentless workloads. Once we have binary signing and TPM signing designed, let's implement that work.
Leading to that — I skipped the other slide; it's going to be a little bit of a shortcut, just calling out big items here. A lot of it is day zero, day one, and from there on: let's do a lot of automation that keeps the promises for day two and makes the lives of operators easier. The goal of making identity a function of the infrastructure — as opposed to something developers need to spend a lot of time on — is fulfilling the vision of the project. With that said, I think we can checkpoint here and spend time talking about the assessment. What do you think, Brandon? Yeah — before we jump into the details of the assessment, I think this would be a good time for any questions about SPIFFE or SPIRE, clarifications; let's get through those before we jump into the assessment. I have not been watching the chat window, but I'm watching it now — I joined a little bit late. Hey, everyone, this is Saravanan here. Can you elaborate with one example? Traditionally, this is how credentials have been passed along — in a property file or a configuration file. Now, with SPIRE and the SPIFFE spec, with your SVID or whatever the token names are, this is how the new design and the new connection work. Is there a way you can frame something like that? Meaning? Credentials are short-lived. There are no hard-coded secrets here, and they're automatically rotated, customized to the intervals you set. I believe the default time-to-live is an hour. An hour — but you can fine-tune that to the requirements of your environment. I would say, traditionally, the traditional solutions push the management of secrets onto the operator, right? The best thing that we really have today is probably Kubernetes Secrets, where you can say this workload gets access to this secret, and it just becomes available to the workload on boot because the operator knows it should have access to it. That's great.
It solves one of the problems that SPIRE also solves: how do I get that first secret, that first thing, right? But it doesn't solve other problems. Somebody still has to create that secret and store it as a Secret in Kubernetes. Somebody has to manage its rotation — which, in Kubernetes, is also not especially secure. SPIRE addresses all of those problems, because we expose something directly to the workload that doesn't require authentication and can automatically rotate things as they expire. There are other, more rudimentary approaches — for instance, giving access to Vault, which again requires you to have some credential for Vault in the first place, right? Or injection through the CI/CD deployment process, which causes other problems: rotation is obviously a problem in that scenario, and you're also giving CI/CD access to all the secrets in the world in order to do it. So, the traditional approaches: number one, very few of them, if any, treat rotation as a first-class citizen. Very few of them address the secret-zero problem — how do I get the first credential in. And very few of them operate in a way that pulls management overhead off of the operator in terms of managing the lifetime and lifecycle of the secret, the bootstrapping — all that stuff is fully automated by SPIRE. So there are existing solutions that solve some parts okay and many other parts not so well; SPIRE eliminates those pains by saying: you know what, forget this secret-management stuff. What we're really looking for is strong, attested identity, and we can manage the lifecycle of those identities, the rotation of those identities, the revocation of those identities, et cetera, in a centralized and very highly automated way — removing the human from every step in the lifecycle. Most importantly, none of those tasks are manual.
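As a tiny illustration of the rotation point: with short-lived credentials, a common strategy — and roughly what automated rotation looks like in practice — is to renew at the credential's half-life rather than waiting for expiry. This is a sketch only, assuming the one-hour default TTL mentioned above; the helper name is made up:

```python
from datetime import datetime, timedelta

def next_renewal(issued_at: datetime, ttl: timedelta) -> datetime:
    """Rotate well before expiry -- here at the half-life, so there is
    always a valid credential even if one renewal attempt fails."""
    return issued_at + ttl / 2

# With a one-hour TTL, a credential issued at noon is renewed at 12:30.
issued = datetime(2020, 3, 1, 12, 0, 0)
renew_at = next_renewal(issued, timedelta(hours=1))
```

The key property is that no human ever sees or handles the credential; the agent runs this loop continuously on the workload's behalf.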
Does that answer your question? Sure, yeah — definitely, thank you. I mean, you mentioned the secret zero — that is where I think it becomes like a catch-22: someone has to create it and someone has to share it, and after you've shared it, it's out of your control. So what I understood is: initially, the problem statement you mentioned is that you are deploying your workload on an untrusted network, but you need to safeguard your workload — and that is the reason the SPIFFE and SPIRE spec evolved. Yes — there are a number of reasons; that's one of them. Providing a uniform notion of identity that is not tightly coupled to the underlying runtime or platform is another reason, right? Any time you have something in your data center that has to talk to an AWS resource, or any time you have to cross any of these IAM boundaries or platform boundaries, these things become a problem. So that's another goal. And I'll mention that the approach SPIFFE and SPIRE take to solving these problems does not require somebody to create that secret zero. All of that is automated. And in order to automate it, we lean on — as Andres mentioned earlier — the system being designed to be super extensible. All of the core logic involved in the automation of these processes is portable. So if you're on AWS, for instance, we have an AWS plugin that knows how to call the AWS API and see what the instance ID of the caller is, and assert that it is your machine, and assert that it is in this particular region, in this auto-scaling group or security group or what have you. The result of all those checks and attestations is the secret zero.
So the creation and management of that secret zero, and the injection of that secret zero — that problem is essentially 100% negated through the use of SPIFFE and SPIRE. The secret-zero problem is solved through the attestation processes that Andres mentioned before. The short of it is that you can boot a node, and you can boot a workload, under SPIFFE and SPIRE that does not have any secret baked into it or injected into it, and still go from that to: okay, now we have an identity and we've issued a key. Definitely. To iterate further on that, and on how we don't end up in the catch-22: this is policy-driven. SPIRE is not seeding the secret zero; it is not just doing the provisioning and delivery of a secret zero obtained from somewhere — we're not merely automating the passing along of a credential. We're chaining the whole sequence together: we define a policy that has to be attested against, and it just bootstraps. If this thing looks this way and it can admit proof, it can go from "I've never seen this before, I have no idea what it is" to "I understand it very well, I have attested it and fingerprinted the code — here are the keys to what it has access to." Yeah, so it's 100% contingent on whether you trust the model of attestation that you choose. If you have a cloud provider and you trust its attestation service, then in that model you've solved secret delivery, because your assumption is that you trust the cloud provider. If you don't trust the cloud provider, then you would have to anchor the attestation in some hardware root of trust or something like that. That is a great point, Brandon. There's always something you have to trust — but we spend a lot of time with end users on attestation strategies, identification strategies.
Yes, you have to trust something, and the question is what that is — but don't just trust one thing. An analogy is multi-factor authentication: what can we get from the AWS instance metadata API? What can we get from the kubelet? What can we get from the Linux kernel? What can we get from the pipeline? You can compose the information from all these different sources. Hey guys, sorry — this is Vinay here. Thanks for the explanation, but I was wondering if we could go a little deeper on a specific example. Let's say you had a workload running on a Kubernetes cluster on GKE that wanted to talk to some kind of database running on RDS or Aurora, right? So what are the points of integration? Is there a SPIRE agent running on every node as a DaemonSet, and then there is a federation? Where is the config that tells the workload to contact that particular agent and present some kind of ID, which then gets federated? How does the permissioning work, and where does the root get attributed? Does it make sense to walk through that kind of an example? Yeah — there's a handful of questions packed in there. In the interest of answering the final question — how does my GKE workload talk to S3 — I'll briefly describe some of the earlier ones and then dive into the last one. So yes, in the current model, there is an agent running as a DaemonSet; there's an agent on every node. Those agents follow this node attestation process that we've been discussing, which allows the SPIRE server to positively identify which GKE cluster the agent is coming from, which node in that GKE cluster it's been deployed on, all that kind of stuff. Every agent exposes what we call the workload API.
So the way to think about the workload API is kind of like the metadata API that AWS exposes — GCP also exposes a metadata service on every node that the node can call. The caller doesn't have to provide anything to it, but it can get something back, right? The workload API that the agent exposes is very similar in that way. It is available only on that node, and it's exposed as a Unix domain socket. So what happens is that every pod that lands on that node gets a Unix domain socket injected into it by the agent, and that socket serves the workload API. And this Unix domain socket — is it one per node, fixed at some configuration point, which the workload needs to be pointed to? Yes, there's one per node, and then the workloads get pointed at that socket. We're exploring models in some future work that may allow an agent to expose multiple domain sockets, but for now it's just one, and we inject that one socket into all the containers. And that socket is unauthenticated. So when one of these pods boots, it gets this socket injected, and the workload talks to the socket; it does not need any kind of secret or credential to do that. Okay, and then what's the next step that needs to happen? So the next step is that SPIRE has been configured with some policy, as Andres was talking about before. We teach SPIRE about the shape of your workload, so that when it recognizes this workload it gives it identity Alice, so to speak. So when the workload boots up, it talks to this Unix domain socket and says: hey, here I am, give me my identity. SPIRE says: okay, we've figured out that you're Alice — here's a JWT that proves that you're Alice. Yep. Okay. Now is the part where Alice can take this JWT and pass it to AWS. So — we talked a little bit about federation before. Federation is a way that SPIFFE trust domains can exchange their public signing keys in an automated fashion.
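The socket interaction described above can be sketched end-to-end with a toy agent and workload in one script. This is purely illustrative — the real workload API is gRPC over the socket and the agent matches callers against registered selectors; the identity here is a made-up JSON stand-in:

```python
import json
import os
import socket
import tempfile
import threading

SOCKET_PATH = os.path.join(tempfile.mkdtemp(), "agent.sock")
ready = threading.Event()

def toy_agent():
    # The agent listens on a per-node Unix domain socket. Callers need no
    # credential: the socket is only reachable from this node, and the
    # kernel tells the agent which process connected.
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCKET_PATH)
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    # A real agent would attest the caller here; we hand back a canned
    # identity document instead.
    conn.sendall(json.dumps({"spiffe_id": "spiffe://example.org/alice"}).encode())
    conn.close()
    srv.close()

agent = threading.Thread(target=toy_agent)
agent.start()
ready.wait()

# The workload side: connect to the injected socket and ask "who am I?".
workload = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
workload.connect(SOCKET_PATH)
identity = json.loads(workload.recv(4096).decode())
workload.close()
agent.join()
```

Note the workload presents nothing: trust flows from the attestation the agent and server already performed, not from anything the workload carries.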
This federation API is highly aligned with OAuth/OIDC, so a lot of OIDC providers are compatible with it. What you do is configure AWS with an external identity provider, pointed at this federation API on your SPIRE server. Then, when Alice comes along, Alice calls the STS service and says: hey, I want to exchange this web identity — to use AWS terms — for an STS token. AWS is able to validate Alice's token by using the federation API bridge. What Alice gets back is an STS token that she can now use to access S3 or whatever other AWS resource. So implicit in this are two types of configuration, right? One is on the AWS side, which has its own permissions, which says there is some identity that can be federated through this other entity, which has access to RDS or whatever. And then on the SPIRE side of things — the SPIRE server, which actually does that federation — is there another policy, and what would that policy look like? There's not really any policy required on the SPIRE side around federation; we just expose the endpoint at a configurable address. But the policy that you would define in SPIRE is about the workload. You would say: hey, you need to know who Alice is when Alice calls you. So we'll describe Alice — we have this workload called Alice, she runs in this namespace, with this service account, and it should be this Docker image ID, and all these other attributes you can tie together. When all these attributes are met, that's Alice. Right. And that's the extent of it on the SPIRE side. Sorry — then what allows Alice to talk to that RDS instance? Ah, that is an AWS-side configuration. On AWS, when you configure this token exchange, first you configure it to point back at SPIRE — SPIFFE/SPIRE is the identity source.
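Mechanically, the federation endpoint mentioned above publishes the trust domain's public signing keys as a JWKS-style document, and the validator (AWS in this example) accepts a JWT-SVID only if it was signed by a key published there. A rough sketch with a made-up key ID — the real document also carries full key material and OIDC discovery metadata:

```python
# What a bundle/OIDC-discovery endpoint conceptually serves: the trust
# domain's public signing keys, JWKS-style (fields abbreviated here).
bundle_endpoint_response = {
    "keys": [
        {"kty": "RSA", "kid": "spire-signing-key-1", "use": "sig"},
    ]
}

def validator_trusts(token_kid: str, jwks: dict) -> bool:
    """An OIDC-compatible validator accepts a token only if the key ID
    in its header matches a key published at the bundle endpoint."""
    return any(k["kid"] == token_kid for k in jwks["keys"])
```

Because only public keys are published, two trust domains can federate without ever exchanging private key material — which is the whole point of the design.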
And second, you create a mapping in AWS: you say SPIFFE ID Alice maps to this IAM role. Got it. So I think we have another question from Chase about node level versus workload level — and why do we deploy an agent and not a sidecar? Chase, do you want to say your question out loud? Yeah. There's an evolution there based on the granularity of my misunderstanding, but the base of the thing is — and it's not a question of the design decision, I'm mainly just wondering — a lot of the initial description from the previous question was talking about the node agent and authenticating at the node level. Maybe the authentication at the node level involves a metadata pull, that sort of thing — but essentially we're still trusting the node to tell us the truth. Or at least that's a reasonable assumption, or we're trusting that the node understands the workload enough — I don't know a better way to say it. I guess I'm wondering: does all the communication happen from the node itself? Does the pod itself reach out? Like, does the server say: okay, you've got brown eyes, you've got red pants, you're a good node — you're Bob. And then individual workloads on that node are able to add onto that in series and say: you have image ID whatever, and your hash of your code base is whatever. Is that an in-series thing? Is it one exchange that happens at the node level? Mainly I'm thinking in terms of a service mesh, which kind of operates in a sidecar fashion — the workloads themselves can be completely agnostic of the node, or at least that's the way we're running things now. So I'm wondering where the exchange happens for this heuristic profiling of node versus workload. Am I making any sense? I think I understand where you're going here.
So that comparison, that attestation we call it, that comparison that you were describing, you know, how tall are you, what color shorts are you, whatever, that kind of happens twice, right? So I'll back up maybe by saying that a lot of folks have a very strong desire to know not just that you're my authorized workload, but that you're my authorized workload running in the spot where you're supposed to be running, right? So the node angle is there for that reason. The way that those challenges, these attestations, work is that when the agent comes up, you know, we don't rely on anything. Like, the agent certainly communicates information to the server to say who it is, but the server has all sorts of checks and balances that it goes through to assert that that is true, right? Once that happens and that completes, the server is able to say, I know for sure that I have a connection open to instance ID 1234 in Amazon region us-west-2, or what have you, right? The result of that interaction between the server and the agent is that the server now issues identity to the agent. So everything that happens after that is done with mutual TLS between the agent and the server, using this kind of foundational platform-level identity. Beyond that, we have also taught SPIRE about the workloads, and what shape the workloads are, and who they get signed by, all this kind of stuff, right? In addition to defining the shape of that workload, we also say where that workload should be running, right? So that workload might be running on Kubernetes cluster one, or it might be running on a very specific instance inside Kubernetes cluster one, right? So it is up to the agents to measure the workload, right? And the server, for lack of a better word, trusts the agent to do that in a trustworthy way, right? But the server still has authorization control.
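A registration entry is what ties those pieces together: the workload's shape (its selectors) plus where it is allowed to run (the parent ID identifying a node or group of nodes). The sketch below is a simplified model for illustration, not SPIRE's actual API, and every ID and selector value in it is made up:

```python
from dataclasses import dataclass, field

# Simplified model of a SPIRE registration entry (not the real API).
@dataclass
class RegistrationEntry:
    spiffe_id: str   # identity to issue when this entry matches
    parent_id: str   # where the workload may run (the attested node/agent)
    selectors: frozenset = field(default_factory=frozenset)  # the workload's "shape"

# "Alice" may only run on one specific attested node, and must present
# this namespace, service account, and image ID.
alice = RegistrationEntry(
    spiffe_id="spiffe://example.org/alice",
    parent_id="spiffe://example.org/agent/k8s/cluster-one/node-1234",
    selectors=frozenset({
        "k8s:ns:prod",
        "k8s:sa:alice",
        "docker:image_id:sha256:abc123",
    }),
)
print(alice.spiffe_id, len(alice.selectors))
```

Scoping the parent ID to one node, one cluster, or a whole fleet is how the granularity mentioned later in the discussion is controlled.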
So, you know, if you have a node compromise, or an agent gets owned or something like this, you can't just call the server and ask for any arbitrary thing. You will only be able to obtain identities that the server says are the workloads that are supposed to be running on this node right now. So it's kind of a two-level process. And I hope that I'm answering your question here, so I'll stop there just to make sure that I'm going in the right direction. Yeah, you are. I think I'm with you now. I wasn't sure about the two-stage thing and really what the second stage meant, I guess, so to speak, how the workloads themselves were directed through the agent. And so that may... Yeah, so when the agent pops up and it talks to the server, and the server is able to identify it and issue an identity, one of the other things that the server tells the agent is, hey, here are all the workloads that you're authorized to run, right? And here are the shapes of those workloads. When you see a workload with this shape or that shape, here's the thing that you're authorized to hand out for it, right? Right now, the agent pulls down that list and then caches everything in advance. So if there's an outage, a server outage or something like that, things kind of continue to run. There have been some requests to maybe do just-in-time issuance, which doesn't really affect the security model, but may affect some of the scaling properties. But as of now, there is this two-step thing where the agent comes up, gets identified, understands what it's allowed to run. And then when the workload calls, the agent already knows: hey, I know that you're Alice because I see all these parts. The server told me to be expecting you. I see all these things match up. Here's your SVID. And you can control the granularity of that node attestation rule, too. That's the other big thing. So if you want to scope it down to one node, you can.
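The matching the agent does when a workload calls can be sketched as a subset check: an entry matches when every selector the server registered is among the attributes the agent discovered about the caller. This is a simplified model of the idea, not SPIRE's implementation, and the selector values are hypothetical:

```python
# Simplified workload attestation: the agent discovers attributes of
# the calling process, then matches them against the entries it cached
# from the server. Entries and attribute values are illustrative.
def match_entry(discovered, entries):
    # entries maps SPIFFE ID -> the selector set the server registered.
    for spiffe_id, required in entries.items():
        if required <= discovered:  # every required selector was observed
            return spiffe_id
    return None

cached_entries = {
    "spiffe://example.org/alice": {"k8s:ns:prod", "k8s:sa:alice"},
}
observed = {"k8s:ns:prod", "k8s:sa:alice", "k8s:pod-name:alice-7f9"}
print(match_entry(observed, cached_entries))  # spiffe://example.org/alice
```

Because the entries are cached locally, this check keeps working through a server outage, which is the availability property described above.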
You can loosen that up and apply it to a bigger part of your fleet. To step back up a level, we distinguish the two stages, the first being node attestation, the second being workload attestation. And those are separate. But obviously, yeah, once the node attests, we say, hey, here's a list of SPIFFE IDs and selectors you should look out for that are available to you. Right. Let's wrap up this question, maybe, and then we can go ahead with the assessment-specific things. I'll take from your chat message that we sufficiently answered your question. OK. I think I'll grab the sharing, and then I can go through the assessment. Sure, why don't we do that? Yeah, can you drop off the sharing? Yes, I can. Stop share. I do want to share that it was very beneficial for us to go through the formal assessment with the current process. We had conducted, well, the team had conducted an initial assessment about two years ago, Justin Cappos, Sabah, and a few other folks. So this is the second go-around, but with the structure defined it was super easy to follow, and just the level of attention we got from the reviewing team helped us identify a bunch of blind spots we had. Yeah, it's a pleasure working with you guys as well. So yeah, let's jump into it. So I have created a PR on the SPIFFE/SPIRE assessment branch, and all the information about the assessment documents is over here. Let me go through them quickly. So if you want to make some comments on the self-assessment or the summary of the assessment, we can go to this PR, and we'll put the link into the meeting notes. And so for this assessment, let me go through quickly what the summary of it was. I think the self-assessment that was brought to us by the team was very, very comprehensive, the most comprehensive one we've seen so far; it was really long. And I think this was partly because you guys have been through the process before, and a lot of the details reflect that. So I think it was really smooth.
So the summary of this: we have the security assessment, and the background summary is kind of like what we just talked about. And the recommendations that we are making for the project are: one, they have a lot of threat modeling materials that I think are very useful, and we want to see these items on the SPIFFE site. One of the other recommendations is to expand the security response team to participants outside the project. And the last one, which Andres mentioned just now, is to work towards the CII best practices badge, which SPIFFE/SPIRE is already very close to. And at the same time, our recommendation to the CNCF is, because SPIFFE/SPIRE is a security-centric project, we want to ensure that we conduct a formal review or audit for it. So, kind of like whether this would be a Trail of Bits engagement or another formal security review. And also, for the advanced uses of SPIFFE/SPIRE, where you have federation and more complex topologies, we think it would be helpful to have additional materials or information that the CNCF can use to help educate users. So the assessment itself, I'm going to go through it really quickly. This is a really long document. It talks in more detail about what exactly the goals and non-goals are. And we had a few additions to this to make the document more comprehensive, more readable to a user. So there are a lot of definitions here. There are multiple use cases, talking about mTLS in a lot of detail. So this document is meant to be readable by the general security user. And one thing that we really want to highlight in this assessment is that, on top of the very comprehensive documentation, the threat modeling that was done goes really, really deep. So we have the security analysis of the different functions, the different plugins. And then the team also created a lot of detail on the different types of attacks.
And also, they have this matrix over here, which really lays out what all the different attack scenarios are and what the risk of each is. The red here doesn't mean it's bad; it just means it's highly likely. So don't be too alarmed. What you want to look at is actually the value of the score. So yeah, I think that overall this has been a good experience working with the SPIFFE/SPIRE team. And I think we are almost out of time. So if you have anything specific to the assessment, you can write some comments in there. Andres, Evan, and folks, anything from your side? No, I think from us, we just want to make sure, obviously this is our first time going through this new process, and we just want to make sure, not necessarily that we've met the expectations, but that all the questions have been answered and that there's nothing that folks were expecting that we haven't presented or done or addressed in some way, shape, or form. No, I think we are all good. The only outstanding item would be that we will bring this up to the TOC and ask them if they want an overview of the security assessment. If not, we will just send this set of documents to the TOC. Yeah. Yeah, if not, I think this was great. And thank you for bringing us through SPIFFE/SPIRE. Yeah, thanks for that. One thing I want to call out briefly, and Brandon, you mentioned this recently, is that we wrote this without making any assumptions of prior knowledge or familiarity with SPIFFE/SPIRE. So for those who are not familiar with the project, if you read through, it's very much an introductory guide. It's not just the security properties or the modeling; there's a lot of background, concepts, information, and use cases there. So yeah, it's a good starting point if you're interested to learn more. Yeah, if you want bedtime reading, this is a good document. Emily or Justin, if you're on the call, do you have any comments or questions? I don't.
I just want to say it was excellent working with the team and working on my first security assessment with everyone. Awesome. Great. My question... I'll just add that I had a fun time doing the previous one. And I do want to, because we're also being congratulatory, I do want to throw in a slight, not really a curveball, but a thing for us to think about, which is: from the time that I did the original, like, whatever pre-assessment thing, whatever you want to call that, until now, there's been a very substantial change in that they now allow federation of identity across different SPIRE servers. And so I think a question for us to think about is, as we evolve things over time, how do we do that? Do we just add the text in the document with no indication of what's new, so somebody has to look through the commit history? Do we do it in some way of saying, here's the addendum that happened in the last year or whatever, when we do a reassessment? So that's just more of a process thing for us to think about and to consider when we're thinking about what we've assessed, what it applies to, and how that evolves over time. Yeah, I think that was a great point. I'm hoping to also share some thoughts on how we might be able to accomplish that. I think that's definitely an important point, and there certainly has been some drift since that original assessment was done. That assessment took a very long time, and I was very involved. And so a low-overhead way to kind of keep these things updated would very, very much be appreciated, making sure that over its life it stays relevant. Awesome, well, thank you everyone. So before... Dan, is Dan Shaw here? Dan Shaw is not here. So, yes he is, but maybe he's not on. So Dan and I were gonna stay after, and on Tuesday is the TOC meeting where we present monthly from the SIG. So we have some draft slides if, you know, anybody wants to join a small group, right?
Like, I'm particularly thinking Brandon, Justin Cappos, like whoever's, you know, in some kind of a project lead role. If you have something, you can also just tell us in Slack, but if anybody wants to stay on, we're just gonna work through the slides a little bit, about what we're gonna present on Tuesday, if you happen to be free; otherwise just ping us on Slack if you want something included to report back to the TOC. So now that I've unmuted myself on here. All right, so everybody here is free to go. Hey Sarah, just real quick, there are registration numbers in the SIG Security events channel, if you wanna incorporate those in your slides. Will do. All right, thank you everyone. Thank you. Hi. Let's see if I can get back on Slack video. Slack... Zoom. Ideally we would like to stop the recording and start it again so that we don't have this, like, epic one. Should we all leave and then come back? Well, I think we can... who signed in as SIG Security? I'm gonna sign out. Or we can jump over to my Zoom. Well, I'll just... I don't wanna interrupt. So why don't you bring up the slides, I'll go log in as SIG Security, unless you wanna... whatever you wanna do, Dan. I hate to do this, but I have another meeting that's gonna start in a minute. Okay, do you wanna just tell us what you're...? Yeah, I mean, other than the stuff that's kind of obvious from where we're at with assessments, Brandon and I have been working a bit on the landscape thing and have some progress and updates there. I don't know what else. I mean, I have more personal things that I've been doing with the Notary v2 stuff and things, but I don't really have anything else to report to the TOC that I can think of, unless there's something you think I might be missing. Oh, I just counted up: you, Brandon, Justin, and Emily, who's not here, are voted in as tech leads. We have four of them. Congratulations. Yay. So last night, Dan, I added a slide.
Let's just stay here. I'll tell... Nice one, yep. Good job. Can you bring up the slides? I put... like, I made a... I'm disembodied audio. Oh, I'll find it. I'm sorry, but I am going to have to head out. Oh, yeah. Feel free to jump. I'll ping you the slides if you want to review. Yeah, and if anything else comes up, just ping me on Slack and I can provide more context. So thank you. And I'll try to watch the recording of this afterwards and see what I missed. Okay. Thank you. Oh, yeah. Don't worry about this. We'll just send you the slides so you can review them. Thanks, Justin. I'm grabbing them. So I'll stick them in the chat. And I'll project to do this. All right. So the one I added, I just, like, took the email and kind of shortened it so that it would be just highlights and pictures. So, like, I'll say, I think it would be good for the three of you to review it and make sure it's accurate. Feel free to... Could you... my name has a misspelling. Oh, what's your name? It's B-R-A-N-D-O-N. Oh, not Brendan. Oh my God, I spelled it wrong when I sent it to everybody in the world. I'm so sorry. I'm going to let you... See, usually I have this autocorrect thing. Emily... And we integrate all the tech leads into our monthly reporting. Welcome, welcome, to the administration of SIG Security. We have a new member this morning. Yeah. Oh, you already had to do it. Okay. Yeah, we have so many people that are on the calls and don't actually sign up. Yeah. We... this already happened. The reason why I listed team meetings was because that was happening, and that was one of the things that I thought folks might want to go back and... Yeah. Maybe, like, you know, recent... I would do that, or like... We said highlights. Maybe we can say we did the Cloud Custodian one, which is really from December, but it's sort of top of mind. Right. And then maybe put the date on here.
And I think if we link these under a bigger "recent highlights"... I don't know if there are any other presentations in the last stretch that we want to highlight. If somebody can look back in the... I'll look at it. We did the one with Jonathan Meadows. Oh, that was great. Yeah, let me find the date for that one then. It was months ago, I think, but I don't know that we talked about it. Well, we actually have past things here. 22nd January. Yeah, yeah. So it was... because of the vote, we didn't have a February one. So then threat modeling, he said that was January 27th. Yep. Okay, so I think if we link the videos, that would be cool. I want to put a comment to remember to do it. And then we also want to have Cloud Native Security Day. Every time you say day zero, I think you're gonna talk about a zero-day exploit. Yeah. And then, do we want to... you recently made a triage board. I'm still working on it, it's not ready yet. I wonder if our project tracking board is up to date. Oh, we should have the... this is a good way to talk about security, but I have not done anything on the microsite. JJ is working on the pod. I don't know what happened to the policy white paper; it never really got finished. I did ping Howard about, like, can we land that PR. Oh, the map. Is this what you're working on, Brendan? Sorry, if I... the landscape stuff. Well, representative... would it be accurate to say that the work you're doing falls under this issue, or is there a different... There's a totally different thing. This was kind of looking at the old landscape. I don't think... Justin, I created an issue for it, which we should... Can you take a look at this and see whether, if we're not going to do this work and we're doing what you're doing instead, maybe we should modify this. Yeah, this is really like... this is a placeholder for: what do we do after we have a draft from two years ago, right? Right. So, like, next steps on the landscape. But you could also make a totally different new issue if there's other stuff
that's worthwhile in this issue, right, but it's just not what you're doing. Yeah, I think I can probably create a new issue that references this one. Okay, that'd be good. Yeah. And then we want to add the new Cloud Native Security Day. Let me just check whether we don't have anything with this thing. Oh, this we did. I'm gonna call it security landscape. Okay, well, it's really v1. We have a draft. Just say, like, beta... next iteration. Iteration two. How about that? Okay. Because I want to call it v2... because we didn't have a v1, not to be picky. But can you make it a project? If you put the project tag on it, then it automatically shows up. Okay. My add card... yeah. And the Cloud Native Security Day must not have a project tag. Okay, it's still a proposal. I think we can elevate it to a project now that we're already announcing it. Oh, and then we need the... oops, here, to reload. Is there anything else that you can think of that's, like, going on, like a project? Um, well, the specific... like, it should be a project that we just did. Well, those are on a different board. Okay, they get their own board. Let me just... up here, right here. They just appear as the first five security assessments, right? I think that's the umbrella project, and then some projects get their own board. Well, we had, like, minor things happen, like the logo and design page, the review tooling, additions to supply chain. Okay, I don't think that's big enough to be a project. Okay, we'll just... yeah, maybe just skip it. But I think that this kind of overview would be nice, and so we can do that next. Just put a note here, because it's hard for me to screenshot when I'm in this resolution. Maybe we should have a whole screen for Cloud Native Security Day, because there's the site and there's the registration. So we want to add registrations. Dan, are you still there? I am. Do you know... do you have any thoughts... can you see my screen, or are you just listening? I can. Do you have thoughts about the order, the sequence for these things?
Tech leads first? You know, I would put them in order as we present them on that first screen, right? So: tell them what you're gonna tell them, tell them the thing. And we probably don't need to do the... I guess you've removed the... Well, I removed them because it was a slide. So cool, cool. That first, and then administrivia. Yeah, we'll do Cloud Native Security Day last because it's, like, upcoming stuff, right? Okay, because then it's more in sequence order. And so we just do, like, administrative in the middle. Well, not administrative, but, like, an overview of what we're doing, really. Oh, what everybody's gonna ask about is the projects that are in the queue. So we have a different thing that we now need to manage, which is this project review... proposal review, or something like that. Maybe we should just call it "waiting projects". So there's a process... there's the new process that's been defined, right? It's not really a new process. In theory, what the CNCF has done is document the current process. However, it seemed like every project was going through a slightly different process. So, like, for example, this is a very nice thing that Liz pulled together after there were, like, a whole bunch of different PRs, right, which sets out what is actually happening: that there's this sort of low barrier to sandbox, a really significant barrier to incubation, and then incubation to graduation should be just, like, a matter of the project growing and getting better and checking, like, you know... it shouldn't really be... it's a big barrier just because there's a lot of work to do, not because we're putting a barrier in front of the project, right? It's a maturity thing, right? So it should be a natural path from incubation to graduation. And so the key thing here is, like, Cloud Custodian. I don't even remember how they were referred to us, exactly. I'm not sure that they were. I think somebody told them to go talk to us, that if they wanted to become a
CNCF project, they needed to go through the SIG. So they never filed a GitHub issue for a project proposal. They were asking me, should we do this, you know, should we finish the whole assessment review thing, or should it be short-circuited and we just do the due diligence, right, to, you know, just become a member of the CNCF first and then come back to this, right? And how I answer... the amount of work it is to just get into the CNCF without the assessment depends on whether they're going for sandbox or incubation, and that information isn't present anywhere, right? So instead of asking them, I was like, oh, we missed a point in the process, please file this GitHub issue, right? Nothing should really come to us until the project has filed a GitHub issue, but there are all these projects that are in, like, out-of-order states, right? And then what we have to figure out... so we have now two projects, Dex and Camu, who've been referred to us, who have filed this, who are going through what is now the well-documented sequence of things to become a sandbox project. And so we have to figure out, okay, how are we going to handle that? So, presentation: is this the assessment or just a general presentation? Well, this is... so I think we have to decide what we want to ask of them, right, when they do this. And we had talked about... our plan of record was that we won't ratify anything about the assessment until after the first five, but that generally we would ask people to do a self-assessment before they came into the CNCF, but not block approving them based on our review. And now, wait, so there's, like, a... now, what ended up in the document is that actually the SIG review is a really low bar. It, like, specifically says it's a very lightweight review and that it is not due diligence. So this was... it's possible that it was just confusion on my part, but it was certainly not written down clearly. They are decoupling. So now, in the incubation phase, they have this due
diligence, where what they wrote down is that the TOC may delegate that to the SIGs. My guess is, in security, they're always going to delegate it to us, right? Like, nobody's gonna be like, yeah, I want to do diligence on a security project, when they have us, right? But it's not written down that way, right? So what we could do is... you know, what we need to do is sort of figure out where our assessment fits in here. My guess is it fits in here. We're not gonna, like, ratify anything until after we do the first five and we clarify our process and we know how long it takes, blah blah, but we should have, like, some theory of how we do this. But right now we're over here in the process, right, where the SIG assessment... the TOC has defined that the project presents to the SIG, and this is for any SIG, right? We can tweak it and say, hey, you're doing SIG Security, we want to do XYZ, which is a little different, right? That's fine, as long as it's sort of in the spirit. But the spirit is it's lightweight, right? It's not due diligence. And I also, like, don't want to get into the situation where we ask this project, like we did with Keycloak, to do a ton of work, and now the TOC is gonna reject you. Yeah, that was kind of sucky. So the self-assessment... it says one to two months here, which sounds like the SIG assessment, right? So I think this is also allowing for... there's a queue. Oh, okay. Suppose we get an influx of ten projects, right? Okay, so set the expectation... well, it might take a little while, we might have a full calendar. Okay, so one to two months is the total waiting plus processing time, not just processing time. Well clocked. Okay. Yeah, good, good, good. So we have this not-yet-defined process, right, for how we do this. You know, the default is, like, you know, chairs and tech leads just do whatever. And then there's a PR here for... well, there is a PR for a template, because a bunch of the chairs of the
other SIGs were like, oh my god, we're gonna have to do all this, and they're really small. Like, they've got, like, three to five people in the SIG, right, in some of the newer SIGs, and they're like, oh my god, having to do all this stuff. And so... and I love that they've made a template, right? So they're basically like, okay, let's have a template of, like, what are all the things. And so we don't want it... like, I think the idea is that whoever does this... this is my thinking, that what this is, is this allows us to potentially delegate this to somebody in the SIG to write this down, and it's sort of a... it's a little bit of a checklist. Okay, well, have they done a presentation? You know, like, do we know... like, it's a quick reference to all the things, so that we can say, like, okay, we looked at this, we understand it, blah blah blah. And so this is just a proposal, right? Like, this was written before the other doc, but this is for the TOC, right? And it's not really for us; this is us providing the TOC with information. Okay, well, it's sort of for us. So basically the TOC is saying, hey SIG, hey project, go meet with the SIG, tell them all about yourselves, they're the subject matter experts, so that then they can tell us whether they think you're a good fit for the CNCF. And we're not decisional. We're like a... we can give a soft no that the TOC can override, right? And our yes is more likely to make a TOC yes. But we're not decisional. We're just, like, a data point that the TOC takes into account when it decides. So, without the prior doc, Erin Boyd, and I can't remember which SIG she's leading, wrote this up as well: if we are going to make a recommendation to the TOC, what do we, amongst ourselves, want to have looked at, right? So that we can delegate this to a SIG member who would be like, oh yeah, I'll go meet with that project and learn all about it and write something up. And then everybody can read it and ask questions and be like, okay, yeah, here's
something about this project that makes us say yay or nay, right? Like, under what condition would we say yay or nay? What is the sort of information we want to have thought about? So... so yeah, this is all, like, not exactly for Tuesday, except that I'm anticipating on Tuesday they might say, oh, so what's your plan for evaluating these two projects, and we'll be like... so, so we could go ahead. That seems like there's some overlap with the self-assessment, right? Especially on the architecture stuff, like that, right? So, like, use cases. Yeah, this is where I think we should take advantage of this being in PR. There's, like, a bunch of this information... there's information in the proposal, there's information in the due diligence later, there's stuff in our self-assessment. Like, it might be good to do, like... this should probably be a different breakout, but, like, are there parts of our self-assessment that are redundant with other things, where we should be like, oh, this actually comes out of your project proposal or something like that, so that they don't have to keep copying the same information into lots of different places, right? But, like... but I feel like we're getting into a conversation where we should have a... we should have a meeting with all the tech leads. Yeah, this seems very... this is pretty meaty. Yeah. Maybe just for the Tuesday thing we should just be like, upcoming, right? And we can be like, there's a new process, right, upcoming. And then we want to add, like, project reviews... proposals. Because also, I think we want to invite anybody who happens to be at this meeting to chime in on the proposals, right? So we could do them a service by being like, yeah, we'd love everybody's feedback on the project proposals, right? And then, you know, that can be an input. I feel like the Dex one may be a bit controversial because of Keycloak; it's very, very similar. Yeah. Um, so we will get into that. So... my battery is about to run out. I think this
is good prep for Tuesday. We can tag-team the extra slides. And I have, like, three conflicting meetings in that slot; I'll try and listen to, like, two of them. Um, yeah. I mean, I think if you can't come, it's not the end of the world. It would be nice, but, like, I'm talking to you... your ears could be burning. All right, well, I gotta drop. Thanks, Sarah. Brandon, welcome, great to have you on board officially as a tech lead. Okay, bye everybody. All right, bye.