 Hey, cool. How are you doing? Hey, how's it going? Thank you, excited to hear your presentation today. Well, it's really the start, right? It's more of a call to arms than anything else, but it's really trying to gather people together so we can focus on this stuff. Excellent. It raises many, many more questions than answers, that's for sure. But it's the start. Hello, everyone. Good morning. I'll be facilitating the meeting today, as always. Maybe we'll just give everyone a couple more minutes to join and then we can get started. And while we're waiting for everybody, I'd love it if one or two people could volunteer to take some notes for today's session. Much appreciated, thank you. And I've also pasted the link to our Google Doc, so please sign in, and if you have anything that you'd like to bring up and discuss before we kick off the presentation, we can most certainly do that. I don't see the link. It's in the chat. Do you see that in the chat? I just see "please sign in". Sorry, I can see it on my view. Let me do that again. Thank you, Brandon. Do you see it now? Yes, thanks for that. We'll get started in just a minute. Okay, I think we've waited for the customary three minutes, so let's get started. Thank you, everyone. Good to see everyone again. Today's agenda: I have a very quick update on the PRs and the tagging of some of the PRs, and I'll talk about that; it looks like Brandon has an update, so we'll get to that; and then we have a presentation that will be delivered by Jonathan Meadows on the software factory. So let's get started. My update: from last week's meeting, I took up an action item to go ahead and tag related issues based on the cloud native security white paper. 
There are a lot of efforts in terms of a webinar that we want to deliver, as well as mini and micro blogs that we're trying to plan out. So those issues are there; if you're interested in participating on them, please go ahead. But, well, I'd be lying if I said I did it; it was on my plate, but I think Emily has gone ahead and tagged everything appropriately. I believe the tag is "white paper", in fact. So that's been done, and that's my update. Moving down the list: Brandon, would you like to talk about the update on the landscape effort? Yeah, so a couple of us got together to kick off the new landscape iteration just an hour ago, so I just want to make people aware of the work that's going on there. There is a document that will be made available, but I will post the issue, so if you're interested in getting involved with the new iteration of the security landscape, just make a comment in there. Also, it looks like we have a couple of new faces. It would be helpful, if you're new, for us to do a round of introductions, but also, in the meeting minutes, if you can put down the organization you're with or what you do: it's usually helpful for people who may want to reach out to talk about some topics. Thanks for that, Brandon. So maybe we should do that right now really quickly. Would our new members like to introduce themselves? I'll just go real quick. My name is Chad Mack. I'm an architect at Docker, and I joined because Cormac told me to. Fair enough; I think that's how a lot of us got here. Yeah, I'm Kara. Justin invited me to the meeting. I work at Clarvis. Chad and Kara, welcome. All right, I think that rounds it up. So without further ado, Jonathan, would you like to take the stage and talk us through your presentation? Please take it away. Yep, thank you. 
So it's actually a couple of us presenting today, and this is really a presentation that focuses on a number of conversations a small group of us have had over the last year or so. Andy Martin from Control Plane is on the line; Andy, I wonder if you'd be able to share your screen so we can go through the slides. Thanks, Andy. So this really comes out of a group of conversations between Andrew Martin, Justin, Sabri and myself, and it's a call to arms. What we're looking to do here is to try and connect the multiple streams of work addressing supply chain issues, find other collaborators and interested parties, and bring them together to address those issues, part of which is around the software factory. As part of that, we're hoping to gain a shared understanding of best practice in this area, potentially creating a bit of a gap analysis of current approaches, and then looking to focus on perhaps a detailed white paper, perhaps a reference architecture and implementation, that would start to demonstrate putting those concepts into practice. So this deck brings together some of those current thoughts. It's not necessarily the voice of our companies, and it's far from a finished product; there are definitely more questions in here than answers. But it starts to map out our thoughts on the subject, and really what we're making here is a request for collaboration. So let's set the scene a bit and describe what we're looking at from a supply chain perspective. What is the supply chain? Well, any exchange of goods can be modelled. Your presenter notes are sitting on the screen on top of the slide; it's a little bit inconvenient. There we go; thanks, my beautiful assistant Andy has fixed that. I do apologize. So yes, any exchange of goods can be modelled as a supply chain, and supply chains exist for anything that's built from other things. 
Now, whether that's processed foods or pharmaceuticals or software, with the exception of the start and end of the supply chain, each link is effectively both a producer and a consumer of a product. So for a traditional supply chain, such as farm to food, we start with raw components. We source them from specified locations, we know what those raw materials are and where they came from, and we can be assured that they hopefully meet some level of standard. Those components are then taken into a plant for packaging or mixing, and we hopefully assume that they conform to the standards of processing. They're ultimately transported in refrigerated trucks and such, at specific temperatures, on to outlets, before being distributed to consumers. Now, the software equivalent of that is more complex. We have software created from many different dependencies and transitive dependencies. That software is then built and distributed over a network to an application repository, such as Docker Hub or PyPI, before finally being deployed. And in most cases of vendor-supplied software, there's really no detail on where that software came from, how it was built, or what's in it, and unfortunately it's often worse than that. It's like having no list of ingredients on a standard product you end up buying. Open source software provides these details, but it doesn't really offer a reproducible or reversible path to understand whether the compiled application was built from the same source code, or to prove that our build infrastructure wasn't compromised and that the binary being distributed is what we expected. So as you can see, there are producers and consumers at each stage of the supply chain. And importantly, any stage in that supply chain that we don't directly control or have trust in is liable to be attacked, and a compromise of any upstream stage is going to impact us as downstream consumers. So on the next slide, we can look a bit closer at that problem. 
What we realize is that we need to look at this issue holistically, from the start to the end of the supply chain, and that's why we're looking to bring others together to try and solve this problem. It's really due to the producer-consumer problem: effectively, Alice is the consumer, and maybe Bob's the producer. So the entire chain must be proven to be sufficiently secure, or we need an ability to quantify the risk, and that measure of risk can then be used to inform our security decision-making process. So, as a group of four over the last while, we started to look at this problem and split it into four focus areas. The first is the fact that software has multiple dependencies and transitive dependencies, often with limited detail on what's actually in it, and that situation gets a lot worse for closed source software. We need to address that. Secondly, we need to realize that securely ingesting code is difficult, and we need to figure out how we get an understanding of where that code came from: what are its transitive dependencies? Every time we include a dependency, we're effectively extending our supply chain, and its security, to the producer, taking on their security posture. That becomes really, really difficult to track. Thirdly, we've got to build and distribute that software, which is a very hard problem, but one we're really focusing on. And finally, we need runtime validation. We need to validate that the software we're running is the software we expect, and we need to be able to validate signatures generated by those producers' pipelines, probably in admission controllers in a cloud native sense. We have to solve all four of those areas holistically to really tackle this problem, but things being as they are, we're really focusing on the last two. And that's where we get into the software factory. 
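[Editor's note: the runtime-validation idea above, checking a producer's signature before admitting a workload, can be sketched in a few lines. This is a toy illustration only: real admission controllers use asymmetric signatures and real key distribution, whereas here HMAC with a shared secret stands in so the example is self-contained, and all names (`TRUSTED_PRODUCER_KEYS`, `admit`) are invented for the sketch.]

```python
import hashlib
import hmac

# Keys of producers whose pipelines we trust (toy trust store).
TRUSTED_PRODUCER_KEYS = {"producer-a": b"producer-a-secret"}

def sign_image(producer: str, image_digest: str) -> str:
    """Producer side: the pipeline signs the image digest it built."""
    key = TRUSTED_PRODUCER_KEYS[producer]
    return hmac.new(key, image_digest.encode(), hashlib.sha256).hexdigest()

def admit(producer: str, image_digest: str, signature: str) -> bool:
    """Consumer side: admit the workload only if the signature verifies
    against a key in our trust store; unknown producers are rejected."""
    key = TRUSTED_PRODUCER_KEYS.get(producer)
    if key is None:
        return False
    expected = hmac.new(key, image_digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

digest = "sha256:" + hashlib.sha256(b"my-app:v1 image bytes").hexdigest()
sig = sign_image("producer-a", digest)
```

The point of the sketch is the shape of the check, not the crypto: the consumer never trusts the artifact itself, only a signature traceable to a producer already in its trust store.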
Andy, do you want to continue on the next one? Yes, I'm afraid I made some screen sharing errors, so the presenter view issue is actually mine. I'll have to just do it sideways; I apologize for the side view. So what solutions already exist in this space? Well, multiple different groups are looking at this. Firstly, we have SBOMs, software bills of materials. There's a group under the National Telecommunications and Information Administration, and they define a standard data model, SPDX. There is also a group working on DBoMs, digital bills of materials, which is effectively a mechanism to distribute SBOMs in an attested form so we can trust them in our build systems. There is existing work underway to securely build software; the Department of Defense in the US is really leading the way in this domain. They've put a lot of work into this space and released a very detailed paper, the Enterprise DevSecOps Reference Design, which is linked later on, not on this slide, unfortunately. This creates a software factory: secure software delivery infrastructure that creates hardened pipelines from pre-secured and hardened components, to build something that can ultimately produce other software. So it's very much the Russian doll, turtles-all-the-way-down approach. And thirdly, as this group is already aware, Justin Cappos and Santiago Torres, with the TUF and in-toto projects, are in this space looking at the verification of the integrity of software, all achieved with signing, essentially. One of the paradoxes of open source is that it does not come with a usage guide. As such, we tend to see that generalized software, something that's not specifically intended for one purpose but can be used more generally, is more widely consumed, which means that software may potentially be used in a way it's not intended for. 
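[Editor's note: to make the SBOM idea concrete, here is a hand-rolled sketch of the kind of record an SBOM captures: a package, its version, a checksum of the artifact, and its declared dependencies. The field names loosely follow the SPDX 2.x JSON format mentioned above, but this is an illustrative example, not a validated SPDX document, and the package names are invented.]

```python
import hashlib
import json

def make_sbom(name, version, artifact_bytes, dependencies):
    """Build a minimal SPDX-style document: one top-level package with a
    SHA-256 checksum, plus one entry per declared dependency."""
    return {
        "spdxVersion": "SPDX-2.2",
        "SPDXID": "SPDXRef-DOCUMENT",
        "name": f"{name}-{version}",
        "packages": [
            {
                "SPDXID": f"SPDXRef-{name}",
                "name": name,
                "versionInfo": version,
                "checksums": [{
                    "algorithm": "SHA256",
                    "checksumValue": hashlib.sha256(artifact_bytes).hexdigest(),
                }],
            }
        ] + [
            {"SPDXID": f"SPDXRef-{dep}", "name": dep, "versionInfo": ver}
            for dep, ver in dependencies.items()
        ],
    }

sbom = make_sbom("my-app", "1.0.0", b"compiled artifact bytes",
                 {"left-pad": "1.3.0", "requests": "2.25.1"})
print(json.dumps(sbom, indent=2))
```

Even this toy version shows why an attested SBOM matters downstream: the checksum lets a consumer bind the ingredients list to a specific artifact rather than taking the producer's word for it.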
More specific software, of course, tends to be utilized in niche, specific cases. We need to solve this problem for software in general, not the special case, and as such, when we want to ingest specific, non-generalized tooling, we potentially have a problem. So we would like to increase the reusability of well-defined and security-vetted products to get truly secure supply chains, and this means moving outside of our immediate domain and securing upstream. It's not just securing our link in the supply chain; it is consuming all of the previous links and then, by extension, extending the trust that we have to our consumers. And so, into a little bit of software factory. This is not creating a factory to create software, initially; it is creating a factory to create pipelines, to finally create software. We look at a lower-level proposed model for this in a couple of slides. We're leveraging infrastructure and security as code heavily, because these things have to be well tested and reproducible. The aim of the factory is that it can be configured to build multiple types of software, whether that's infrastructure components, building a platform, security testing, or standard CI/CD: take in code, test it, and release an application out of the other side. There must be a high level of security built into a software factory, because it is an inherently trusted component of the build chain, and we also need to maintain the confidentiality of any signing keys, because of course, if malicious actors gain control of those keys, they're able to sign arbitrary (read: bad) software and inject it into our chain in a trusted manner. As with most build chains, software is committed to Git and subjected to automated procedures, choke points and quality gates, and at those gates we have the ability to enforce policy, as we see on the diagram here on the left. And there's the link to the reference design. 
A very strong recommendation to read; it's very approachable and did a huge amount of the work in getting us to this point. Very good. So of course, what we're doing here is really just standing on the shoulders of CI/CD giants; these are not revolutionary concepts. It is just a trusted deployment and propagation mechanism for the underlying infrastructure. Preventative controls mean that we are shifting things, as we all love to do, left in the pipeline; detective controls are run against the deployed systems, to the right of the pipeline; and then we bookend the security and deployment infrastructure, our trusted, delegated automation infrastructure, with, again, standards and known tooling. Notably, the skill required to achieve this level of automation, and this almost recursive or self-referential automation, is significant. But as with DevOps, as with DevSecOps, etc., the point here is about bringing software engineering rigor to automation, operations and security, in order for small teams to manage far more resources than they traditionally would be able to. Some further considerations. What we've been discussing as a group is really centered around the chain of trust here; of course, thanks to everybody who's been doing work on this, and presenting to this group, for numerous years as well. The question is: how can we attest to the work at each stage of the pipeline, and also attest to the ultimate output? For clarity, that is: each individual build stage must have a cryptographic chain of trust internally, but then, externally, our final artifact must also carry some indication, some verifiable mechanism, showing that we built it and that we trust it, so that the people we have a trust relationship with can verify that when they go to consume the artifact. 
Currently, we're looking at a proof of concept leveraging in-toto, and as we've dug into that, we've also investigated the bootstrap requirements for this system and its trusted components. It is worth noting, as borne out by recent events, that a successful supply chain attack can be very tricky to detect, as the consumer already has an inherent trust relationship with its provider. If a single provider, as John mentioned, is compromised, the attacker may target any one of N downstream consumers, and we get ourselves into sticky situations, as we have seen. So inherently, the use of open source, the use of software, the use of anything we exchange that we can model as a supply chain requires us to accept some inherent level of risk, and making sure that reasonable measures are in place to ingest software, to detect malicious software, and then to securely send it on is imperative. This is not a replacement for a hardened pipeline; it is a packaging and distribution mechanism for the hardened pipeline itself. And as ever, we assume that no control is entirely effective, and we run intrusion detection systems all over the place, because frankly, we expect to be compromised in some shape or form. Compromising the intrusion detection system is left as a thought exercise for the reader, perhaps. Okay, so here is our proposed proof-of-concept design: a high-level view of a theoretical software factory. I do apologize; that was a very quick restart, and that's the second time Zoom has crashed on me today. Let me just bring everything back up again. So yes, what we're actually building here is the infra build environment. I presume, yeah, okay, no, it's not sharing; excuse me. Yes, so what we're building here is the infra build environment: the second major box on the right. 
So, a theoretical sample project: we're using the software factory to construct this infra build environment for a theoretical secured project. We have a laundry list of inputs to the system on the left-hand side here as we look at it. The one we'll focus on at the moment is the SPIFFE SVID. This is the "bottom turtle" SVID, to use the terminology from the book Scytale published. It's really an excellent read; there are some contributors to that book in the SIG, and it's a great piece of work. Even if you just want a simple way to describe cloud native systems, the glossary at the back is excellent, so a strong recommendation. We're assuming that SPIFFE and SPIRE, as long-term entities in this group, are known, so we won't go into too much depth in this presentation. So what do we have? We have a SPIFFE SVID, a SPIFFE Verifiable Identity Document. This is an identity that we can use to identify ourselves, frankly. What we do in the top box, with our remote support services, is enable Vault to be SPIFFE-aware. We do that by using a SPIRE authentication plugin for Vault, and this means that we can externalize all our secrets and exist with just a workload identity that is linked to a trust domain. So we know what the identity is, we know where it came from, and we have a domain in which it is trusted. SPIFFE and SPIRE support federation, an extended version of this concept, which becomes far more interesting in a theoretical future version of this presentation, once we're past getting our initial proof of concept together. So that initial secret is input to an infrastructure-as-code deployment, which stands up the first software factory, TCB1, in some trusted environment. What we're seeing here is that we take all of our inputs; we've got a Makefile here, as an arbitrary task runner: Make is great, your mileage may vary. And then we build this initial trusted compute base. It's just a Kubernetes cluster. 
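[Editor's note: the workload-identity idea above can be illustrated with a few lines of parsing. A SPIFFE ID is a URI of the form `spiffe://<trust-domain>/<workload-path>`, and the basic question a SPIFFE-aware service asks is whether the presented identity belongs to a trust domain it federates with. This sketch shows only that string-level check; real SPIRE validation also verifies the SVID's certificate chain, which is omitted here, and the trust-domain names are invented.]

```python
from urllib.parse import urlparse

# Trust domains our organization federates with (invented examples).
FEDERATED_TRUST_DOMAINS = {"example.org", "partner.example.net"}

def parse_spiffe_id(spiffe_id: str):
    """Split a SPIFFE ID into (trust_domain, workload_path)."""
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe" or not parsed.netloc:
        raise ValueError(f"not a SPIFFE ID: {spiffe_id}")
    return parsed.netloc, parsed.path

def in_trust_domain(spiffe_id: str) -> bool:
    """True only when the identity's trust domain is one we federate with."""
    try:
        domain, _ = parse_spiffe_id(spiffe_id)
    except ValueError:
        return False
    return domain in FEDERATED_TRUST_DOMAINS
```

This is the "we know where it came from and we have a domain in which it is trusted" property in miniature: identity is scoped to a domain, and everything else (secrets from Vault, signing keys) hangs off that scoped identity.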
It's probably running on someone's laptop, and there is nothing particularly special about it. That trusted compute base will be used to build the next software factory and then be terminated. So it's an ephemeral build environment using the same, I say infrastructure, the same workloads and the same container images as are used in production. So we have continuity there, and everything is treated at the same trust level from that perspective, because of course we're actually proxying through key material, or if not key material, identity that can be used to retrieve key material; so we're trusting these as highly secure systems. Okay, so trusted compute base one is a Kubernetes cluster, which in this case is running Tekton. Again, it just needs a build runner, but Tekton is a direction of travel that we're interested in. It also includes the Tekton Chains project, which takes Tekton task runs and converts them into in-toto link format. Why is that useful? Well, this is our inter-build-step verification. If we have a higher-order tool, or just a build service, that is signing those individual stages, we can re-verify them later. When the artifact is constructed, we take those individual pieces of link metadata and use in-toto to verify them, and we ask the question: was this container, this artifact, subjected to the right process? Did it pass everything? And was it signed by somebody we trust? This is a component of the supply chain security trust story. So what have we done there? Well, we have taken all of the dependencies needed to build the infra build environment, we've stood them up in the trusted compute base, and we've run a build job that then stands up the infra build environment, pulling secrets from our Vault server and minting certificates as appropriate. And finally, a productionized software factory has been built: infra build environment one. 
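[Editor's note: the in-toto link idea described above, each build step recording and signing the hashes of its materials (inputs) and products (outputs) so a verifier can later re-check the whole chain, can be sketched as a toy model. This is not the real in-toto data format or API: HMAC with per-step shared keys stands in for in-toto's asymmetric signatures, and the step names are invented, but the chaining check (each step's materials must equal the previous step's products) is the core of the technique.]

```python
import hashlib
import hmac
import json

# Per-step signing keys (toy stand-ins for per-functionary keypairs).
STEP_KEYS = {"fetch": b"k1", "build": b"k2"}

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_link(step, key, materials, products):
    """Record one step: hash its inputs and outputs, then sign the record."""
    body = json.dumps({"step": step,
                       "materials": {n: h(d) for n, d in materials.items()},
                       "products": {n: h(d) for n, d in products.items()}},
                      sort_keys=True)
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_chain(links):
    """Check every signature, and that each step consumed exactly what the
    previous step produced."""
    prev_products = None
    for link in links:
        body = json.loads(link["body"])
        key = STEP_KEYS.get(body["step"])
        if key is None:
            return False  # unknown step / untrusted functionary
        expected = hmac.new(key, link["body"].encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, link["sig"]):
            return False  # tampered link metadata
        if prev_products is not None and body["materials"] != prev_products:
            return False  # inputs don't match the last step's outputs
        prev_products = body["products"]
    return True

src = {"main.go": b"package main"}
binary = {"app": b"compiled bytes"}
links = [make_link("fetch", STEP_KEYS["fetch"], {}, src),
         make_link("build", STEP_KEYS["build"], src, binary)]
```

The verifier's question is exactly the one in the transcript: was this artifact subjected to the right process, did each stage pass, and was each stage signed by somebody we trust?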
This software factory is not just used to run automated deployments, as the previous software factories were; it is also capable of building software. So it's tooled up with the appropriate pipelines to take a Java repository, or a microservices galaxy, or a monorepo, however we've configured those things in Tekton, and it can build those application artifacts, as well as supporting developers and operations, thereby creating trusted artifacts. And again, we're into build-stage signing with a trusted build: we finally have an artifact that we can trust. The obvious choice here is Notary; Notary v2 is under active development. As with all of this presentation, effort is required across various domains to get to a proof of concept. We're conscious that we are relying on things that are under development in many of these cases, but they're all things we have strong confidence in. So how does this address the supply chain problem? By signing and verifying every artifact and action that the system performs, everything produced should have a verifiable signature in the trust domain, and thus consumers of the software factory are able to assert that the output artifacts are created, tested and verified before the artifact is distributed. And then, of course, trust-but-verify is possible in the producer-consumer relationship, when we're the producer and somebody downstream is consuming the artifacts the software factory has produced. There is a secondary problem of compromised build infrastructure. What happens if somebody doesn't get the signing keys, but is able to tamper with some of the build-step containers, or is able to tamper with the source code in some circumstances? in-toto has a pattern for this, and it's actively deployed in Debian repos. If you have a Debian install, you'll be using packages that are reproducibly built and signed with in-toto; same for the PyPI registry. 
There are, God knows, actually tens of thousands, if not more, packages distributed and signed in this way. The way compromised build infrastructure is dealt with by in-toto and those projects is by running builds in parallel in geographically distributed and isolated environments. So the supply chain attack surface is only the source code, because those environments all have their own, by some definition, unique supply chains. If one is compromised, it will create an artifact with a different hash, a different cryptographic proof if you like, and when they're compared across the multiple build infrastructures, we can determine that either the build is non-deterministic or non-reproducible, or something is up with the build infrastructure itself. That provides a robust, distributed failsafe for that particular attack. Of course, if the source code is compromised, well, all bets are off; that in itself is a different problem, and there are some interesting discussions going on in the developer identity working group about whether we can ever actually link those things to one human. But I digress. So, a few more things. The keys for various activities, because we need signing keys all across this infrastructure, would reside in Vault under this model, to sign the individual build attestations. But we do need a longer-lived credential to verify those things at runtime, or later in the build. There's a question of using long-lived credentials versus build attestations that we validate at runtime; there are pros and cons to each approach, and we're looking to undertake a threat modeling exercise to try and get to the bottom of it. We have ideas, we have thoughts, but we will do a lightweight and then a more formal threat-modeling pass on this at some point in the future. And I know that a couple of other people on this call are also looking at that same problem, so it'd probably be good to get together on that one. 
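[Editor's note: the parallel-build failsafe described above reduces to a simple check: build the same source on several independent builders and compare the resulting artifact digests. Agreement suggests a reproducible, untampered build; disagreement flags either a non-reproducible build or a compromised builder. This is a minimal sketch of that comparison, with invented builder names; it does not implement in-toto's actual layout-based version of the check.]

```python
import hashlib

def artifact_digest(artifact_bytes: bytes) -> str:
    return hashlib.sha256(artifact_bytes).hexdigest()

def cross_check(builds: dict):
    """builds maps builder name -> artifact bytes.
    Returns (ok, outliers): ok is True when every builder produced the
    same digest; outliers lists builders disagreeing with the majority."""
    digests = {name: artifact_digest(data) for name, data in builds.items()}
    counts = {}
    for d in digests.values():
        counts[d] = counts.get(d, 0) + 1
    majority = max(counts, key=counts.get)
    outliers = sorted(n for n, d in digests.items() if d != majority)
    return len(counts) == 1, outliers
```

Note that the check cannot distinguish "non-reproducible build" from "compromised builder" on its own; as the transcript says, a mismatch only tells you that one of the two is true, and a human or policy engine decides what to do next.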
We're still trying to figure out which sort of credentials we would use to sign those build attestations. It's a little bit complex, hence the threat model that we're going to put together. We're conscious that there's a lot there. Is it a good time for a question? I'm close to the end; there are only a couple more. I'll wait, no worries. Thank you. Thank you. So, just running through the theoretical types of pipeline. Again, what does a software factory support? It's just CI/CD, so the answer is flexible and unbounded: almost anything. We can build application pipelines, as we know and love. We can build our infrastructure-as-code pipelines, because fundamentally we're just running stuff in containers; nothing changes here from running in your build environment of choice. And of course, we can also run our security-as-code pipelines. Now, these security pipelines are generally not going to build artifacts; they will generally produce test evidence. We can at least sign that test evidence so we have some more confidence. But really, the infrastructure and the security pipelines carry such extreme levels of trust in our organization: if that infrastructure-as-code pipeline can deploy stuff into production, it is a useful backdoor, let's say, and the same for the network routes that are likely to emanate from or link to the security and testing infrastructure. So whether these are co-tenanted on the same software factory instance, or whether multiple instances exist, or indeed, because of the highly replicable nature of the thing, software factories ultimately should be treated as close to ephemeral. They are almost throwaway, because we don't need to keep them and we can reproduce them so quickly; then it's a question of where they're located in the network, rather than being long-lived infrastructure which is attractive and available for compromise. One of the pretexts to this is a thought exercise: what can't be containerized? 
And, ad nauseam here, we think that potentially almost everything can be containerized. Some things don't make immediate sense; for example, shipping some configuration: maybe that doesn't quite make sense, and maybe our organization would prefer to use a Git repository for that more traditionally. But if we can force everything into a container, and I use the word force purposefully, it gives us consistency, a lack of duplication between different types of thing, and ultimately that is better for security. We know that we can run consistent tests against everything. What do I mean by consistent tests? I think minimum viable cloud native security is just container scanning: CVEs have been published, we've got correlation data, we should do it, it's practically free. That, by extension, should then be applied to almost everything; it's kind of the CoreOS, or Container Linux, theory. Again, taking it slightly ad nauseam: while I say this is a thought exercise, this is very much how we have been running some of these things for a number of years, especially around the infrastructure-as-code stuff. There's one thing that's more notable here: the container external image type, which is an internally consumable third-party or open source artifact. It could be Jenkins, Nginx, Vault, Sonar, GitLab, Mattermost; there's a huge range of utility in those different applications. How does that get to the software factory in a trusted way? Well, it is ingested into the organization and scanned with the same set of tools internal code is subjected to: static analysis, for actual source code, depending upon the compilation state, and then built into a container image. Again, the benefit of treating everything as a container, for standardization and reuse purposes, becomes clear. However, the complexity of consuming code by ingesting it into an organization is a super difficult problem in this domain; it's beyond the scope of this presentation. 
So we're modeling the producer-consumer relationship. What happens if we have an anonymous producer with a binary, compiled, obfuscated artifact that we need to get into our organization? Of course, that is a huge, if not intractable then certainly significant, problem. We're choosing, for now, just to look at it from the producer-consumer relationship with the software factory. But by extension, the ultimate goal would be to extend the software factory in both directions from its point in the supply chain, so that it's able to verify its producers all the way back to the source, and by source I mean the initial producer, and all the way down to the final consumer. Here is a slightly closer view of the container app software factory build type, and the point here is that most everything ends up being subjected to something a little bit like this. So, for examples: let's say a microservice, maybe a cache, maybe an admission controller, for the purposes of what we're looking at in the static security and policy tests. This looks a little bit like GitHub's Super-Linter, for example, which will just lint the living daylights out of everything it can discover in a container. Then we've got a suite of other static analysis tooling: stuff like GOS, which is actually testing the file system in the image; stuff like Conftest and OPA, which are testing the source the image is built from; Trivy, again, for the compiled, constructed container; but also GPG validation. If we want to extend trust internally to our domain, so in our organization, into our Git repository, well, human identity via GPG is more or less the only way we're going to achieve that at this point in time. PKI is difficult, but it's the best we've got at the moment. Again: commit conformance, scanning for secrets in Git; we would probably want to do these things to anything that went into a container. There is, of course, specificity around how individual languages test. 
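[Editor's note: the battery of checks just described, linting, image filesystem tests, policy tests, vulnerability scanning, signature validation, can be modelled as one uniform quality gate: run every applicable check against the candidate container and pass it only if all succeed. The sketch below shows that gate shape only; the check functions are trivial stand-ins invented for the example, not wrappers around the real tools named above.]

```python
# Stand-in checks: real gates would shell out to linters, scanners, etc.
def lint_check(target):
    return "TODO" not in target["source"]

def secret_scan(target):
    return "PRIVATE KEY" not in target["source"]

def image_scan(target):
    return not target.get("known_cves")

QUALITY_GATE = [("lint", lint_check),
                ("secrets", secret_scan),
                ("cves", image_scan)]

def run_gate(target: dict):
    """Run every check regardless of earlier failures, so the report is
    complete; pass overall only when all checks pass."""
    results = {name: check(target) for name, check in QUALITY_GATE}
    return all(results.values()), results

clean = {"source": "func main() {}", "known_cves": []}
dirty = {"source": "-----BEGIN PRIVATE KEY-----", "known_cves": ["CVE-2021-0001"]}
```

Running all checks (rather than failing fast) mirrors the "classification of the type of thing coming in" idea: the gate produces signable test evidence for every control, which is what the security pipelines described earlier distribute.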
At that point, we would look to defer to build tooling of some description. And while we have had discussions about uniformity of interface, a higher-level DSL that is "approachable to developers", in inverted commas, without laughing at myself, yes, it's almost the xkcd standards question. But the idea of a higher-order DSL, maybe something above Tekton, that provides uniformity and basically means developers don't have to do much thinking: under the hood, all of these checks are run, without exception, based upon the classification of the type of thing that's coming in. This is very much part of the software factory, but in terms of the bootstrap of trust that we've been talking about for the majority of the presentation, this is just a postscript. And I guess finally, again, attempting to use the same controls for infrastructure and security pipelines as for application pipelines just deduplicates our efforts. These systems are highly privileged: they have access, at least read access and sometimes write access, to all source code in the domain, let's say data, and of course infrastructure, when we're talking about getting these things into production. It would be a prudent use of dedicated build infrastructure to actually build the build steps that go into Tekton. So again, all of this stuff is standing on existing thinking, if you like, and one piece of that thinking is: have more than one build server, frankly. And that just about takes us to the end. I'll pass back over, and thank you for listening. Yeah, I mean, really, a lot of what we've gone through today is just our current thinking, right? Clearly, there's still a huge number of open-ended questions. What we're really trying to do is get people together and try to collate some of those best practices, so that we can start to work on the next steps together, because I think a number of people are trying to do these sorts of things separately. 
And it would just be, I think, beneficial to the community to try and collate some of those efforts and see if people are interested in doing that. And perhaps get to a point where we can define some of those standards or policies that we implement during the supply chain to make it easier to build upon. And I do need to call out again the work from the Department of Defense, the DoD Platform One team; it really is excellent. And I think it's really just looking at providing perhaps a reference implementation and standards to work through that we can then extend. So as Andy's suggesting, definitely building on the shoulders of giants in that regard. So Vinny, I think you had a question, probably a quarter of an hour ago. Great presentation. Thank you so much. So I know that this is a work in progress and it's always evolving. I was just curious: you talked about these artifacts that go from one step to the next, where one is the consumer and the other the producer, etc. Do you have a sense of what those verifiable items might be? I know one of the first things that comes to mind is obviously a container image taken atomically, but what else do you consider, so that as you take it from one step to another through the process you define, you can actually make sure that each part of that is verifiable? I'm just curious, what other aspects are there? What other asset types have you identified? I guess it's a question of what the libraries can work with. So in in-toto, it's basically generalized inputs and outputs of data, really. In order to constrain the complexity and reduce the headaches, we're thinking about just doing it with containers and then trying to squeeze as many different things into a container as possible. We're keenly aware that legacy applications will not do that without some kicking and screaming.
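The "generalized inputs and outputs" idea from in-toto mentioned above can be sketched very simply: each build step records the content hashes of what it consumed (materials) and what it produced (products), so a later verifier can check that one step's products were the next step's materials. This is a toy illustration of the concept, not the actual in-toto link format or API.

```python
import hashlib

# Toy sketch of in-toto-style link metadata: a build step records content
# hashes of its inputs (materials) and outputs (products). Field names here
# mirror in-toto terminology but this is not the real wire format.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_link(step_name, materials, products):
    """Record what a build step consumed and produced, keyed by content hash."""
    return {
        "step": step_name,
        "materials": {name: digest(blob) for name, blob in materials.items()},
        "products": {name: digest(blob) for name, blob in products.items()},
    }

link = make_link(
    "build-image",
    materials={"main.go": b"package main"},
    products={"app.tar": b"\x00binary"},
)
```

A verifier can then chain links together: the product hashes of step N must appear among the material hashes of step N+1.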
Is that the other kind of thing you were thinking of? Yeah, no, maybe I didn't understand, maybe I took it more broadly, but I thought you also alluded to the fact that as one of these assets goes through the different steps in a build pipeline, if you will, you're also able to verify the consistency and the security, or some kind of verifiable aspect, of the environment itself. Because, as you call it, I forget what TCB stood for, but the TCB: you went from the build TCB to a running TCB to the actual environment. So I thought, and it would be great, and I understand that it's evolving, but that you also had some kind of constructs to actually verify each of those environments, in addition to just these container images and the software artifacts that they wrap. The short answer is not immediately, but essentially it comes down to: what's the root of trust, and what form does that take? So it's key material of some description. If we standardize, as Kubernetes does, on X.509 certificates, then things are signed back to the root of trust, actually in an X.509. So we have the SPIFFE concept of a trust domain, which at least allows us to verify, not that things aren't compromised, but that the signing keys came from somewhere that we trust. A kind of meta-SBOM of each stage is a nifty idea that I haven't really considered. Something to be careful of there is that people shouldn't just take these certificates in the trust chain and implicitly trust them. As a rule, libraries tend to be very careful not to do this, but when people start to roll their own code to implement this stuff, we have to be very sensitive to the fact that people will likely take that approach. And there are a couple of things we can do to help mitigate that.
One of them is this: it turns out that with public-key certificates, with RSA, in the same way that you can encrypt something with a public key and decrypt it with the private key, the opposite path also works. You can encrypt with a private key and decrypt with a public key. So if we were to put in the fingerprints, or some other thing we could use to identify each parent, and use that to decrypt the chain, then you'd rather start with the fingerprint of the CA and work your way down the chain until you get to the leaf. That means you must already have gone to your trust domain in order to decrypt the first element in your list. That way you can then decrypt layer by layer until you get to your final certificate and validate it. But we do have to be very careful that we don't establish a practice where people, through lack of knowledge or through making a mistake, inadvertently put themselves in a negative situation. I have some literature on this that I can write up and put into a form I can share at a future time. It's very short on how this works. Yeah, so this is Faisal, just to add one thing to this point about digital signatures as well. I think we need to differentiate between when digital signatures are needed and when we are using code-signing certificates, right? Because with code-signing certificates, time-stamping servers come as well, where you can basically go back in time and verify whether the artifact, when it was built, like five years ago or two years ago, was valid at that time or not. So I think there are a few angles here, of course, within the X.509 domain. One is the digital signature one, and the other is the code-signing certificates. Signing with both of them, and how each is handled, is different, and we need to identify in this software factory what kind of signing we are using and where.
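The direction-of-validation point above, starting from the CA fingerprint you already trust and walking down to the leaf, rather than implicitly trusting whatever chain is presented, can be illustrated with toy certificates. Real X.509 parsing and signature verification are deliberately elided here; the dict-based "certificates" and field names are assumptions purely for illustration.

```python
import hashlib

# Toy illustration: anchor chain validation at a CA fingerprint the verifier
# already holds, then walk downward, checking each link's issuer against the
# fingerprint of the certificate above it. Real crypto (signature checks,
# expiry, extensions) is intentionally omitted.

def fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

# Fake "certificates": bytes standing in for DER, plus an issuer fingerprint.
ca = {"der": b"ca-cert", "issuer": None}
intermediate = {"der": b"int-cert", "issuer": fingerprint(b"ca-cert")}
leaf = {"der": b"leaf-cert", "issuer": fingerprint(b"int-cert")}

def walk_chain(trusted_ca_fp, chain):
    """Accept a root-to-leaf chain only if it is anchored at the trusted CA."""
    # The first certificate must BE the CA we already trust.
    if fingerprint(chain[0]["der"]) != trusted_ca_fp:
        return False
    parent_fp = trusted_ca_fp
    for cert in chain[1:]:
        if cert["issuer"] != parent_fp:
            return False
        parent_fp = fingerprint(cert["der"])
    return True
```

The key property: a chain that did not originate at the verifier's own trust anchor is rejected, no matter how internally consistent it looks.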
Image signing is one use case, but people are also looking towards signing configuration files as well, because now we are seeing a lot of YAML going around, right? People are exploring, or interested in knowing, whether you could sign these YAML files as well. So there are a lot of aspects around signing, and yeah, if you guys are interested, I would want to contribute on that as well. One thing that I do want to emphasize here: I haven't gone through this whole presentation honestly, but there was mention of Vault, right? Where all the signing keys will be stored, or something like that. I think this link has to be kind of generic in nature, because if you are going towards big enterprises, they will already have their existing HSM infrastructures, which will be backed by custom vaults, or they will have their own implementation of vaults. Completely. It's a vault, not necessarily the Vault, and you're absolutely spot on, Faisal. We're really looking at how we're signing the different artifacts throughout that chain, where we're getting signatures, where we're storing them, and what we're signing, and frankly we're just looking at different approaches to try and see which one's actually going to fit. So I think it'd be really interesting to continue our conversation with you. But to your point, yeah, as with other things within this, it needs to be generic, right? So a generic store, possibly some form of HSM; a different build pipeline, possibly not Tekton, possibly anything else, frankly. But we just needed to use one as a reference architecture to start with. Okay, great. Yeah, and I would love to contribute on this code-signing aspect, or digital signature aspect. The signing part is what I'm really interested in, and if you could continue discussions in the future, I would really like to contribute.
I will go through the DoD document as well. But I do have a background in code signing. So yeah, we'll take a look at it. Thank you. Thank you for the presentation. The presentation is great. I think this is where everything is going, and yeah, a great presentation. Thanks, guys. Well, one thing on the code signing, just to add to this: it's also good to sign certain types of metadata about the artifact. Like, if I scan it with a certain version of a code scanner for security problems, I want to know what version it was scanned with and when. That way, if I release a new version, I can actually add into the metadata, and the policy around that information, what is in scope or out of scope of my policy. So it gives me the ability to sunset older scanner versions that don't protect me, and make them fail policy as the system evolves. Anyways, I just wanted to provide that. Sorry, Steve, you had a question? Yeah, so Justin and I will work on the other V2 stuff. I'm not sure if anybody else is there too. One of the things we've been debating is X.509: is it okay to require X.509, or do we need something else? So I heard a lot of X.509 conversation here. I'm curious if people feel like that's too limiting, or has that been good? What do people feel? So I want to comment on this, right? Yes, if you are talking about open source software, or you're talking about individuals, whenever you mention X.509, people will get a bit on the back foot. They will say, okay, we need a PKI infrastructure. But if you are selling anything, or if you are building something for the enterprises, or big financials or big pharma companies or big insurance vendors, they already have pretty established PKI infrastructures.
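The scan-metadata policy idea raised above, signing the scanner version and scan time alongside the artifact, so that old scanner versions can be sunset and made to fail policy, could be sketched like this. The version floor and metadata field names are assumptions for illustration only.

```python
# Hypothetical sketch: signed scan metadata records which scanner version
# produced the scan, and policy sunsets stale versions over time by raising
# the floor. Field names and the floor value are illustrative assumptions.

MIN_SCANNER_VERSION = (2, 5)   # policy floor, bumped as the system evolves

def version_tuple(v: str):
    """Parse a dotted version string into a comparable tuple, e.g. '2.6.1' -> (2, 6, 1)."""
    return tuple(int(part) for part in v.split("."))

def scan_meets_policy(scan_metadata: dict) -> bool:
    """An artifact passes only if it was scanned by a version at or above the floor."""
    return version_tuple(scan_metadata["scanner_version"]) >= MIN_SCANNER_VERSION
```

Raising `MIN_SCANNER_VERSION` later automatically fails artifacts whose (signed) scan metadata records an older scanner, which is exactly the sunsetting behavior described.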
So for them, if you mention that your thing works with PKI, they will actually be more interested in it, because they would say, well, we already have a PKI established throughout the organization. We have figured out all the standards, all the best practices for it. And if your secure factory integrates with it, right, for them it is a plus point. For individuals, it will always be tough, because if you ask me, for my open source project, to establish everything as PKI, for me it will be difficult. I will go for GPG keys, but enterprises will prefer PKI. So yes, I do want to emphasize that PKI is a thing, and it is already established throughout the enterprises. It may be possible to consider having some metadata so you can specify what you're using. So you could say whether it's GPG or X.509 or something similar. Because the problem you run into with PKI is that, if you're running in a large enterprise, they will almost certainly require you to use PKI if it's available. But if you're an open source project, you may want to use GPG keys that you've published. And the problem that we run into with PKI on the open source path is that PKI assumes that you have a root, a shared thing, and everything falls under a specific tree, which is true in major enterprises. But the federation is not as simple when it comes to the PKI approach. And so being able to specify, or possibly even sign multiple times, like, I can sign this with both a GPG and a PKI key, would give me the ability to pick which one I want to use based upon my needs. And that does not require me to pull in the CA of some third-party organization that I may not trust, when all I may care about is: did the thing get produced with the right set of certificates, or through GPG. But anyways, just some thoughts.
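The "sign multiple times, pick the scheme you trust" suggestion above can be sketched as a signature envelope where each signature is tagged with its scheme, so an enterprise verifier can insist on X.509 while an open-source consumer checks the GPG signature. The envelope structure, key IDs, and stubbed verifiers here are all hypothetical; real verification would call into GPG or an X.509 library.

```python
# Hypothetical multi-scheme signature envelope. An artifact carries several
# signatures, each tagged with its scheme; the consumer verifies only with
# schemes it chooses to trust. Actual cryptographic verification is stubbed.

envelope = {
    "artifact": "app.tar",
    "signatures": [
        {"scheme": "x509", "keyid": "corp-ca/build-01", "sig": "<x509-sig>"},
        {"scheme": "gpg",  "keyid": "0xDEADBEEF",       "sig": "<gpg-sig>"},
    ],
}

def verify_with(envelope, accepted_schemes, verifiers):
    """Accept the artifact if any signature in a trusted scheme verifies."""
    for sig in envelope["signatures"]:
        if sig["scheme"] in accepted_schemes:
            if verifiers[sig["scheme"]](envelope["artifact"], sig):
                return True
    return False

# Stub verifiers standing in for real GPG / X.509 checks.
verifiers = {"gpg": lambda artifact, sig: True,
             "x509": lambda artifact, sig: True}
```

The design choice this captures: neither producer nor consumer is forced into a single trust model, and an open-source consumer never has to import an enterprise CA it does not trust.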
If you're in the planning sessions for that, I would love to join in on those issues, if those are still needed. Completely. Yeah. Thank you, Frederic. Chen, please go ahead. Sorry about that. Hey, this is Chen. I put a question in the chat window. The question is regarding how we use the SPIFFE ID. So I know that a lot of people generate the SPIFFE ID during runtime: for example, after the node attestation and then the workload attestation, they mount the certificate into the container at runtime. But looking at the slides today, it seems like during the build procedure, we're generating the SPIFFE ID, the SVID, and putting it together with the container. For me, this is a different usage. One is that during the build period, we generate the SVID; the other is that we generate it during runtime. So what would be the suggested way? If anybody with better SPIFFE or SPIRE experience wants to jump in, please do interrupt me. But ultimately, the SPIFFE community published this book called Solving the Bottom Turtle, which I've just dropped in. The forms of attestation that SPIFFE, and SPIRE, which is an implementation of SPIFFE, can do are based upon what it trusts. And we can define our own things. A hypothesis here would be perhaps one of two scenarios: maybe there's a disaster recovery scenario, or maybe it's just building out more infrastructure for a new project. But there would be an attestation based upon the human identity. So I know we've got workload identities here. Maybe we would link it to a physical device. But there has to be that initial trust relationship. And we went through a number of different other options while looking at this, and really SPIRE is the closest we can come to something that's well standardized and battle-tested, basically.
At the end of this book, it's got a load of interesting use cases that may be reaching the limits of my expertise, or my knowledge, let's say. I've done a little bit of work on this and a lot of thought over the past few months. And I think it's that for each of these steps in a build process, there's an agent performing that build step; maybe it's your Go build. So there is a process that we can identify, with the SHA sum of the container that's doing that. And because of that, what SPIFFE does is give you that identity. So that agent is then given that NPE, that non-person entity identity, by SPIFFE. Then we can use that within each of the build steps to say, hey, each of these agents provided these raw materials to this end product, right? And I think what SPIFFE gives you is those short-lived certificates. So you can say, hey, these raw materials will spoil if they're not put into the end product within this period of time. And I think when we have the end product, that's where we can do the final signature, using whatever we need to based upon a threat model. Maybe that's, you know, your PGP key, maybe we're using TUF, or you have a hardware device that's signing it, but that's probably going to be more organizational policy than anything else. I think, if... go ahead. No, go ahead. Cool. Please. Yeah, but I think it's that concept of these raw materials: ensuring that they don't spoil, and understanding what goes into those raw materials. And for that final attestation, we look at all those raw materials and say, hey, do we have a trace on the build to make sure that there's no debugger attached to it, are the virus definitions within a certain date, right? So then we can look at all of that, give that final attestation step, and then publish that to our infrastructure.
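The "raw materials spoil" idea above, each build step's attestation being short-lived in the way SPIRE issues short-lived SVIDs, so the final signature can only be produced while every intermediate attestation is still fresh, can be sketched like this. The lifetime, field names, and helper functions are illustrative assumptions, not SPIRE's actual API.

```python
import time

# Sketch of attestation "spoilage": each build step is attested with a short
# lifetime (in the spirit of SPIRE's short-lived SVIDs), and the final signing
# step proceeds only while every intermediate attestation is unexpired.

TTL_SECONDS = 300   # assumed short credential lifetime

def attest(step_name, now=None):
    """Issue a short-lived attestation for one build step."""
    now = time.time() if now is None else now
    return {"step": step_name,
            "issued_at": now,
            "not_after": now + TTL_SECONDS}

def all_fresh(attestations, now=None):
    """The final signature may be produced only if no raw material has spoiled."""
    now = time.time() if now is None else now
    return all(a["not_after"] > now for a in attestations)

steps = [attest("fetch-deps", now=1000.0), attest("compile", now=1100.0)]
```

If the pipeline stalls past the TTL, the materials "spoil" and the end product cannot receive its final signature without re-running the affected steps.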
And that's the bit that I perhaps didn't explain particularly well, but that's the bit we're trying to threat model and get our heads around. Because at the moment, we're using it to effectively bootstrap trust for the pipeline itself, so that we get that understanding of trust. We're looking at potentially using that to sign the actual artifacts. And then the conversation, as you're suggesting, is: do we use a long-lived credential to sign the ultimate build artifact and validate that single credential, or do we have the ability to look at the intermediate signatures for each one of those build processes? I think there are pros and cons to each. You and I have had a conversation about this separately, and I think that's definitely something we need to dig into and find out, because I want to threat model it and see what the implications are. Because at the end of the day, if you've still got a single point of exploitation where you just hack the final signature, do you get additional benefit from using those individual signatures at each individual build step? I think there is still value there. But what's the trade-off? Sorry. Two things. I wanted to say that we are almost at the top of our time, and I think that we would not be doing justice to the topics at hand. Thank you all for such a great presentation. I wonder how we could have you all back, maybe in another meeting, to continue. There were so many good topics that I think warrant a double-click. And Jonathan, if you could take the lead on figuring out, on your ticket 501, how the different folks could collaborate and sound out where the landing spot is to provide some of our inputs, that would be fantastic. Sorry to cut everyone short. I'm just trying to be conscious of everyone's time. We're at the top of it. But if it makes sense, maybe Justin, we could collaborate and see how we can have the folks back on to continue this discussion. Absolutely.
The aim is really to have more collaboration. So whether we structure it as a working group or some other kind of thing, let's put it on the ticket and we'll carry on from there. That's right. This isn't a final presentation. This is a call to arms to collect collaboration. So we're happy to work with Justin and the rest of the team to do that. Yeah. And we can schedule something in the future as well, if there are enough people who want to participate in the discussion. I just want to make a really quick announcement. We have a presentation next week which may be interesting to some of you on this call, because it's also related to signing: a public ledger for supply chain metadata called Rekor, basically trying to mimic certificate transparency but for supply chain metadata. So that's the only announcement I had for today. Vinay, do you want to close it up? Yeah. Thank you so much, Brandon. Looking forward to seeing everyone. Great conversation today. See you next week. Thank you. Cheers. Thanks, everyone. Thank you. Thanks for the presentation. Cheers.