So this is sort of an eclectic talk about my experience with supply chain security at Anaconda, a company I've been at for four years. Here's me looking dead inside. I'm Sebastian Awad. I've been working specifically on software supply chain security for about a decade now, mostly on the topic of trust. I promise that I will point to the projects I've worked on and what I do specifically, but there's some stuff I want to address first to define the talk and the space it occupies. I promise I'll get back to it. So a lot has changed in the last six months. The co-author of the prior version of this talk, who is a wonderful human being and my colleague in several different ways, Dr. Preston Moore, has moved on to a high-paying financial firm to correct the deprivations our academic incentive structure put him through. Beyond that, the last 12 months have been a wild ride in the security space and in the tech space. So I've made a lot of changes, and I want this talk to instead be discussion heavy. Please ask basic or advanced questions, technical or not; please interrupt me; please disagree and express confusion; ask for or give advice; feel free to share stories and I'll do the same. So first, here's the skeleton of the talk. I'll talk a little bit about the user perspective on software supply chain security, framed as four essential problems. Then I'll focus on content trust in particular, which is more of what I work on: trust architectures, The Update Framework, chains, bootstrapping trust, that stuff. Then software supply chain security of build chains, software supply chain security as architecture, then as a team sport, a little bit about systems complexity, and possibly some opining on order and chaos in the field. Okay, so first: buzzwords are a blessing and a curse. I'd like to break down what I mean by software supply chain security and by trust. And if you feel differently, please yell at me if you want.
To be crass about it, from a user perspective, the central question of software supply chain work is something like this: why is it okay for me to run this code? Is this thing okay? So let's break this question down into what code is desired, how to acquire it, and how to trust it. That gives us roughly these problems. First, project discovery: in principle, what code projects might I want? Second, project assessment: in principle, are the projects I think I want okay to use? Third, code discovery: where do I get the code? And fourth, code assessment: how do I know the code I just obtained is what it says it is? We solve the first two problems in ways that usually involve a lot of soft things: education, thinking, and social interaction. What tools have people I know heard of to help me analyze this data? There are interesting problems and developer education around these, and companies are constantly trying to make these easier for folks in order to establish market share. For the question of ongoing project assessment, SBOMs and security advisories are of course critical for determining whether or not a project is something that you want in principle. The third problem, code discovery, is interesting to me, and it's also a very dangerous place. How do you know what the official repository is for something, or whether it's a repository you should trust? Typosquatting is a well-known part of this problem: you type in something you expect and you get something different, because you don't have the right URL, the right URI. Probably more than half of my time in supply chain security has been focused very specifically on the fourth problem. Suppose I know I want TensorFlow or Django or whatever. Suppose I even know where I'm supposed to get it: I know the project name in a Debian repository, or the project name on PyPI, or the GitHub repo URL, or the App Store link. Suppose all that. I grab some code. How do I know that what I wanted is what I actually got?
How do I know that the stuff that I found on the Internet is what it says it is? In other words, is this code catfishing me? I think a good phrase to refer to this problem is content trust, or code trust. It's the ability of consumers of code and data to trust the code and data they receive from some other location or service. Put another way, this is kind of data authentication from the user side: the question of how consumers can trust the packages or content they receive. It sounds deceptively easy from a distance, but our time has shown us, and this goes back before Anaconda for me, that this is surprisingly complex. This is the domain where The Update Framework, Sigstore, in-toto, Docker Notary, Git commit signing, to some extent SLSA, even to an extent TLS, and lots of other efforts live: how do we ensure tamper protection and things like that? So, a quick aside on what I do in the content trust space before continuing. My focus in this trust category is mostly thanks to my time at the Secure Systems Lab at NYU, where I worked on some cool projects: The Update Framework, or TUF; Uptane, which is an automotive adaptation; and a bit on in-toto, which is sort of, I guess you could say, a predecessor and maybe slightly broader project than SLSA. Don't @ me for that; it's imprecise. I spent time writing reference implementations, thinking about fun edge cases, standardizing, and trying to convince automotive industry folks to do nice things, which was surprisingly successful somehow. As a disclaimer, the phrase content trust was probably something Docker Notary used first, but in fairness, Notary is based on The Update Framework, so maybe that's fair game. So for almost four years now, I've been working for the conda community and also at the company Anaconda.
I went there to put into practice some of the cool stuff we developed at the Secure Systems Lab. For context, here's a blurb about conda and Anaconda, so that I don't antagonize some of my coworkers. Conda is an open source, multi-platform package manager and environment manager. It's a community project with a governance board and rules that include some protections against corporate capture. More recently, it's become sponsored by the nonprofit NumFOCUS. It is most often used by folks who work with Python, even though it's language agnostic: you can install all sorts of sophisticated performance libraries written in a variety of languages. Conda is often associated with the data science communities, science, finance, numbers, everything, but you can also use it to install Django or Qt. It's a little like pip; it's also a little like apt, plus environment management. Beyond the package manager and environment manager itself, the conda ecosystem is sort of a broad association of related tools and repositories under the control of lots of different projects and nonprofits and companies. It includes lots of open source projects and things like conda-forge and Mamba, which are broader initiatives. The conda community includes roughly 30 million active users in any given month, the vast majority of whom use all of this for free. These are the folks we're working on protecting with what we've been developing. And to make Yanis happy: conda is not Anaconda. Anaconda is a company. Conda was originally created there, and I think most, but not all, of the maintainers of conda still work at Anaconda. We hope that will shift further over time toward more community folks. From my perspective, though, the heart of Anaconda's work is in its build pipelines. Anaconda builds open source packages for a wide range of architectures, and it makes them freely available.
We host curated repositories of internally built packages. We produce SBOMs for those. We also produce sort of improved security advisories, for those of us who are not particularly content with what's available on certain repositories. Anyone can upload their own package builds to their own channel on the anaconda.org community repository. Each channel is kind of like a mini repository, and users can find your channel and decide whether they want to trust it. I say most of this not as a means of advertising for Anaconda, but rather to capture some of the complexity. Because if you're paying attention, I just mentioned quite a lot of transitions of control from one place to another, and a lot of different pipelines. There are other things at the company; you might be familiar with PyScript, stuff like that. So I have a kind of neat job at this relatively small company where almost everything I think about is supply chain related, and a substantial portion of it is around content trust. I'm not sure how common that is right now, but our interviewing efforts regularly show me that there are not that many folks who get that chance. Anyway, like many of us, I basically work for both the company and the community. So with a bunch of specifications and reference implementations and proposals, like PEP 458 and PEP 480, in hand, I moved from a lab to Anaconda to make package installation safer for those people. Success and failure have ensued, of course; I think I'm happy with my time. Mission one for me was a fairly direct application of what we were doing before, or an attempt at direct application: content trust for users of conda and the conda ecosystem. This is basically code and metadata verification to provide tamper protection and authenticity.
It's easy to think of this as sort of a signing task, but I think the most interesting part of it is actually trust architecture and trust bootstrapping, something that operating systems get to some degree for free, but which groups like us don't, because we distribute installers. So we'll talk a little bit about responsibility separation and delegation, compromise resilience, trust architectures and chains. A particular content trust feature that we've released for conda is called conda signature verification. It has to be turned on; it's not yet on by default. The work here is broader than this feature, but I think it's a good place to start. In principle, this is about preserving install integrity for end users by verifying the package data using a chain of trust. This brings us back to the question that keeps us up at night: do I trust what I'm installing? Man-in-the-middle attacks involving malicious mirrors, diverting user traffic, et cetera, are often fairly easy to execute, and a mechanism like this can yield thousands of compromised systems, even in a smaller repository. Sometimes things are even more unsophisticated: you might be surprised even now at the number of folks you can catch just by typosquatting on a public repository. We used to intro talks like this with a big slide of logos, ask what they had in common, and then reveal the shock that all of these organizations had had their software update and package management systems taken advantage of to compromise user systems. Nowadays, after the last four years, I don't really have to scare folks like that anymore, which is nice. Those slides have gotten rusty; I don't even have them in here.
But suffice to say that most of the big, interesting cyber attacks you've heard of recently, including, maybe not so recently, careening Jeeps across the highway by remote control, have involved breaking parts of the software supply chain and package updates to get malicious code into user systems. So let's see. I guess I'll skip this. Let's go through just a plain conda install command. You tell conda to install astroid, which, for the life of me, I actually don't recall what astroid is, but that's fine. Conda fetches the repository data from whatever repository it's configured to point to. This is things like the list of packages available there, the particular build artifacts and versions, all the build information, hashes of all the individual artifacts, and, critically, dependency lists for each individual artifact: strings specifying what the dependencies are for that particular artifact. After that, we have a thing called a solver. If you're familiar with satisfiability solvers, it's a pretty cool problem, but it's kind of niche. It looks at the dependencies and pumps out basically a solution that says: hey, if you want this package, I'm going to prefer the latest version of the packages, I'm going to look at what you currently have installed in your environment, and I suggest that you install these extra packages, and then you'll have what you need based on what you told me, which in this case is just any version of astroid. But obviously it can be more complex. So the solver, which has consumed this data, gives you this list: in this case, I think it's a very old example, a particular version of astroid and a particular version of a dependency thereof. The solver does that, and maybe it issues a request to the user asking, hey, is this okay, depending on the configuration.
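To make that flow concrete, here's a toy sketch of the repository data and a naive stand-in for the solver. The structure and field names are illustrative, loosely simplified from conda's repodata.json, the hashes are dummies, and the real solver is a full satisfiability solver, not a greedy walk like this.

```python
# Illustrative repository data: each build artifact maps to its version,
# artifact hash, and dependency specs. Filenames and fields are made up
# for the example (hashes are dummy placeholders).
repodata = {
    "packages": {
        "astroid-1.6.5-py36_0.tar.bz2": {
            "name": "astroid", "version": "1.6.5",
            "sha256": "0" * 64, "depends": ["lazy-object-proxy"],
        },
        "astroid-2.0.4-py36_0.tar.bz2": {
            "name": "astroid", "version": "2.0.4",
            "sha256": "0" * 64, "depends": ["lazy-object-proxy"],
        },
        "lazy-object-proxy-1.3.1-py36_0.tar.bz2": {
            "name": "lazy-object-proxy", "version": "1.3.1",
            "sha256": "0" * 64, "depends": [],
        },
    }
}

def naive_solve(repodata, name, chosen=None):
    """Toy solver: prefer the latest version of the requested package,
    then pull in its dependencies recursively."""
    if chosen is None:
        chosen = {}
    if name in chosen:
        return chosen
    candidates = [
        (filename, meta)
        for filename, meta in repodata["packages"].items()
        if meta["name"] == name
    ]
    filename, meta = max(
        candidates, key=lambda c: tuple(int(p) for p in c[1]["version"].split("."))
    )
    chosen[name] = filename
    for dep in meta["depends"]:
        naive_solve(repodata, dep, chosen)
    return chosen
```

Asking for "any version of astroid" then yields the newest astroid build plus its dependency, which is the list the user is shown before fetching.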
And then conda fetches the actual packages from that same repository, based on a determination of which channels are preferable, and installs them. Simple case, happy path. Now let's consider a set of attacks that I think was made more famous in the mid-2000s: a basic set of man-in-the-middle attacks of the sort that used to be surprisingly effective on Linux package managers, but which today, happily, is not. Maybe you have a TLS misconfiguration issue, or TLS is just off, which is a thing that people do. Maybe you have some evil certificate authority certificate installed, or there's been a server compromise at the repository you're contacting. These things happen. One way or another, you now have a malicious repository. So you make the same install request to conda. Conda fetches repo data from this repository. That repo data goes into the solver. Let's say it's malicious; it's fancy and red. The repository can say something like: install this extremely insecure old version of this package. Or it could insert a dependency on some arbitrary malware. Or it could just say, hey, the hash of the package you're trying to install is actually this other thing, and here's this other thing, and it's malware. Then you just go ahead and install that stuff. It can lie to the solver; it can lie to you. There's not much to be done about that unless you're going through the code line by line or something, in which case I can offer you a job. So then you've installed something; you have malicious code installed. In what I've just described, there are many, many single points of failure. A judicious application of readily available cryptography solves most of this. TLS is helpful and important.
It makes those man-in-the-middle attacks a little more challenging, but it's far from a silver bullet. The transport layer is just not always secure, and we can't assume it will be; there are so many different situations. You have to distribute software to someone who's behind an air gap. You have to distribute software through a variety of mirrors that are not fundamentally dependable. You need a more solid solution based in digital signatures. So we can do a lot better by focusing on something end to end, from the repository side all the way to the user: some means of checking. In general, this is the purpose for which digital signatures exist. Someone can tell you that, yes, this is the stuff we made for you. They can identify the stuff in what is effectively an attestation about the software, and you can verify that attestation. Critically, you can sign a package as soon as it's built. We build packages, so we get to do just that. And you can verify it at the end, when a user is about to install it, providing a range of protection from build to acquisition to installation of the package. This is, and I'll show you the internals, a bit complex, but it's certainly less complex than the sum of every transport mechanism you will employ in a complex ecosystem. I intended to skip the basic crypto here, hashes and signatures and stuff like that, but it's a small crowd, so I don't have to. We can go over the overview slides; it's pretty quick. Does anyone feel like a refresher? That's a no. Okay. Cool. So, some design principles. These are inherited from The Update Framework, which, if you're not familiar with it, is now in a lot of your friends. Sigstore, for instance, uses a variety of delegation mechanisms that are, I think, derived from The Update Framework. Feel free to yell at me.
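Here's a minimal sketch of that sign-at-build, verify-at-install idea. It uses an HMAC as a stand-in for a real asymmetric signature (the actual system uses Ed25519 key pairs, so the verifying client would hold only a public key), and the metadata fields are hypothetical.

```python
import hashlib
import hmac
import json

# Stand-in for the build pipeline's private signing key. In a real
# deployment this is an asymmetric key; clients hold only the public half.
SIGNING_KEY = b"build-pipeline-secret"

def sign_at_build(package_bytes, metadata):
    """Build side: record the artifact hash in the metadata, then sign
    the canonicalized metadata (the package itself is never signed)."""
    signed = dict(metadata, sha256=hashlib.sha256(package_bytes).hexdigest())
    blob = json.dumps(signed, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return signed, signature

def verify_at_install(package_bytes, metadata, signature):
    """Client side: check the signature over the metadata first, then
    check the downloaded package against the hash inside it."""
    blob = json.dumps(metadata, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    return hashlib.sha256(package_bytes).hexdigest() == metadata["sha256"]
```

Note the ordering: signing happens as soon as the package is built, and verification happens at the last moment before install, so every transport hop in between is covered.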
So TUF is a design specification and implementation intended to provide substantial security improvements to package managers and software updaters. It does this by adding verifiable records about the state of the repository or application. We'll get to what makes it unique, which is basically the trust architecture and responsibility separation. TUF is the result of a collaboration of security researchers and engineers working on software updaters. Its design has influenced package management across industries, et cetera, et cetera. One of the key principles here is compromise resilience. In order to achieve compromise resilience, in other words, a system in which a variety of compromises are to be expected and are not fatal, you need responsibility separation. You need to be able to separate the roles and keys, and sets of keys, that are able to revoke trust in other levels of the system, to reassign trust, and to regularly rotate things. And you need that to be insulated from frequent use. The design here allows it to be very inconvenient to use these keys, making it easy to keep them offline. I'll briefly go over what we do for our root key management and signing ceremonies. Then, in order to allow that root level of trust to not be used frequently, and thereby inevitably end up put into insecure places or insecure processes, you have delegated trust from that root. You might have something like a key manager for client X or Y, or something along those lines, and then they can delegate further down that line. That's the core of what drives The Update Framework. Another critical thing, which I'm pretty sure is in Sigstore, is threshold trust. There are a lot of names for this; I think that's the one we still use. This is the notion of requiring a consensus of authorities.
So you can list four public keys for a role and say: I expect any communication from this role to be signed by two of them, or three of them, or whatever. All right. I will not go into the details of all of the roles in The Update Framework, because our system is slightly simpler, but in principle you have the root of trust, an authority for content integrity, an authority for freshness, an authority for consistency, things like that. Okay. So in practice, the way it works for conda signature verification, and most of the content trust features we have or are working on: the fundamental thing to secure is a package. For example, pyarrow version 0.9.0, a particular build, on a particular architecture, along with, critically, its metadata. The metadata for this package is part of what must be secured. You cannot just sign a package and expect that to be the end of the deal. If someone can tamper with your dependencies, you're still in a dangerous place, even if you expect all the packages themselves to be trusted. There are old packages on repositories, and we cannot always remove them; in fact, we generally can't. Among other issues. So the metadata is also something we need to protect. We produce a signature on that metadata once it has been produced, and I will show you one of those. And then there's the part that is frequently neglected, and I guess still is: great, you have a signature from some key. Do you trust it? Is it from the right authority? Who holds this key? For that, you have this trust metadata chain that I talk about as a critical part of the talk. Okay. And so you have delegation from one to the other. This incredibly noisy slide is mostly here so I can refer to it if there are questions.
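Threshold trust is easy to sketch. Again, HMACs stand in for real asymmetric signatures, and the key-holder names and secrets are made up; the point is just that a piece of metadata is accepted only when enough distinct authorized holders have validly signed it.

```python
import hashlib
import hmac

# Four hypothetical root key holders; require any 2-of-4 to agree.
ROOT_KEYS = {f"holder{i}": f"root-secret-{i}".encode() for i in range(4)}
THRESHOLD = 2

def sig_for(holder, blob):
    """Produce this holder's (stand-in) signature over the metadata blob."""
    return hmac.new(ROOT_KEYS[holder], blob, hashlib.sha256).hexdigest()

def meets_threshold(blob, signatures):
    """Count distinct authorized holders whose signature over `blob`
    verifies; accept only if that count reaches the threshold."""
    valid = {
        holder for holder, sig in signatures.items()
        if holder in ROOT_KEYS
        and hmac.compare_digest(sig_for(holder, blob), sig)
    }
    return len(valid) >= THRESHOLD
```

One signature, or one valid signature plus forged ones, is not enough; an attacker has to compromise multiple independent key holders.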
But I will blurt out a couple of maybe interesting pieces. The way we do our root keys: for the fans, we create Ed25519 keys on-card, on YubiKeys, that each key holder, this is for root, provisions themselves. They get a blank key; they can wipe it if they want. Then there are very tedious, not that long actually, but very tedious human instructions, using a minimum of possible code, so that you don't have to trust as much. The only dependency I think we currently have for the root signing process is a GPG version. We have hashes that we've saved, and rotating emails, and all this tediousness, to try to make sure that the generation of these keys is done on the hardware device so that the key material cannot be removed. These are sensibly tamper-resistant devices: if you try to mess with the device, it just erases all the keys. Each person has one of these. They provision them, and we share the public keys amongst ourselves over multiple modalities. And then we all sign, using our keys, a document like this, which maybe I can zoom into. I suspect not. Is there a way to zoom in, like Google Slides? It does not look like it. Okay, sorry. So this thing on the bottom left here is essentially the root metadata. The critical thing it does is list: here are the authorities for root (I think I've elided that because it could be long), and here are the signatures by those authorities designating this as trustworthy. I will get to the bootstrapping trick in a moment. So that's how we handle root. And there is a careful updating procedure, which comes from TUF, for how to fetch new versions, verify them using the prior version, and then discard the prior version, so that you retain this root chain.
Root can, like I said, delegate to other authorities, which might have online keys or offline keys. They might have much reduced authority. For example, we have keys for signing SBOMs. Generally speaking, it's important to get the SBOMs right, and important not to lie in your SBOMs, but if you get the wrong SBOM, you're probably not going to have your system compromised; at least there's some mediation there. Because we have different levels of authority, it's important to have responsibility separation, and that's the kind of thing this key manager role does: it can delegate to an SBOM signer, it can delegate to a package signer for XYZ repository or ABC client repository or something. Fundamentally, what it comes down to is that we have an online key. In the end, there is an online key that sits in the build pipeline, in a couple of different places depending on the particular build pipeline. Whenever a package is produced, a record is produced and a signing request goes out right away. The system that is capable of creating signatures receives the signing request, signs it, returns the signature, and then we're off to the races. So that's fundamentally the authority here. And then there are algorithms for verifying these things, which I will not trouble you with, I think. Oh, I probably should have shown this slide before describing it; this is a brief summary of it. Okay. So on the user side, you use the trust metadata to check these signatures. You check the hashes and dependencies, et cetera, all the stuff that's inside the package metadata. Once you've verified that and you trust it, it includes a hash, so you just use that hash to verify the package.
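Responsibility separation can be sketched the same way: the trust metadata records which role is delegated for which kind of artifact, and a signature is accepted only if it comes from that role's key. Role names and secrets here are hypothetical, and HMAC again stands in for asymmetric signing.

```python
import hashlib
import hmac

# Hypothetical per-role secrets standing in for asymmetric key pairs.
ROLE_KEYS = {"pkg_signer": b"online-build-key", "sbom_signer": b"sbom-key"}

# Delegations as the trust metadata would record them: which role is
# authorized to sign which kind of artifact (illustrative, not a real schema).
DELEGATIONS = {"package-metadata": "pkg_signer", "sbom": "sbom_signer"}

def sign(role, blob):
    """Produce a role's (stand-in) signature over a blob."""
    return hmac.new(ROLE_KEYS[role], blob, hashlib.sha256).hexdigest()

def accept(artifact_kind, blob, role, signature):
    """Accept a signature only from the role delegated for this artifact."""
    if DELEGATIONS.get(artifact_kind) != role:
        return False
    return hmac.compare_digest(sign(role, blob), signature)
```

The payoff is that compromising, say, the SBOM signing key does not let an attacker forge package metadata, because that key is simply not delegated for it.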
You never actually have to sign the packages themselves; you just sign this metadata over them. So, okay, I guess I can stop for questions, because it's only very slightly more complicated than that, I think. Yeah. Oh, am I providing mine? No, I'm sorry. Yeah. That's what I get for not paying enough attention in school. "Maybe you went over it and I missed it. You signed the metadata, so that addresses some of the dependency problem, right? But what about 'or latest,' where latest is not defined at the time you're signing the dependencies?" Yeah. So, if I'm interpreting correctly what you're asking: suppose there's an additional entry on the repository that is ostensibly a newer version, say version 3000000.5, a fast-forward attack in a sense. The package manager, any system that you're using your signatures in, has to know to expect a signature, and it has to know from whom to expect those signatures. You can send any metadata you want to the end user, and you can sign it with whatever key you have, but as long as you don't have the key that is in the build pipeline and trusted at the time, the client will just reject it. Now, there are configuration options. This is, as yet, not on by default, and I think even if you turn on the primary configuration, it just issues a warning. So this year I'm hoping to roll out the default-on version of this, but it is available and functioning if you want to use it. Did that answer the question? No? Okay. Yeah. [Inaudible audience question.] For those who, for some crazy reason, don't know, that's a really good question. So we're producing all the packages for this curated set of channels, and because we build them, we are able to immediately sign them.
And you can expect that anything you acquire from these channels should be signed. There is broader work: there are folks at prefix.dev, and PyPI has been trying this for some time, community repositories where you can upload things and have them signed by the repository or by an intermediary on the way there. Those exist; there are mechanisms for that. The remaining thing for us is that we have to make sure the community is in sync on this. There's a CEP, a conda enhancement proposal, that we have to push through and make sure everyone's okay with. And then we have to have a mechanism for registering: either you get a dedicated key from us, or you tell us what your public key is for what you're going to sign, or we use something like Fulcio, which, if you're familiar... oh, I, yeah, okay. So that's not something we're doing yet. It's part of the delegation system: we can say, here is the key you should expect XYZ set of packages, however defined, to be signed by, because they've registered on this custom channel. On anaconda.org you can upload whatever you want to your own individual channel. Was that the question? Okay, cool. Right. Okay. So what differentiates basic designs for this kind of content trust problem from hardened ones is generally, in my experience, where in the pipeline you make your signatures. How soon are you doing it? Are you throwing the package onto the internet first to do a bunch of things with it before you sign it? Where do you sign, and where do you verify? Then there's the recoverability and resilience of your trust architecture, which I described a little bit of. And again, that's based on responsibility separation and things like multi-key thresholds.
And then there's how trust is established regarding what keys to expect signatures from: the delegating architecture and bootstrapping trust, which I said I'd get back to, so I will. Okay. That's where a lot of the subtleties lie, and some of these are places where increasingly popular tooling out there, like Sigstore and SLSA, which are both truly wonderful, doesn't, I think, have solid answers yet, unless I'm not up to date. There's a talk after this one by Marina Moore, which I would encourage everybody to go to. It's a little more narrow, I think, and it explores how to integrate some of that into Sigstore, I believe. So take a peek. Let's see what my next slide is about. Okay, the bootstrapping problem. When you install an operating system, you get root certificate authorities, root certs, included in it, and that's part of how you maintain this chain of trust. The same is true for conda, however it is you obtain conda. And there's a further question (you can always go back; it's turtles all the way down); we can explore what this trust is based on as well. But when you install conda, however you've gotten it, there is a fixed piece of metadata in that version of conda that is your fallback root of trust. The very first time you run, that's all you've got. It might be version one of root, and that allows you to go out and get version two of root, if there is a version two (these are not frequently updated), and verify it using version one, and so on. Now, how do we trust that initial copy? Well, we do GPG sign our Debian packages. We use Windows code signing for the Windows packages. We use Apple code signing for the macOS ones. But fundamentally, it's not a perfect link there.
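That root-chaining walk can be sketched like this. It's a deliberate simplification: the single `signed_by` field stands in for a full threshold-signature check over the candidate root against the keys listed in the currently trusted root, per TUF's rotation rules, and all the field names are made up.

```python
def update_root(trusted_root, candidates):
    """Advance trust one root version at a time, starting from the
    fallback root baked into the installer."""
    current = trusted_root
    for candidate in sorted(candidates, key=lambda r: r["version"]):
        # No gaps allowed: each new root must be vouched for by its predecessor.
        if candidate["version"] != current["version"] + 1:
            continue
        # Stub for the real check: a threshold of signatures verifiable
        # with keys listed in the currently trusted root.
        if candidate["signed_by"] in current["trusted_keys"]:
            current = candidate
    return current

# Shipped fallback root (version 1) trusts key "kA"; v2 rotates to "kB",
# v3 rotates to "kC". Keys here are just illustrative labels.
shipped = {"version": 1, "trusted_keys": {"kA"}}
v2 = {"version": 2, "trusted_keys": {"kB"}, "signed_by": "kA"}
v3 = {"version": 3, "trusted_keys": {"kC"}, "signed_by": "kB"}
```

A candidate root signed by a key the previous version never trusted is simply ignored, which is what stops an attacker from handing you a fresh "root" of their own making.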
I can't assume that everybody has some sophisticated TPM. So that's where we are: we lean on operating systems and existing distribution networks for initial installer distribution, and then, once you have conda, conda updates itself, so it checks everything. Okay. I can, if it's of interest... I think I'm rapidly approaching time, so maybe I should stop. Yeah, let's skip all these changes and some examples. I'll post the slides. So do I have 40 minutes and then questions, or is it 40 minutes including questions? Oh, okay. All right. Why don't we stop there. Any other questions anybody has? Okay. I mean, I'll stick around and say more things if you want. Okay. So that was the most direct application of what we'd done, in conda. There's more to do for sure, but I'm pleased at least with the progress. One other thing I've had to learn working at a mid-sized, air quotes, "startup" that's been around for 11 years but continues to reinvent itself, is that supply chain security is architectural. In order to have a secure build network, in order to have secure processes, you depend quite a lot on the systems thinking of the tech leads and of anyone who's involved in the maintenance of these systems. That's what I mean by it's architectural. Some folks from large companies might find what I'm about to say really obvious, I don't know, but at the places I've worked, this has been an area where improvement was key. For example, it's easy to say that you need a threat model, but threat modeling means a lot of things. In almost any company, you can't drag every team lead through a sophisticated, specific threat-modeling framework. So I don't bother: we write our own minimal one.
The key, as far as I see it, is to help people think about how to think about systems and architecture. Rather than focusing on the functionality of an individual feature, they have to really understand where they're getting every piece of data they depend upon — secrets and otherwise. In practice, that has meant that in order to make progress in a lot of places, I have to make sure there is a maintained, very detailed diagram for every project: all the inputs, all the outputs, with shifts from one security regime to another delineated clearly. When you make a request to a Lambda to run something for you: where are you storing the AWS credentials? How are you going to get access to those? Are the requests being verified in some particular way? Is there some key guarding that store that you then have to go get? I have seen systems with three levels of indirection — a key is put in some vault server, which you then have to get out of LastPass or something, plus the key to access that, and so on and so forth. In a lot of these cases, all you're really doing is adding more avenues for attack and making it much more difficult to track use. So all of those things have to be on the diagram — you have to note where secrets are retrieved from. Your enemy is humans; it's time, it's complexity, it's lack of understanding, and it's lack of ownership. And if you don't have checklists for major changes that include things like "update this diagram or else this will not be merged," it won't happen. So — yeah, I'll get to that later. Buzzwords will get away from you. SLSA is magnificent, it's wonderful, and it's also an effective lever for driving organizational engagement.
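A low-tech way to keep the "where does every piece of data and every secret come from" question reviewable is to maintain the diagram's contents in a small machine-readable inventory alongside it. This is only an illustrative sketch — the component names, buckets, and vault paths below are made up, not any real system's layout:

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    inputs: list    # upstream data sources this component consumes
    outputs: list   # artifacts or services it produces
    secrets: dict   # secret name -> where it is retrieved from

# Illustrative entries only; these names are hypothetical.
system = [
    Component("build-lambda",
              inputs=["s3://build-inputs"],
              outputs=["s3://build-artifacts"],
              secrets={"aws-creds": "iam-role (no stored secret)"}),
    Component("signer",
              inputs=["s3://build-artifacts"],
              outputs=["package-repo"],
              secrets={"signing-key": "vault://signing/prod"}),
]

def secret_sources(components):
    """Flatten every (component, secret) -> source, so nested retrieval
    chains (a vault key kept in a password manager, etc.) show up in review."""
    return {(c.name, name): src
            for c in components
            for name, src in c.secrets.items()}

for (component, secret), source in secret_sources(system).items():
    print(f"{component}: {secret} <- {source}")
```

A checklist item like "update this inventory or the change doesn't merge" is then easy to enforce in review, and multi-hop secret chains stop hiding in people's heads.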
It's much easier to say "SLSA" than to say: here are my large concerns about the build system, here are the six primary attacks that really make me anxious, and how do I compare their impact against the other things you're thinking about at the top of the organization? But everybody understands a compliance framework — or at least everybody understands the advantage of talking about a compliance framework. So you do have to be aware of zealous security theater. We've gotten a lot of value from using SLSA as a lever and a focus, but you have to be careful what you're playing with in your organization. There will always be efforts to integrate trivially — which we have avoided, all of it, just to be clear; Anaconda is great — but there is always a struggle. Another thing — I don't recall who was talking about this; one of the talks on Thursday touched on it — is the threat of regulations coming early in the development timeline of security tooling. This has been a quick push. The NIST guidelines — I'm forgetting the number now — relating to the Secure Software Development Framework and life cycle: the regulations are kind of early. A lot of the tools, SBOMs for example, are frequently not well defined, and a lot of people don't know how they're consuming them. Again, yell at me if I'm out of touch — I fall behind a couple of months — but that's my take currently. Those being mandated early means that a lot of people are just going to push what they have: they're going to produce SBOMs. We, too, are in an iterative process of making our SBOMs more inclusive and interesting. If you've seen what Yocto does, I like theirs — I liked theirs a year ago; it's pretty cool.
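For concreteness, here's roughly what a more informative SBOM component entry can look like. This is a sketch in the shape of CycloneDX's components/properties structure; the property names and values are illustrative, not a standard taxonomy, and the package version is invented:

```python
import json

# Hypothetical CycloneDX-style fragment; field layout follows CycloneDX,
# but the "build:*" property names are made up for illustration.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [{
        "type": "library",
        "name": "numpy",
        "version": "1.26.4",
        "properties": [
            # Exactly the build detail many SBOMs omit: what was
            # actually compiled, with which flags and patches.
            {"name": "build:cflags", "value": "-O2 -fno-fast-math"},
            {"name": "build:patch-applied", "value": "fix-blas-detect.patch"},
        ],
    }],
}
print(json.dumps(sbom, indent=2))
```

The value of an entry like this is that a consumer can check what was actually built, not just which source version was nominally used.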
But I do see quite a lot of SBOMs that are not very good — ones that describe patches that may have been applied but haven't necessarily been, or that fail to note compiler flags, among other things. In numerical calculations at least, those details are critical: you can break your calculations if you have one package trying to use different compiler flags. Okay. So, one of the things we've done — a number of folks here have said that there is no way to have enough security professionals, and that is true. We rarely have even four, and it is certainly not enough. So leverage is critical. As I said before, it's good for all your tech leads to have fluency in certain kinds of notions. And so one of the things we're trying to do is roll out a security champions program. It's a pilot at the moment, and I'm enthusiastic — I like it. I think AWS has something similar called Security Guardians, or something along those lines, and I've seen other things out there; this is not in any way unique. But the idea is to bring everybody along. It's a team sport. Yeah — let's stop there. I'll share links to some of these things. Any last questions? Yes, I'll upload the slides to Sched; I haven't done that yet because I've made quite a lot of changes, but I will. Okay, cool. Thank you.