Yeah, so I'm Jesse. I'm a senior principal software engineer here at Autodesk, and I work between our security and compliance teams and our developer enablement team. I think developer enablement means different things to different folks, but for us it's generally the glue tooling that allows our software engineers to move faster as they build the software our customers use. Which is actually why I wanted to speak with you all today, because I feel like the Sigstore tooling was exactly what I needed, at the right time, to help enable some of our software developers as we move into regulatory environments we haven't been in before.

Before I get into that, I should say a little more about Autodesk and why our workloads are unique. You see, we are a software engineering company building software for engineers. You could say that we make software for people who make things. We're a leader in design and make tools for a wide variety of industries, from entertainment to architecture and engineering to manufacturing and construction. Our customers' workloads are pretty unique, but they do have one thing in common: they depend upon our software to do all that building and making. And as such, our software engineers require and depend upon our platforms to build and make that software.

Historically, Autodesk engineering culture has been strongly oriented towards developer autonomy. Early on, this benefited Autodesk; it gave our engineers a lot of sway over the way they innovate and implement their products. It also eased acquisitions, since acquired teams could come in and remain largely autonomous. But over time, that autonomy led to a lack of oversight and made things more difficult. And as we moved into the cloud with ever more connected solutions, our security and compliance teams needed to tap the brakes on that agency. Because of that, we decided to make security and compliance first-class citizens inside our platform here in developer enablement.

This has been especially important this year, as we've been moving towards a FedRAMP Moderate impact level authorization. If you're not familiar with FedRAMP, the Federal Risk and Authorization Management Program (I know that many of you probably are), there are two things to know, specifically about the Moderate and High impact levels. First, FedRAMP has a continuous auditing process. It's not once-and-done, and it isn't annual; it's evaluated monthly. Second, the FedRAMP program allows no exceptions. You have to account for all deviations from your system security plan, and if there are any, you need to file a POA&M, or Plan of Action and Milestones, which is effectively a finite timeline for remediation. Some of you who do know FedRAMP are probably saying, well, what about deviation requests? Yes, there are deviation requests, but they're highly scrutinized by the government's Joint Authorization Board and your third-party assessor. For instance, if you have a vendor with an unpatched CVE somewhere in your environment, they require that you follow up with that vendor monthly, perpetually, until it's patched.

So what controls make up FedRAMP, and whose responsibility are they? As you might expect, there are a lot of them, from systems acquisition to cybersecurity training and everything in between.
In this Sankey diagram, you can see a mapping of the FedRAMP controls to the parties responsible for their implementation and operation for us at Autodesk. At the top, we have Autodesk with 284 controls. Beneath that, we have AWS, our primary cloud provider, with 37 controls. And last on the list, the tiny sliver at the bottom: our FedRAMP customers are responsible for only four controls.

Regarding Autodesk's portion, as mentioned earlier, we've been working to build these compliance controls directly into our platform, taking the responsibility off the shoulders of our product engineers so they don't have to implement and then re-implement those controls over and over again. You can see here that only 62 of the 325 controls at the top are the responsibility of our product teams. Teams that have adopted our CloudOS deployment platform, which is what we're building here in developer enablement, saw a 35% reduction in their control responsibility when they moved over to CloudOS. You can see CloudOS taking up a pretty big, ever-growing portion down there at the bottom. And the time those teams get back, from not having to implement those controls in all the different products they're working on, goes towards building the novel software that we sell to our customers.

And as you know, the Fed is beginning to pay a lot of attention to supply chain security. After that SolarWinds attack in 2020, the Biden administration issued Executive Order 14028, which targeted notable gaps in our cybersecurity hygiene. Of particular interest to today's talk is Section 4, which speaks directly to supply chain risk management, or supply chain security. With that order, NIST, the National Institute of Standards and Technology, issued guidance based on its existing work plus additional best practices and controls. That guidance is starting to be adopted by the FedRAMP program, which often refers to NIST Special Publication 800-53 for control requirements. Among these controls, we find quite a few that land in the software supply chain, like those of SP 800-161 and the new SSDF, or Secure Software Development Framework, SP 800-218. The FedRAMP program has supply chain controls of its own as well, for instance the requirement for vulnerability scanning of containers, which is what I'm going to talk more about.

At the same time, in the private sector, we have frameworks like SLSA, or Supply-chain Levels for Software Artifacts, which many of you know. I'm sure this diagram looks pretty familiar; it's from their docs. We can see numerous links in the supply chain that need to be fortified. All those red letters up there are vectors waiting for exploit. But I like to think about it from the attacker's point of view. What does a kill chain look like for a cloud product?

Here at the bottom, we have an attacker crafting a package with a nefarious payload. That source package can be included as a library, a commonly used OS package, or a container base image, or any combination of the three. I can imagine payloads lying inactive for quite some time until they're combined in the right environment, ripe for the picking. This package is consumed by an unwitting developer, who then builds a container on a workstation and pushes it to their repository, committing source code along with the exploited library.
That git commit then triggers our CI systems, whose builds now contain those bundled payloads. CI may iterate on that a few times, especially in environments with large suites of interdependent software, and those libraries could be built in such a way that the CI system becomes a pivot point into other software. When a deployable artifact is ready, the build triggers our CD system, and that CD system deploys it into the cloud. For some products, that CD system might go all the way to production without any human intervention. GitOps for the win. Now the attacker's payload has been deployed, a command-and-control communication channel is set up, and the attacker is able to pivot into other networks, possibly getting access to a database of sweet, sweet data. And finally, that data can be exfiltrated through any number of mechanisms. I'll pause so you can take that all in.

All right. So what can be done? How can we start to get ahead of these threats? These are the things our developers are thinking about, but do we want them all to be thinking about that? Can we take some of that off their shoulders?

At Autodesk, we've been building what we've coined our trust telemetry into our CI/CD processes. Trust telemetry is the metadata that we can reliably produce through the different phases of our SDLC and be confident has not been tampered with or altered in any way. You might think this sounds like attestations, right? For instance, in the CI phase, when we build an artifact, we should cryptographically sign that artifact and ship that signature to a secure location for later validation. That signature is now a data point in our trust telemetry. Each data point on its own might not give enough confidence to decide whether a deployment is safe for production. However, that relatively weak confidence, when combined, is additive. Just like mining metadata in other places, the signals start to make themselves clear when brought together in the aggregate. And we can monitor those signals for deviation. Using tools like machine learning, we can alert on outliers. We can automatically stop our build or deploy processes, requiring human oversight to continue. Or maybe we deploy into a sandbox environment instead of going all the way to production, to do additional automated testing and security scanning first. The strength of those signals, mapped to our risk profiles, will guide our solutions.

So let's look a little more closely at what these signals are. As we move through the SDLC, we collect trust telemetry from our different build and deployment systems. In this diagram, you'll see the now-familiar CI/CD lifecycle. Given that it's continuous, let's just arbitrarily start with the plan phase. Plan often involves people, people you work with and hopefully trust. If you don't, you probably shouldn't be making plans with them, right? Anyway, the plans eventually boil down to some sort of requirements or specifications of what should be built, and wherever those requirements or specs are stored should be inspectable and trusted. Let's use JIRA for example. Its issue IDs can be used as a lookup for understanding the decision making around a change. For example, when that work is done, the merge commit message should include "Fixes FOO-123", where FOO-123 is the JIRA ID of the issue being fixed.
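To make that concrete, here's a minimal sketch of what such a CI gate might look like. The "Fixes FOO-123" pattern, the branch names, and the use of plain git log are illustrative assumptions, not our actual tooling:

```python
import re
import subprocess
import sys

# Hypothetical issue-key pattern; a real gate would load its project keys from config.
ISSUE_REF = re.compile(r"(?:fixes|closes)\s+[A-Z][A-Z0-9]*-\d+", re.IGNORECASE)

def merge_commits(base: str, head: str) -> list[tuple[str, str]]:
    """Return (sha, message) pairs for the merge commits in base..head."""
    # %x00 and %x01 are NUL/SOH separators, so commit bodies may contain newlines.
    out = subprocess.run(
        ["git", "log", "--merges", "--format=%H%x00%B%x01", f"{base}..{head}"],
        check=True, capture_output=True, text=True,
    ).stdout
    pairs = []
    for entry in out.split("\x01"):
        if entry.strip():
            sha, _, body = entry.strip().partition("\x00")
            pairs.append((sha, body))
    return pairs

def main() -> int:
    missing = [sha for sha, body in merge_commits("origin/main", "HEAD")
               if not ISSUE_REF.search(body)]
    if missing:
        print("Merge commits missing an issue back-reference:", *missing, sep="\n  ")
        return 1  # no decision record, no build
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The point is just that the back-reference is machine-checkable, so the link from a deployed change back to its decision record can be enforced rather than left to convention.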
Unfortunately, we don't have time to go through all of these, so let's jump ahead to code, where I have another example. Code can be thought of as codifying, or transcribing, the intent of a software engineer. As such, the engineer, as an employee of the company, should be able to attest that their intent was good. And how better to attest than to cryptographically sign their commits? We can even enforce that with branch protection, not allowing commits in unless they're signed. Then, later in the CI phases, we can validate that the commits for a given build were trusted, written by engineers of the company. This is starting to sound secure.

Let's look at this in the context of an actual CI/CD pipeline. Since we already went over JIRA, I'll skip ahead to GitHub, except to say that JIRA can be replaced by anything, like a wiki, as long as it's a true historical archive of the decision making; the purpose is to retain a back-reference through the commit messages. So here we have signing commits as an attestation of developer intent, and then branch protection only allowing signed commits in. We also attest that the merge checks were done, possibly via a GitHub App or Action. Next we have the validation of those attestations and signed commits in our CI phase, in Jenkins. After that, we attest to the vulnerability scanning that was done during CI, and that the image provenance is based on trusted images, which are the steps I'm going to show you pretty soon. We also sign the image, denoting that it was built on a known-good Jenkins; we need to know that it was actually produced at Autodesk. Moving on to CD, we validate the previous CI attestations inside our deployment tool, which for us is Spinnaker, and then we sign a new attestation proving that our smoke tests were run and that the dev deployment was successful before we go on to staging and production. And finally, at runtime, we validate all of the above before letting the container start.

You can see that there's a natural break between our pre- and post-deployment checks, and you might wonder why we perform checks multiple times. Why not do them as early, or as late, as possible? The idea is similar to defense in depth: trust, but verify. The earlier we can reject something the better, but can we really trust that a thing hasn't been tampered with the rest of the way through the supply chain? Also, our systems get polluted in later phases by bad artifacts and their side effects, so we should fail early as well. And if a bad actor does get something deployed, how long before it gets cleaned up by some post-deploy process? How long does it take that attacker to pivot? There's a little bit of a race there.

Is this starting to sound familiar? I bet a lot of these logos look familiar. It's probably because you've been watching the work by the CNCF on software supply chain best practices, or the Secure Software Factory reference architecture, or the FRSCA project demos. A lot of these things are dovetailing these days. And while our work at Autodesk predates many of these publications and prototypes, we are watching them and hoping to adopt, and donate back, where we can. One of the first places we're doing that is with the Sigstore project.
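Before we get to the Sigstore piece, here's a minimal sketch of that commit-validation gate in CI, assuming git with GPG signing configured; git's %G? format placeholder reports signature status, with G meaning a good signature. The keyring setup and branch names are elided assumptions:

```python
import subprocess
import sys

def untrusted_commits(base: str, head: str) -> list[str]:
    """Return commits in base..head that lack a good signature.

    %G? expands to 'G' only for a valid signature from a key in the
    keyring this CI worker trusts (keyring setup and key distribution
    are not shown here).
    """
    out = subprocess.run(
        ["git", "log", "--format=%H %G?", f"{base}..{head}"],
        check=True, capture_output=True, text=True,
    ).stdout
    bad = []
    for line in out.splitlines():
        sha, status = line.split()
        if status != "G":
            bad.append(f"{sha} (signature status: {status})")
    return bad

if __name__ == "__main__":
    bad = untrusted_commits("origin/main", "HEAD")
    if bad:
        print("Refusing to build; unsigned or untrusted commits:", *bad, sep="\n  ")
        sys.exit(1)
```

A build that fails here never produces an artifact to sign, which is exactly the fail-early behavior I just described.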
So, in this overly simplified diagram, you'll see the container image signing process in our image signing and provenance tracking architecture. The trust telemetry generated here can be used to satisfy the FedRAMP container hardening and provenance standards that you saw in one of the earlier slides. To facilitate this, we are building on Sigstore's Cosign, and we're using in-toto attestations. You might already know both of these tools; I'm sure most of you do. Cosign cryptographically signs the container images and stores the signatures in an OCI-compliant registry, and I'll speak a little about why that was handy for us. The in-toto attestations let us store structured data as custom predicate types, so we keep our vulnerability scan results in one attestation and the provenance data in another. You can see both of those things being done inside the Jenkins job, which has been modified to include the attestation and signing steps, and you can see that they're being stored in the OCI-compliant store.

Zooming out, you can see our full CI/CD process. I hope it's readable. We can follow the trust telemetry as it's used by the systems. As we showed on the last slide, the signatures and attestations are created in the CI phase in Jenkins and then stored in the OCI image repo. At the top there, we also see the vulnerability scanning service storing the signatures of images that were created by our security team. Those two facilities combined are what we're calling our trusted images, and we can ensure that only containers built from our trusted images are allowed to be deployed. You can also see the check on the validity of that image data in the CD phase, in the top right corner of the right-most box, in our validate-image stage. The validate-image stage checks the attestations and signatures of all the images being asked to be deployed, and makes a policy decision about whether to allow or deny them based on the trust telemetry it has access to. The policy is written in Rego, and we use OPA to evaluate those policy decisions. The Rego is controlled by our security team, which is another useful property.

Overlaying this diagram on top of our CI/CD lifecycle again, we see the attestations being used for our trust telemetry. There's more work to be done to build attestations into our pre-CI phase, the planning and build-up before actual software gets written, and to continuously validate those attestations post-deploy. For the former, we're looking to create, like I said, custom in-toto predicate types for things like change requests and RFPs. For the latter, we're hoping to extend some of our existing binary authorization tooling in the cloud to support OPA and Cosign, so we can use those same policies in both places.
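To make the validate-image stage concrete, here's a minimal sketch of what such a Rego policy might look like. The input shape (images carrying signature_verified, base_image_trusted, and critical_vulnerabilities fields) is a hypothetical flattening of the verified Cosign signatures and in-toto attestations, the kind produced and checked with cosign attest and cosign verify-attestation, not our actual schema:

```rego
package deployment.validate

# Deny by default; the deployment is allowed only when no image is denied.
default allow = false

allow {
    count(denied) == 0
}

# An image is denied if its Cosign signature did not verify...
denied[img.name] {
    img := input.images[_]
    not img.signature_verified
}

# ...or it was not built from one of our trusted base images...
denied[img.name] {
    img := input.images[_]
    not img.base_image_trusted
}

# ...or its vulnerability-scan attestation reports critical findings.
denied[img.name] {
    img := input.images[_]
    img.critical_vulnerabilities > 0
}
```

Because the policy is just data in, decision out, the same bundle could be evaluated by Spinnaker pre-deploy and by a binary-authorization check at runtime, which is the "same policies in both places" goal.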
So, next up. That was a lot, but this is just the beginning. We're doing a lot of research into supply chain security internally, especially as we move into our FedRAMP ATO. We have things like machine identity coming, especially as we move cross-cloud. SBOMs are going to be a requirement soon, so we're trying to get in front of that too. And we're adding signing and provenance attestations to more artifact types. One that you might be interested in, and that I'm doing a talk on tomorrow, is our IaC tooling. We're trying to help build Sigstore support into Crossplane, which is one of the IaC tools that we use. That's at 11:55 in the KubeCon Security, Identity and Policy track.

Thank you. Any questions? Are we doing questions? We don't have to do questions.

Okay, great question. Autodesk has been around a while; that's why. We do now have things like Tekton and others. We do a lot of mergers and acquisitions, as a lot of large companies do, so we have a little bit of everything. But the reason that we attacked Jenkins first was that most of our legacy software, and a lot of our core cloud products, utilize a set of core libraries that we call our PSL, our pipeline shared library. So it was a perfect entry point to just plug this stuff in.

If there are no more questions, I'll say one more thing about why we went with Sigstore, because I think it's very important. As security professionals, you probably have a lot of acumen for this stuff. But software engineers who have been developing desktop software may not have any idea about many of these things, and we need to do a better job of making it easier for them to not have to think about it. Especially when we get into these regulatory environments, where just one incident can put hundreds of millions of dollars on the line. Another thing that you may or may not know about FedRAMP, especially when you're a company built of multiple smaller companies and your FedRAMP environment spans many different product lines: an incident in one product, repeated in another product, is a repeat of the same incident. And that's not good; you're not allowed to do that. You have to solve that incident across your whole product suite, across all of your subsidiaries. You can't treat it as multiples; it's one ATO for the whole company.

Thank you.