This is my fourth talk this week, so I'm happy to be up here. Hopefully we'll do a great job, and then I can finally go have some fun. All right, let's get started. Jason and I are here to talk to you about securing the IaC supply chain. I'm Jesse, and I'm a senior principal software engineer here at Autodesk. I'm currently working at the juncture of our security and compliance teams and our developer enablement team, building secure software best practices into our infrastructure-as-code tooling.

Hi, I'm Jason. I work at Chainguard. At Chainguard, we help companies like Autodesk adopt supply chain best practices, like signing and verifying using Sigstore.

Thank you for that, Jason. To understand a little more about why we're doing this work, I want to quickly give you some context about Autodesk. We are a software engineering company building software for engineers. You could say that we make software for people who make things. We're a leader in design-and-make tools across a wide variety of industries, from entertainment to architecture, engineering, and construction to manufacturing. Our customers and their workloads are pretty diverse, but they do have one thing in common: they depend on our software to do all of that building and making. And likewise, our product teams depend on our platforms to build and make that software.

So with that in mind, let's level-set first on some definitions, starting with infrastructure as code. It's probably familiar to many of you, but what is it? First of all, it's code.
It's codifying the intent of a developer, good or bad. And as code, it's repeatable, diffable, and automatable. Second, it's usually executed in highly privileged environments, creating everything from networks to user accounts. You can think of it like the crew of the Starship Enterprise, where you snap your fingers like Picard and say, "Make it so." Common examples are Crossplane, Pulumi, and Terraform, but other cloud-vendor-specific tooling exists as well.

Next, the software supply chain, which is very similar to what you'd think of in supply chains generally: it's all that upstream stuff that you use to build your product. In the software world, it's usually code, source or compiled, that you may not have had anything to do with creating. There's a lot of code that runs in and around production that you didn't write: OS packages, language packages, Kubernetes, the OS running Kubernetes, your hypervisors, VMs, developer tools, build tools, your delivery system, your IaC and its plugins, which is what we're going to talk about today. And increasingly, much of that software is open source, which means you don't necessarily know who wrote it, you don't necessarily know if you can trust them, and how can you hold them accountable if they do bad stuff? All of this makes it a juicy target for attackers.

So what is CloudOS, the cloud platform we're building this stuff into?
Three years ago, we set a strategy at Autodesk to reboot how we do our continuous delivery. Much of the early UX was informed by our previous homegrown CD and automation tool, and to ease users' transition between these, an abstraction layer was built to match that prior experience. Central to that abstraction is something we call our ADF, or application definition file, which is essentially a declarative manifest of infra and app artifacts to be deployed. Our ADF starts its life as YAML and then, through our pipeline tooling built on Go and Jsonnet, is turned into Spinnaker pipelines and Terraform, which in turn produce the cloud resources depended upon by our products.

But what of that Terraform, right? Where does it come from? Frequently it's created by our internal teams, but often they rely on open source providers, like those produced by HashiCorp and other folks. But how do we really know who produced those providers? What if they were produced by a bad actor? To be fair, it's not just Terraform that has this problem (you can see the Crossplane logo there); almost all IaC tooling comes with some pluggable way to extend its functionality, and usually there are numerous community-contributed extensions.

IaC is great, and it comes with all the benefits previously mentioned, but it does require a lot of permissions to do its job. It needs that power by design, so it's very important that it's secure. But whose job is it to make sure that the IaC and its dependencies are secure? Well, this company that we won't name here is clear in its docs that it doesn't provide any controls to stop malicious providers or modules from exfiltrating your data. They don't say it explicitly, but there's also nothing stopping those providers or modules from, say, sending your credentials to some third party. Or what about installing a backdoor in your environment, with a provider specially crafted to install
a command-and-control Lambda, or through a container or VM image?

We should note that Terraform does do some checking that providers are not tampered with between runs of your IaC. The lock file, which can be checked into your SCM, stores the known hashes of the providers at the time of their first use. However, those hashes are not signatures. They are hashes, providing only integrity guarantees. In fact, the documentation makes it clear that they provide no authenticity guarantee. We do not know the provenance of that provider at the time of that first run. There is a digital signature thumbprint that's output for providers pulled from a Terraform origin registry; however, they are trusted by default simply by virtue of being in the registry. Terraform leaves it up to you to figure out whether or not you want to trust the signer, and it only does so after it has installed the provider.

So how easy is it to publish a provider and gain that implicit trust across the entire ecosystem? Well, publishing a new provider or module just requires that you have a GitHub repo. Here you can see that someone has forked the official AWS provider and started publishing from a GitHub account named "hashlcorp." How many of you know where this is going? It's pretty obvious. You see what happened? I'll let it sit for a second. What protection do we have from typosquatting? You can see in the web GUI there is an "official provider" message, but how often do you use the web GUI to source your providers? I don't know.
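Just to make that typosquatting point concrete: the "hashlcorp" fork works because the squat is one glyph away from the real namespace. A minimal sketch of how easily a naive check misses it (the names come from the talk's example):

```shell
legit="hashicorp/aws"
squat="hashlcorp/aws"   # the fork from the talk: 'i' swapped for 'l'

# A reviewer, or a naive allow-list grep, scanning for the real vendor
# name sails right past the squat:
if printf '%s\n' "$squat" | grep -q "hashicorp"; then
  echo "matched official namespace"
else
  echo "no match: the typosquat slips past a substring check"
fi
```

Run against `$squat` this prints the "no match" branch; the squat never contains the real vendor string, so substring matching gives no protection at all.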
I haven't really been looking at that web GUI very much. Isn't it more likely that you copy and paste, or reuse some code, or follow a blog post somewhere, and you end up with that stanza you see there on the right?

These are some usage statistics from another provider fork inside the registry, a currently unmaintained AWS provider. Who's installing this unmaintained fork 14 times this week? Were they just confused?

Here we see a module offered by the Department of Defense, or the supposed Department of Defense, and they're using the official AWS logo. How easy is it for a novice user to know whether or not Amazon has actually blessed this? Digging deeper, we see the DoD has quite a few modules: 75 as of now and counting. How can we be sure that this is actually the United States Pentagon behind these?

Again, most IaC tooling suffers from this problem. You can see here that the Crossplane community modules have a naming collision with the official Upbound modules. That may be by design right now, as Crossplane packages are currently curated by the community, but how long until that curation process can't scale and needs to be relaxed?

So how do we defend against this? Well, currently a lot of mitigations rely on human behavior. We have mandates on repo usage, a.k.a. human-enforced provenance. There's also version pinning and typo checking, but we know how fallible these can be. There are static analysis tools like Checkov, Terrascan, and tfsec, but these are all based on block lists of known bad things, and we all know that block lists are a losing arms race.
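Note that none of these mitigations close the lock-file gap from earlier, either: a hash proves the bytes didn't change, not who produced them. A minimal sketch of that distinction (file names here are illustrative, standing in for a provider binary):

```shell
# A lock-file-style hash pins the bytes seen on first use...
printf 'original provider bytes\n' > provider.bin
first_use=$(sha256sum provider.bin | cut -d' ' -f1)

# ...so later tampering between runs is detectable:
printf 'tampered provider bytes\n' > provider.bin
now=$(sha256sum provider.bin | cut -d' ' -f1)
[ "$first_use" != "$now" ] && echo "hash mismatch: tamper detected"

# But on the FIRST run there is nothing to compare against, and a hash
# says nothing about WHO produced the bytes: an attacker's provider
# hashes just as cleanly as a legitimate one.
sha256sum provider.bin | cut -d' ' -f1
```

Integrity between runs, yes; authenticity at first use, no. That's exactly the gap signatures are for.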
Just ask your local antivirus vendor. Finally, there are some tools out there doing dynamic analysis. They either use sandboxing, in which case context matters (whose AWS account are they using?), or they might allow that code to run in your account and then quickly clean up afterwards if it does some naughty things. But that might be your prod account, right? IaC is used in GitOps: commit to prod.

So what we need is allow lists, a walled-garden approach. But unfortunately, as you saw earlier, the tooling leaves a lot to be desired; it feels a little more like this than this.

We want to create a closed ecosystem of approved IaC, and thankfully Crossplane gets us pretty close. By centralizing the execution of our IaC and layering on Kubernetes RBAC, we can choose who has access to install and manage providers. We can also choose what accounts and credentials those providers can access. But we still have the problem of enforcing that those private providers were truly created by the appropriate parties, right? How convenient that Crossplane chose OCI images as their package format. I'm going to pass it over to Jason to explain why.

So Crossplane chose to use OCI packages, or OCI images, as its package distribution mechanism, and this is really important. Sigstore is a project of the Linux Foundation that aims to sign OCI images and many, many other things. I'm here to explain Sigstore in one slide. Sigstore is a number of components. The first is Rekor, which is a transparency log. It's public and verifiable.
It's an append-only log of things that have been signed. Fulcio is an identity and certificate service; it issues short-lived signing certificates, usually in exchange for OIDC credentials. You basically say to Fulcio, "Here is proof that I have authed as Jason at Chainguard with Google; please give me a certificate to that effect that I can use to sign some artifact." These two services are run as free public-good infrastructure. They were announced as generally available yesterday; there's on-call support and defined SLAs, and you can run them yourself if you want to.

Finally, the last component of the Sigstore bundle is the cosign CLI. Cosign signs OCI images and stores the signatures alongside those images in the OCI registry. Not just OCI images, but Crossplane packages as well. You can use cosign to attach and download other non-signature things to container images, and you can sign and verify other non-container things, but we're not going to talk about those; we're going to focus on Crossplane packages.

So I'm going to walk through, very quickly, the simplest and then a slightly more complex way to use Sigstore. The first: if you already have a public and private key, you can use those to sign a container image today. You simply take that private key, sign the image, and push the signature to the registry alongside the image. Then, to verify, you distribute that public key securely to everyone who might want it, and also make sure it's securely updated for everybody. And that sort of falls apart, which is why we came up with keyless signing, and I think that's where Sigstore really shines. Instead of all that key distribution stuff, we rely on Fulcio, Rekor, and identity providers you already trust to securely sign container images, instead of having a long-lived public/private key pair.
We rely on these identity providers. So instead of having a key, you first auth to some identity provider like Google or GitHub or Microsoft or others. This is the normal flow where you pick your identity provider, and it says, "Okay, you must be Jason. Go ahead." It sends back to cosign an ID token, which cosign sends to Fulcio. Remember, Fulcio is this service that takes ID tokens and returns short-lived certificates. Cosign uses that short-lived certificate to sign the container image, pushes the signature alongside the image into the registry, and then also writes to Rekor, this public transparency log: "Hey, this thing was signed using this certificate generated from Fulcio at this time."

So verification looks like just pulling the image from the OCI registry. You don't have to talk to Google or GitHub or the identity provider, and you don't have to talk to Fulcio to verify. You pull the image and the signature, and there's enough information in the signature to verify that it came from Fulcio, to verify who it was, and to verify which identity provider was used to prove that's who you were. That is enough to verify that the container image was signed by your company's security team or your IaC team. You can also look it up in Rekor, the public transparency log, to see when it was signed.

This has been a vastly oversimplified overview of what we're talking about; all of that was in one slide, remember. If you'd like more information about how this actually works, at the wire-protocol level, there was a great talk yesterday from my colleagues Zack and Jed, and a blog post accompanying it on our blog. I highly recommend looking up that talk.

So what does this mean for Crossplane? Crossplane packages, like I said, are already stored in OCI registries, which makes them a perfect candidate for cosign. You can actually sign these packages with cosign today, with no changes.
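The keyful flow described above can be sketched end to end with plain openssl; cosign performs the analogous steps against an OCI registry instead of local files (`cosign sign --key` and `cosign verify --key`). File names here are illustrative, standing in for a package and its signing key:

```shell
# An artifact standing in for a package to be signed.
printf 'package contents\n' > package.bin

# Key pair: the private half signs, the public half gets distributed.
openssl genrsa -out signer.key 2048 2>/dev/null
openssl rsa -in signer.key -pubout -out signer.pub 2>/dev/null

# Sign with the private key, then verify with the public key.
openssl dgst -sha256 -sign signer.key -out package.sig package.bin
openssl dgst -sha256 -verify signer.pub -signature package.sig package.bin
# prints "Verified OK"

# One appended byte and verification fails. And if signer.key ever
# leaks or rotates, every verifier holding signer.pub is in trouble:
# that's the key-distribution problem keyless signing removes.
printf 'x' >> package.bin
openssl dgst -sha256 -verify signer.pub -signature package.sig package.bin || true
```

The whole pain point is in the middle step that isn't shown: getting `signer.pub` securely to every verifier, and keeping it current. Keyless replaces that with an identity check against Fulcio and Rekor.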
All we needed was to implement verification of those inside Crossplane itself, so that before it starts to run a package, it verifies that it came from a person with the right credentials in the right identity provider, and so on.

When you use keyful verification, you're only proving: was this package signed by someone holding the private key that corresponds to the public key that I'm holding? If that private key leaks, there's not a lot of guarantee. And if the private key was rotated and your public key hasn't been, or vice versa, then you might be in trouble. With keyless verification, you can say: was this package signed by some person, some identity, according to a trusted identity provider like Google or GitHub, and is that verifiably recorded in a public transparency log? That's really powerful. I'm going to toss it back to Jesse to do a demo.

Thank you. Yeah, so as Jason mentioned, there are two operating modes cosign can do its validation with: there's keyless, and there's keyful. You'll see here that we have an issue in the Crossplane project, number 3048, and a pull request, 3360, and in those you'll see some dialogue about implementing the keyless strategy. Right now we're just going with the keyful version because it was an easier barrier to entry. At the top there, you can see us starting Crossplane with an experimental feature flag for validation, and we're just adding the key material from the public key directly into it. That's also going to change; don't take this as the end design. We're probably going to have some other mechanism for loading keys, from disk or from a Kube secret or something like that. But for now, just as a validation of our proof of concept.
This is the way it goes. We're also adding the debug flag, so we can tail the container logs a little later.

Then you see us actually signing a package that I had previously published to a private registry. That's us using cosign against our Crossplane configuration package. Both Crossplane provider packages and configuration packages use the same package format, so we're going to stick to a configuration package to keep this brief. You also see that we're using the private key, the pair to that public key, to sign the package, as Jason showed earlier. That signature is going to be stored right alongside the image inside the registry.

Then you see us using the Crossplane kubectl plugin to install that configuration into our cluster, creating the CRD that asks the Crossplane controller to reach out and grab the package we're passing in. And here you see the tailed output from the Crossplane controller running in debug mode. It sees that configuration package install request, which comes in as a Crossplane CRD, and starts to attempt to reconcile it. I know that's a little garbled, but again, this is all a proof of concept right now. The pub key being output as a debug trace will go away; we're just validating that it got the right key. Finally, you see that cosign is being used here to actually do the signature validation with that key, and the message that the package passed validation.

And here you see us attempting to install an official Crossplane Helm provider package, pulled directly from the Upbound Marketplace registry. Clearly it hasn't been signed by my pub key (I didn't do any typosquatting here, and that wasn't me earlier either), and as such, it fails validation. So, to the slide: demo done. Thank you.

So, to close, some calls to action.
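For reference, the demo's keyful flow amounts to roughly the following commands. This is a sketch, not the demo verbatim: the registry path is made up, the experimental Crossplane feature flag is omitted because it's still changing, and the plugin syntax may differ across Crossplane versions.

```shell
# Hypothetical image reference standing in for the private registry path.
IMAGE=registry.example.com/platform/demo-configuration:v0.1.0

# One-time: generate the keyful pair (writes cosign.key / cosign.pub).
cosign generate-key-pair

# Sign the already-pushed configuration package; the signature is
# stored in the registry alongside the image.
cosign sign --key cosign.key "$IMAGE"

# What the patched controller does before reconciling the package:
cosign verify --key cosign.pub "$IMAGE"

# Install the configuration; with the proof-of-concept patch, the
# controller rejects the package unless verification passes.
kubectl crossplane install configuration "$IMAGE"
```

The unsigned Upbound Marketplace provider at the end of the demo fails exactly at that `verify` step: no signature from the configured key, no install.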
This is why we're here. First, sign and validate your IaC if you can. You can use cosign and Sigstore, which we love, but if not, maybe GPG or some other tooling that you have. Depending on how your environment is set up, you can create your walled garden.

Second (and this helps with that last one), centralize the execution of your IaC. Create that funnel, that choke point, that will allow for governance over who can do what inside your cloud environments.

Third, know your IaC sources. Get that visibility. Make sure you have the telemetry, either through your Git repos or your pipelines, and that will give you some actionable insights. After that, maybe try some of the static and dynamic analysis tools, like Trivy or Orca. They'll give you some pre-deploy hints about what the IaC might be doing. You'll deal with the false positives and negatives, but it's better than nothing.

And of course, try to run least privilege. I know it's hard, especially for new products or greenfield development, but maybe once you have some stability in your solution, you can start to ratchet down those permissions for future releases. Thank you.

So again, I don't think we have anyone assisting, but we planned a little bit of time for questions. I don't know if we have a microphone or anything; you can just yell. If not, just come up and talk to us afterwards. We'll be around. Great, thank you, everyone. Thanks.