I'm Christie Wilson. I'm a software engineer at Google. And my co-presenter, Wendy Dembowski, is also a software engineer at Google, where we both work on Cloud Build. You're probably wondering where she is. She's actually live on location, so we'll be hearing from her a little bit later. It's not actually live, but I want to pretend it's live.

So we want to talk to you about using Tekton securely. Tekton has a number of supply chain security features, but you don't get them out of the box. Even so, if you're a Tekton operator, there are things you can do so that at least your users will get secure continuous delivery by default. But in order to set that up, you have to be aware of the options Tekton offers so you can enable them.

To explore those features, we're going to be revisiting a company that Wendy discussed in her previous talk at cdCon, called Things Dogs Love, or TDL. Last time, we explored a few different personas. This time, Peanut the Tekton operator is back, and we have somebody new, Disa. Wendy, what can you tell us about Disa?

Disa is Things Dogs Love's newest developer. She's a junior engineer just learning the ropes and trying to familiarize herself with CI/CD best practices. I think we've all been there. What about Peanut? Peanut is Disa's mentor, and he also administers Things Dogs Love's CI/CD pipelines. Peanut cares a lot about security and safety.

Thanks, Wendy. I think a lot of people in our audience probably care about security and safety as well. Last time we met Peanut, he'd put a lot of work into operating Tekton in a way that worked well for the multiple teams at Things Dogs Love. Wendy, could you give us a recap of what Peanut has done already? In the fall, we talked about operating Tekton in a multi-team environment.
We discussed shared infrastructure, generating provenance for multiple teams, build isolation, balancing resources between teams, reducing compute costs using autoscaling, and also reducing latency with Docker caching and balloon pods.

Thanks, Wendy. It sounds like Peanut has really set up his teams for success. So let's see how Disa does. Disa is eager to be useful right away at Things Dogs Love. Her first project is a service called Go for a Walk. So right away, she creates a Tekton pipeline to automate the build and test process, and because of that, she's able to get her service up and running in production just a couple of weeks after she starts. Good work, Disa. But wait a second. Unfortunately, something has gone horribly wrong. Users are complaining that the service doesn't work, and when the SREs investigate, they find that all the service is doing is meowing. Wendy, has Peanut been able to figure out what's going on?

Peanut found the provenance for Disa's built image, and from that provenance, he was able to locate her Tekton pipeline in source control.

Now, it's no coincidence that Peanut was able to find the provenance. So here's the first secure default for running Tekton: run Tekton Chains in your cluster. Tekton Chains is able to observe executing tasks and pipelines and create provenance for the artifacts that are built. It's important to note, though, that this provenance doesn't really come for free. The tasks and the pipelines have to be written in such a way that they emit information about the artifacts they build so that Tekton Chains can pick it up. A future improvement we want to make here is something called Tekton Artifacts, which would add artifacts as a first-class type in Tekton. That way, if you're just using artifacts in your pipelines, that would be enough for Chains to pick up the information it needs and generate provenance.
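To make the type hinting concrete, here's a minimal sketch of a build Task that emits the results Chains scans for. The result names `IMAGE_URL` and `IMAGE_DIGEST` follow the naming convention Tekton Chains looks for on TaskRuns; the task name, image names, and digest are placeholders invented for this example.

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: build-go-for-a-walk
spec:
  results:
    # Tekton Chains scans TaskRun results for this IMAGE_URL /
    # IMAGE_DIGEST naming convention when generating provenance.
    - name: IMAGE_URL
      description: The URL of the image that was built
    - name: IMAGE_DIGEST
      description: The digest of the image that was built
  steps:
    - name: build-and-push
      image: alpine  # placeholder; a real build step would use a builder image
      script: |
        # ... build and push the image here, then record where it went
        # so Chains can attach provenance to exactly this artifact.
        echo -n "registry.example.dev/tdl/go-for-a-walk" > $(results.IMAGE_URL.path)
        echo -n "sha256:<digest-of-pushed-image>" > $(results.IMAGE_DIGEST.path)
```

Without those two results, the build still succeeds, but Chains has nothing to hang the provenance on, which is exactly the failure mode Disa runs into below.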
But fortunately, Peanut had a way to mitigate the fact that that doesn't come for free, and that's secure default number two: requiring provenance for all of your images before deploying them. Disa actually originally wrote a build pipeline that didn't generate provenance, because it didn't use type hinting properly. As a result, the image she built didn't have any provenance, and the deployment failed. Once she figured out what was going on, she went back to her pipeline and updated it to use type hinting so the provenance could be generated properly. If you want to add this, a couple of options you can look into are Kyverno and the Sigstore policy controller, which let you define policies for what gets deployed to Kubernetes clusters. The Sigstore policy controller, for example, also lets you create more complex policies with languages like CUE and Rego if you want.

Another thing that Disa did right was using build as code. That's secure default number three: Peanut set up remote resolution in the Tekton cluster so that tasks and pipelines could be stored in version control. What did Peanut find next?

Since Disa used config as code, Peanut was able to track down the task in version control. He noticed something suspicious right away: a task from CatCorp. Peanut dug into that task.

A task from CatCorp? That doesn't sound like something the people at Things Dogs Love want to be using at all. Wendy, how on earth did that happen?

With some exploration, Peanut learned that Disa chose a task from CatCorp that replaced the executable that was supposed to be built with the CatCorp executable. It didn't even bother building from the inputs that Disa provided; it just built its own CatCorp executable.

Disa, how did you choose the CatCorp task that you picked? Oh, it was just available there, and you thought the documentation was correct, but you didn't inspect the task that you chose. I see.
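As a rough sketch of what secure default number two could look like with the Sigstore policy controller, the resource below refuses to admit any image from a given registry unless it carries a signed provenance attestation. The registry glob, secret name, and predicate type are placeholders for this example, and the exact `ClusterImagePolicy` schema can vary between policy-controller versions, so treat this as a starting point rather than a drop-in config.

```yaml
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: require-provenance
spec:
  images:
    # Apply this policy to everything in the (hypothetical) TDL registry.
    - glob: "registry.example.dev/tdl/**"
  authorities:
    - key:
        # Public half of the key Tekton Chains signs provenance with.
        secretRef:
          name: chains-signing-key
      attestations:
        # Deployment is rejected if no matching SLSA provenance
        # attestation is found for the image.
        - name: must-have-provenance
          predicateType: slsaprovenance
```

This is the behavior Disa hit: her first image had no provenance attached, so admission failed until she fixed the type hinting in her pipeline.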
So the task that Disa found on the internet didn't do what it claimed to do. Plus, when Peanut dug into it further, he discovered that the task was actually using a pod escape exploit to leave scratches all over the node. Obviously, Disa was trying to do the right thing. Fortunately, Things Dogs Love practices blameless postmortems, so of course Disa wasn't in any trouble. Peanut used this as a learning opportunity to figure out how to set up Disa and other new employees so that they could have success from day one.

Peanut identified two problems overall. The first was that the random task Disa used turned out to be malicious. The second was that this was compounded by the fact that the task was able to cause side effects on the underlying infrastructure that could impact future tasks.

Peanut decided to take on the second issue first. He wanted to make sure that even if somebody managed to use a bad task, it wouldn't cause side effects for other tasks. And even if a task isn't malicious, something could happen by accident that affects other tasks. Wendy, can you remind us about the isolation Peanut already set up in the Things Dogs Love Tekton clusters?

The Things Dogs Love clusters previously had namespace isolation, node pool isolation, and workload isolation at the container level.

So even with node isolation, namespace isolation, and workload isolation, Peanut and Disa found out the hard way that that still wasn't enough, and pod escapes could happen. Wendy, how is Peanut going to stop this from happening again?

To prevent this type of issue from happening again, Peanut added sandboxing. He chose GKE Sandbox, which is a cloud-specific sandbox for the cloud his cluster is running on. The way this works is by limiting the number of syscalls available to the code executing in the sandbox.
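As a sketch of how this looks in practice: on GKE, Sandbox is enabled on a node pool, and individual pods opt in via the `gvisor` RuntimeClass. In Tekton, you can request that for a TaskRun's pod through the pod template; the task and run names here are made up for the example.

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: build-go-for-a-walk-run
spec:
  taskRef:
    name: build-go-for-a-walk
  podTemplate:
    # Ask Kubernetes to schedule this TaskRun's pod onto a
    # GKE Sandbox (gVisor) node, so its syscalls are intercepted
    # by the sandbox instead of hitting the host kernel directly.
    runtimeClassName: gvisor
```

An operator like Peanut would typically set this via defaults or admission policy rather than trusting each TaskRun author to add it themselves.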
For gVisor, for instance, that means removing over 200 syscalls, going from around 300 down to 83 available, and this can limit the impact of what an escaped pod can do on the node. So in order to prevent pod escapes from happening, Peanut added the fourth secure default, which is sandboxing. If you want to use sandboxing, there are a few things you can explore. The first would be to look at your cloud provider, if you're using one, and see if there's a feature like GKE Sandbox that you can use. You can also explore gVisor directly or look into using Kata Containers.

So Peanut had dealt with the pod escapes. Now it was time to make sure that nobody at Things Dogs Love would accidentally use a CatCorp task again. To do this, Peanut pursued secure default number five. Wait, was the last one four? I don't know. The next secure default, which is providing a secure catalog. Wendy, what did Peanut do next?

Peanut decided that he needed to do more for his team to keep them from using random tasks off the internet, so he looked into the idea of a Things Dogs Love catalog. Peanut made a catalog of pipelines, tasks, and images, and he was able to consolidate on the tooling he trusts so that his developers don't need to make choices about what to use. They just find what is approved and use it. Tasks can be forked and cloned from other catalogs, and as a starting point, Peanut often uses the Tekton catalog.

So the Tekton catalog can be a great place to start. It's a set of tasks and pipelines maintained by the Tekton community. You can access it directly on GitHub, or you can go to the Tekton Hub or the Artifact Hub, which will also let you search and use them.

Peanut went one step beyond just making a catalog, though. He made it a secure catalog, using a new feature in Tekton called Trusted Tasks. This feature makes it possible for authors to sign their tasks and pipelines when they publish them.
And then at runtime, the Tekton Pipelines controller will verify those signatures. Plus, Tekton has some initial policy support that lets you require those signatures on some or all of the tasks and pipelines being used. That way, Peanut could create a policy that says only tasks and pipelines from the Things Dogs Love catalog can be used, or from that catalog as well as the Tekton catalog, or any other catalogs that he trusts. He gets the ability to control where the tasks and pipelines are coming from.

So by creating this trusted catalog for Things Dogs Love, Peanut makes sure that even for brand-new employees, the tasks they use will be trusted. Also, that way, engineers don't have to worry about figuring out which tasks to use; they can just go to the catalog. If they can't find what they're looking for, or if they need some changes, well, Things Dogs Love owns the catalog, so they're empowered to make whatever changes they need. And those changes will be backed by version control and code review and all that good stuff.

So thanks to the learning experience that Disa and Peanut had, Peanut was able to fix the two issues they uncovered and create defaults that make it easy for Disa and everybody else at the company to build safely.

There are a few more best practices that Peanut wants to explore, which we didn't get to today. Next on his to-do list, he wants to rebuild public images. The idea is that any images used by tasks in the Things Dogs Love trusted catalog are rebuilt by Things Dogs Love, to make it harder for something malicious to sneak in that way. I think I've said Things Dogs Love about 30 times at this point. I'm gonna use TDL, but then I think it'll sound like an acronym that actually means something. Anyway, he wants to rebuild the public images. Next, he wants to add policies to the TDL catalog repo to make sure that all of the images are pinned.
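Pinning means referencing every image by its immutable digest instead of a mutable tag. A sketch of what a pinned step in a catalog task might look like (the registry, image name, and digest are placeholders):

```yaml
steps:
  - name: build
    # Pinned: a digest always resolves to the same image bytes,
    # so a compromised or retagged image can't change what gets pulled.
    image: registry.example.dev/tdl/builder@sha256:4f5c2e8a...
    # Unpinned (what Peanut wants his repo policy to reject): a tag
    # reference like "registry.example.dev/tdl/builder:latest" can
    # silently start pointing at different content.
```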
That way, there won't be any mystery around which images are pulled. Lastly, he wants to add more tasks to the catalog, for example tasks for SBOM generation and image scanning, and then he can use those to build pipelines that put these things together with build tasks, so that in the future, instead of Disa having to discover all of this on her own, she could just use a pre-made build pipeline.

So here's a quick review of the secure defaults that Peanut and Disa explored today. There are five; they definitely lost count somewhere along the way. First, running Tekton Chains gives you automatic provenance generation, as long as you use the type hinting, which should get easier in the future with Tekton Artifacts. Second, add policies to your production deployments to, at the very least, require provenance to exist; later on, you can grow these into more complex policies if you'd like. Third, use build as code: when you install Tekton Pipelines, enable remote resolution so that tasks and pipelines can actually be fetched from version control. Fourth, use sandboxing so that tasks that are running can't impact the infrastructure they're running on and cause unexpected side effects for other tasks. And fifth, get complete control of how your developers build and deploy by creating a secure catalog.

Wendy, I hear that Peanut and Disa have something they'd like to add. Disa, Peanut, and I would like to thank you for joining us today, and we hope you can learn from the mistakes made at Things Dogs Love in order to have your own secure CI/CD pipelines. And thanks very much from me as well. Wait, Wendy, you have a late-breaking update from Disa? Disa wanted to learn more about continuous delivery, so she picked up Grokking Continuous Delivery and is working her way through it now. I hope Disa enjoys it.
If anyone else is interested, here's a discount code for the book that I wrote, as well as other Manning books. All right, thank you very much. Does anyone have any questions?

So the question is, if you're using a secure catalog, would that block you from... oh, there's actually a microphone if you want to use it. If you have a secure catalog, as you're setting it up for your developers, are they blocked from getting tasks from any other spot? It's up to you. I think right now, the way the policy works, you can say that all tasks and pipelines have to come from one specific catalog, but realistically, you probably don't want to do that initially. You could also say that if tasks and pipelines come from that catalog, their signatures will be verified, or you can not require any signature verification at all. But we want to add more policy support in the future so that you can express more complex things. I don't know if Billy has anything to add to that. Okay, Billy just agrees. Okay. You can also add additional policy validation. So if you're using Kyverno or anything else, you can just say, I'm only going to allow things coming from this catalog. So you can layer whatever you want on top of it.

Cool. Any other questions? All right, well, thank you for viewing our short and sweet talk, and I am so grateful that the recordings actually worked; otherwise, it would have just been me. And thanks to Wendy for putting together all of these recordings in advance. All right, thanks, everybody. Thank you.