Today we're going to talk a little bit about the thing that I will insist on calling Sunburst, because SolarWinds is, in fact, the name of my employer and not the name of the vulnerability. And from here, we will use my company's lovely template.

A little bit about me. I'm Trevor. I'm from Austin. I've been a principal architect at SolarWinds for about four years, and I specialize in the CI/CD and DevOps area of the SaaS side of the business. I do a lot with Go and Kubernetes and lots of the fun things that will be familiar to anybody attending a CNCF con, and I've been doing that kind of work for about the last five or six years. Before that, I spent a lot of time building penetration testing tools and offensive security products. So when the big bad stuff happened last December, I didn't know anything about the product that was compromised, because I'm not on that side of the business. So they asked me to start thinking about what would come next.

So what does SolarWinds do? The big headline is that we're the market leader in network management software. Most of the Fortune 500, I think about 80% of it, uses our stuff: pretty much every government agency you can name, universities, mom-and-pop shops, all the way up to tier-one network providers. No matter who you have for an ISP at home, chances are they're probably using the Orion platform somewhere to monitor big Cisco routers or to help run their NOCs. In the last five or six years, we've also bought a number of SaaS offerings you might be familiar with, things like Pingdom, Loggly, and Papertrail, a lot of them in the APM and event space. We have over 50 products, because we've been around for about 20 years and have been buying things that whole time. And one of those products, the Orion platform, was compromised in the attack known as Sunburst.

So this talk is 30 minutes long, which is not nearly enough time to give a comprehensive introduction to supply chain security or to take you through every aspect of what we're doing. I'll be leaving out some things around securing dependencies, around how we leverage SBOM files in the build process, the merits and pitfalls of signing every Git commit, and some of the deeper details of how we're trying to dig in and get around some of the surface-level non-determinism of certain build technologies. We do intend to open source everything we've been building sometime over the next couple of quarters, so in the coming months we'll have a lot more technical detail available. What I'm going to focus on today is what we call internally "pipeline mechanics," and then I'll end the talk with sort of a grab bag of implementation details. Folks can of course ask questions after that, which I will answer as best I can. Fair warning: I can't necessarily talk about some things you might have questions about, things that maybe didn't come out in the media around Sunburst. But I don't even really know that much secret stuff anyway, so I'll do my best.

I think it's important to define what we mean when we say supply chain attack. Google has a pretty good, very general definition, which is simply unauthorized modifications to software packages. But I like to take that a step further and say that we can divide that into a couple of different categories. First, we've got third-party code compromise. That would be something like Codecov.
Something you use in your system got compromised, and because you included it in your system or your build process, your thing got compromised too. Then there's first-party compromise (not even code, just first-party compromise), which is what happened to us at SolarWinds: a system that we own and maintain was compromised, and some bad things were done as a result of that access. And then, partly as a joke, I've added a third type here, though I'm not sure it even represents one, because I don't know if Microsoft has ever divulged exactly what took place when they were somehow tricked into signing rootkits earlier this summer. If anybody knows, I would love to hear about it later on, because as of a couple of months ago they were still silent about it. But I put it up here with these question marks because maybe it represents a third style of attack.

So what happened in Sunburst? I'll first say that there is an amazing blog post from FireEye, the security vendor that originally discovered this, and it is insanely detailed. If you want to know absolutely everything possible to know about the malware and the way it worked, you can dig in; just Google "FireEye Sunburst blog post" and you'll find it. But basically what happened was that our build system, the Orion build system, based on TeamCity (not the Kubernetes kind, but the kind with long-running VMware-based build agents) was compromised. As with any hack, attribution and the initial vector of compromise can be extremely difficult, if not impossible, to ascertain for certain. But what we do know is that once they gained access to our dev network, they found the build agents and, at the right moment, they put a malicious DLL in there. If you don't know what a DLL is, it's just a library, a piece of compiled Windows code.

We determined that they actually first stuck some dead code into one of our releases. Then they grabbed that release, decompiled it, and saw that their code was there. So they knew they could then insert something more interesting and malicious. And there was a lot of camouflage: the DLL had a very innocuous name, and it used naming identifiers that would have looked normal to any SolarWinds engineer who worked on Orion. This was all done directly with compiled code; there were only compiled artifacts, and there wasn't a source code compromise at all. Once it woke up, it would mimic call-home traffic to our stats portal, and as with a lot of contemporary malware, it used DNS for command and control. So it was pretty interesting: it would wake up after some random interval, up to about twelve days, and then emit a DNS request. Depending on what it got back, it would either wake up and do more stuff or stay completely dormant. There were certain IP blocks where, if it found itself within them, the command and control server would just say: stop, don't do anything.

So once we found out about this from FireEye, which happened December 13th of last year, we had to start turning the entire company upside down. And that's not just the Orion platform, which all by itself is a pretty big challenge at 10 million lines of code developed by thousands of people over two decades, but also everything in our cloud business.
We had to rotate every single password and secret across dozens and dozens of Amazon accounts. We had to decompile huge numbers of build artifacts we had sitting around as part of our process and make sure that if we recompiled from source, the result matched those DLLs. We wrote custom scanners to deal with intermediate formats like PDBs and used those as another basis of truth to figure out how far back this went. As you may have read in the media, we discovered that this had existed for about 18 months; they had been inside the system for that long. Dozens and dozens of people from all over the world worked around the clock from December 13th straight through to New Year's Day. We did a follow-the-sun thing from our Manila offices all the way to our big development offices in Kraków, Poland.

So, the conclusions that we drew from this. First of all, this was a really good adversary. We obviously don't have any way of independently verifying how it is that folks like NSA and GCHQ concluded that this was Russian foreign intelligence, but that is what the spooks have told us, and we believe them, even though of course the Russians deny it. We know now that fewer than 100 customers were actually affected, which is great, because we originally thought it could have been up to 16,000. And SolarWinds was almost certainly attacked because of the nature of the Orion platform itself. Like any system that does large-scale management and monitoring, Orion has to hold a lot of credentials to do what it does. Everything within this product category has to do that kind of thing, because you're basically saying: hey, I've got a system that can log into some other system and pull stats, manage it, run scripts, do all kinds of things. So even among penetration testers, good-guy hackers who hack for a living, Orion, or these types of systems generally, has long been something that's kind of nice to find on an engagement, because from there you can pivot widely through the network.

We realized also that we were going to need to completely redo all of our build infrastructure and rebuild it on much more contemporary, even bleeding-edge, technologies. That's because this type of thing is obviously not going to go away: an attack this successful means there are going to be more of them, and folks today have already alluded to the wide-ranging supply chain problem getting bigger and bigger.

So what is the fix? That's what we'll talk about for the rest of this talk. Project Trebuchet, the internal code name for the big cross-functional initiative that I've been the lead on, is what we call a consensus-attested system. In deciding that we had to rethink a huge portion of our software development lifecycle, we came up with four top-level requirements that bubbled up through all of our discussions. The first is that we need to move to ephemeral infrastructure, and the advantages of ephemeral infrastructure are not unfamiliar at all to anybody attending a conference about cloud native tech and Kubernetes.
So I think people understand that, but just moving away from long-lived build agents gets you quite a bit more security than you had in the past, when you're always relying on the same machines that are always on.

Second, you obviously want determinism wherever possible, which is a really interesting challenge, because quite a few technologies don't actually support building something from the exact same inputs (the same point in your code base, the same collection of dependencies) and getting the same thing twice. For instance, if you want to build a container image that has the exact same SHA from the exact same source code, you can't do it with BuildKit and you can't do it with Docker; you need to go find some other thing, like Kaniko or one of these other technologies, that supports that. You can build a Java JAR this way with Maven with a reproducible flag, and so on. But some things just don't support it yet; with .NET, for instance, you cannot yet get a completely deterministic binary. Here's what determinism buys you. Say you want to download Firefox. You download it, and you can compare against the SHA they put online: you can run OpenSSL, generate a digest, and say, the binary I got was the binary they expected to send me, these bits are good. (I'll show a tiny sketch of that check in a moment.) But it's most likely, and I don't know this for certain about Firefox, that the system that produced the original artifact with that particular SHA couldn't get that same SHA again. So wherever possible, you need to try to get determinism. And it's kind of funny: in the SLSA levels there's a little asterisk around determinism, precisely because not everything can do it yet.

Consensus is the next one, and that basically just means two systems agreeing. If you have determinism, then you should be able to build the same thing twice and compare the results. The idea here is that if this had been possible for an attack like Sunburst, we would have had the regular system that everybody uses plus a side system, one that's much more secure and that fewer people have access to. That second system could be a source of truth and say: hey, these two things didn't match, and therefore some shenanigans took place.

The fourth thing is proof: recording everything that happens in the course of a build. You'll hear a lot about this today as people talk in depth about things like in-toto, which helps you capture everything that's going on in the course of a software build. You need to be able to have some level of understanding of each step and ensure that it's concrete.

And on top of all that, we have a thing that helps guide our thinking on this project, which we call the Golden Rule of Trebuchet. It's an overarching principle that guides both the developer experience and our separation of concerns. This is really important, because it helped us decide right off the bat that certain systems just weren't going to work, in particular certain SaaS offerings. So right away we looked at CircleCI and Travis CI and GitHub Actions to see if those could get us what we needed.
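Here's the little checksum sketch I mentioned: a minimal Go program doing the same comparison you'd do by hand with sha256sum or openssl dgst. The file path and the published digest below are placeholders, not real values.

```go
// Verify that a downloaded artifact matches a vendor's published SHA-256
// digest. This is the "are these the bits they meant to send me?" check.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	// Placeholders: the artifact you downloaded and the digest the vendor published.
	const artifactPath = "firefox-installer.tar.bz2"
	const publishedDigest = "replace-with-the-vendor-published-hex-digest"

	f, err := os.Open(artifactPath)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Stream the file through SHA-256 and hex-encode the result.
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	actual := hex.EncodeToString(h.Sum(nil))

	if actual == publishedDigest {
		fmt.Println("digest matches: these bits are good")
	} else {
		fmt.Printf("digest MISMATCH: got %s, want %s\n", actual, publishedDigest)
	}
}
```

Determinism is what lets the producer stand behind that digest over time: rebuild from the same inputs, get the same digest, and any mismatch means something changed that shouldn't have.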
As I mentioned before, SolarWinds has a lot of different kinds of products and a lot of different kinds of build systems in use, but we're really big fans of SaaS products and of developers having a whole lot of control over what they're doing, as that golden rule says. And we determined pretty quickly that it's not enough just to be able to self-host. GitHub Actions, for example, has a self-hosted runner concept where you can take the runner, which is just a kind of opaque job engine, and put it on your Kubernetes cluster or your VMware agent or whatever. You can lock down the network, and you can have a lot more security in that posture than you would otherwise. But you're still stuck running exactly what comes from that repo in that agent.

So we decided that we would go with Tekton, which is based on Kubernetes, because it would give us the desired mechanics but also that all-important developer experience that we wanted. To go a little bit deeper on why the SaaS options won't work: if you think about how CircleCI, Travis CI, GitHub Actions, and I think most of the cloud vendors' CI solutions work, the entire build definition, top to bottom, lives in the repo, and the mechanisms for not repeating yourself are primitive and not really enforceable. CircleCI has this concept called orbs, which is their library mechanism; well, there weren't private orbs until just a little bit earlier this year, and if you wanted a private orb you had to use a little hack to keep it private. GitHub Actions still doesn't have private actions today: you have to spam copies around your repos, so you can't just share a single library and use it all over the place unless you want to open it up. And that obviously doesn't scale very well.

And the main thing here is that there's no interstitial authority that can add anything to what the developers are doing. We don't want the developers to have to think too much about security and validation, and indeed we don't want to put any of those controls in their hands, per the golden rule. We need to be able to bring the receipts, as people say, for every bit of the standardization and enforcement that we do, so we have to be able to separate this out. Kubernetes is actually really good at architectures where you submit a resource definition that you've written and then have that thing get augmented and modified by the system. Think about how Istio works: it injects an Envoy sidecar into every pod, and you don't have to think about that; it just sort of happens for you. So the idea that we could easily mutate user-supplied definitions was really attractive to us.

And it's also a great dev experience. We're big fans now of Tekton. It's not the oldest project in the world, and it's still got some rough edges, but anybody we show this to who's used to something like CircleCI or GitHub Actions really takes to it pretty quickly. You write YAML. You define tasks. Tasks have steps. Steps run in a container image. There are great semantics for sharing data, passing data around, and surfacing results. You can have tasks inlined or, crucially, you can reference heavily parameterized tasks that live in a separate repo; you can have a catalog. So yeah, that makes it easy to not repeat yourself, easy to standardize, and easy to enforce that people use particular versions of the things you've built. (There's a rough sketch of what a task looks like below.)
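And here's that rough sketch of a tiny task, expressed with Tekton's v1beta1 Go types (of roughly the era we adopted it) and marshaled back out to the YAML a developer would normally write by hand. The task name, image, param, and script are all made up for illustration, not our real definitions.

```go
// Sketch: a minimal Tekton Task (one param, one step, one result),
// built with the v1beta1 Go types and printed as YAML.
package main

import (
	"fmt"
	"log"

	v1beta1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	task := v1beta1.Task{
		TypeMeta:   metav1.TypeMeta{APIVersion: "tekton.dev/v1beta1", Kind: "Task"},
		ObjectMeta: metav1.ObjectMeta{Name: "go-build"},
		Spec: v1beta1.TaskSpec{
			// Heavily parameterized tasks like this can live in a catalog repo
			// and be referenced, rather than copied, by every pipeline.
			Params: []v1beta1.ParamSpec{
				{Name: "package", Type: v1beta1.ParamTypeString},
			},
			Steps: []v1beta1.Step{{
				// Each step runs in a container image.
				Container: corev1.Container{Name: "build", Image: "golang:1.17"},
				Script:    "go build -o /workspace/out $(params.package)",
			}},
			// Results surface small values (like a digest) to later tasks.
			Results: []v1beta1.TaskResult{
				{Name: "digest", Description: "sha256 of the built artifact"},
			},
		},
	}

	out, err := yaml.Marshal(task)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```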
And this is how we started to be able to reason about scaling a system up to hundreds, and then eventually thousands, of engineers working all over the world. So I guess the TL;DR here is that we can't use the SaaS stuff, but we really want to preserve the basics of that SaaS experience.

Okay, cool. So we have technology that will let us separate concerns. It will allow us to follow that golden rule: the developers write their own stuff, but they're not involved in making sure that it's validated, enforced, and secured. We've got a SaaS-like experience that the developer is familiar with. So the next thing is to tie it to GitHub; we are GitHub cloud users, very heavily invested in that. And while Tekton has a lot of great stuff, it's actually pretty general, and some of the default ways it communicates with GitHub didn't really feel as secure as we would like them to be. So we built several pieces of kit, as I say, to marry the two up.

The first is a GitHub app, which is the thing that actually kicks off builds. A GitHub app, in case you don't know, is basically just going to catch a webhook: you have it sitting in your infrastructure, and when a PR is opened or a commit is added, GitHub fires a webhook with data. What this app does is fetch the repo, extract the pipeline definition from a standard location, validate that it's ready, and then submit it to the Tekton Pipelines controller through the Kubernetes REST API. And then, of course, Tekton starts to spawn TaskRuns, which are just instantiations of the tasks defined in the pipeline. As it does, a given TaskRun might want to talk back to GitHub. We keep our cluster from talking to anything but GitHub, but we don't necessarily want any given task to be able to just do whatever it wants with GitHub either. So we ensure that tasks can only talk back through a proxy, which gives them very limited access to the GitHub API surface. Even if there were some kind of compromise, a task couldn't just send things back to GitHub under the aegis of the token as it wanted.

We also have a Kubernetes controller called our Pipeline Watcher, which reports results back to the GitHub Checks API. This is what reads log lines and status information from the TaskRuns and kicks them back to GitHub in real time. And that gives you an experience that feels somewhat like GitHub Actions: you click on a tab and you can see all of the different checks that have run, in order; you can click into each of them and see your log output; and if you want to, you can click a link and jump straight over to the Tekton dashboard. But you can really do quite a bit of your work directly inside GitHub if you're just looking at a PR and seeing whether it can move forward or not.

Okay, so this is what the system looks like at this point. We've got GitHub and the repo, and the webhook comes in and hits the app; that other box up there is the GitHub kit I just mentioned. And then we have a lot of business logic that lives in a mutating webhook. That's something I could maybe get into in Q&A, but suffice it to say that we need to be able to insert some standardized checks, wrapped up in custom CLI tools, and the calls to those get injected by the mutating webhook.
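As a sketch of that injection mechanism, here's the skeleton of a Kubernetes mutating admission webhook in Go: the API server sends it an AdmissionReview for each object being created, and it responds with a JSONPatch. The patch here just stamps a label (the label name is invented); our real webhook splices standardized checks into the submitted definitions.

```go
// Skeleton of a mutating admission webhook: receive an AdmissionReview,
// respond with a JSONPatch that the API server applies to the object
// before it is persisted.
package main

import (
	"encoding/json"
	"io"
	"log"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
)

func mutate(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	var review admissionv1.AdmissionReview
	if err := json.Unmarshal(body, &review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Toy patch: stamp a label. Real logic would inspect
	// review.Request.Object (e.g. a PipelineRun) and inject
	// standardized validation steps.
	patch := []map[string]interface{}{{
		"op":    "add",
		"path":  "/metadata/labels",
		"value": map[string]string{"trebuchet.example.com/injected": "true"},
	}}
	patchBytes, err := json.Marshal(patch)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	patchType := admissionv1.PatchTypeJSONPatch

	review.Response = &admissionv1.AdmissionResponse{
		UID:       review.Request.UID,
		Allowed:   true,
		Patch:     patchBytes,
		PatchType: &patchType,
	}
	review.Request = nil // only the response needs to go back

	w.Header().Set("Content-Type", "application/json")
	out, _ := json.Marshal(review)
	w.Write(out)
}

func main() {
	http.HandleFunc("/mutate", mutate)
	// Admission webhooks must be served over TLS; cert paths are placeholders.
	log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
}
```

Because the API server applies the patch before the object is persisted, developers never author or maintain that logic, which is exactly the separation the golden rule asks for.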
And that's in part because we do a whole lot of things with on-prem, right? I know that we talk a lot, especially at this conference, about containers. But at SolarWinds, we don't have the luxury of focusing only on containers; we have to do quite a bit with on-prem. So a lot happens in that mutating webhook.

So then, of course, the Tekton pipeline magic happens and things get built. They land in ECR if they're container images, or in S3 if they're built binary artifacts. Okay, so now we know that we can build something, and we know that we can honor the dev experience: we can do a good job giving devs an easy experience, one that feels quite a bit like what they're used to with CircleCI or Actions. So now we need to make sure that we can record how we built it. We need to know the provenance of everything, and that's a word you'll probably hear a lot today. It basically just means where something comes from and, to some extent, what it contains. We have to be able to produce records that hold data telling you what a build step did and what the inputs to that build step were. And those records also need some kind of guarantee: they have to be signed, and you need to know that they came from a certain trusted system, et cetera.

The in-toto project is what kind of helped get us started there. How many people here are familiar with in-toto already? Okay, cool, so that's most of the room. Good job. As you know, then, it solves a lot of the conceptual problems in supply chain security and gives you a framework for thinking about these things, and a lot of standards, or de facto standards, are emerging out of it. Some really smart people from places like Google are working their industry knowledge into the in-toto enhancement proposals. So we've been participating quite a bit with in-toto, and we ended up implementing some of the specs in Go for things that we wanted to be able to use. The main thing we implemented is in-toto Enhancement 6, or ITE-6, which proposes an attestation format for in-toto aligned with SLSA's attestation spec.

This diagram here is a lot nicer than my diagrams; it comes from the SLSA GitHub page. It explains what exactly is meant by an attestation, whether we're talking about a SLSA attestation or an ITE-6 attestation. You can think of it as three major pieces. The subject is the thing that we're actually building. The predicate is just like a recipe: how we built it, what we used to build it, the method and the ingredients. And then, of course, the signature is the guarantee that some system produced this; it's signing it to say, hey, this is me, and you should trust this because I signed it. (And I know there are lots of t-shirts at this con, but if I leave this con without a dancing robot goose t-shirt, I'm going to be really, really sad. So if y'all know where to get one, let me know, please.)

Okay, so great. Now we've got a standardized attestation format for these documents. And, you know, go back to that golden rule: devs can build what they want, but their workflows, their authoring, won't have any role in producing those attestations. Those are not going to be part of their workflow definitions.
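To make those three pieces concrete, here's roughly what the statement half of such a document looks like, rendered as Go structs and marshaled to JSON. The type URIs follow the public in-toto and SLSA specs of that era; the subject and predicate contents are invented for illustration, and the signature envelope that wraps this is omitted.

```go
// Sketch of an ITE-6 / in-toto statement: a subject (what was built)
// plus a predicate (the recipe). A signing step, in our case Tekton
// Chains with a KMS-held key, wraps this in a signed envelope.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type Subject struct {
	Name   string            `json:"name"`
	Digest map[string]string `json:"digest"` // e.g. {"sha256": "..."}
}

type Statement struct {
	Type          string      `json:"_type"`
	Subject       []Subject   `json:"subject"`
	PredicateType string      `json:"predicateType"`
	Predicate     interface{} `json:"predicate"` // builder, materials, steps...
}

func main() {
	st := Statement{
		Type: "https://in-toto.io/Statement/v0.1",
		Subject: []Subject{{
			Name:   "registry.example.com/orion/widget",
			Digest: map[string]string{"sha256": "e3b0c44298fc1c149afbf4c8996fb924"},
		}},
		PredicateType: "https://slsa.dev/provenance/v0.1",
		Predicate: map[string]interface{}{
			"builder": map[string]string{"id": "https://tekton.example.com/standard"},
			"materials": []map[string]string{{
				"uri":    "git+https://github.com/example/repo",
				"digest": "sha1:deadbeef",
			}},
		},
	}

	out, err := json.MarshalIndent(st, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```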
They have to somehow show up some other way in the system. So enter Tekton Chains, which is a newish Kubernetes controller in the Tekton ecosystem. It basically just watches the Kubernetes API for completed TaskRuns. When a TaskRun is done and marked completed, Chains will produce an attestation, sign it, and write it out to the database. And this is the mechanism that ensures that no matter what a developer defines, all of her tasks and steps are going to get captured and written out as documents, which can then be used in other ways from there. And a shout-out to my colleague Fredrik Skogman, who wrote the code to get the ITE-6 attestation spec into Chains. Chains supports several different attestation types, but the one that we've been centering on is the in-toto one, so he extended Chains and added it.

All right, so now our diagram has this extra bit. We've got Tekton Chains in place, and it's sitting there using a KMS backend (never manage your own crypto material if you can avoid it), and it's using that to sign those documents and stick them into a document database. So great: we've got a pipeline, the pipeline is easy to use, it's got attestations, and it's got a database, which means that we can have clients ask questions like, hey, did I get an attestation from the pipeline? And maybe there are other kinds of attestations you might want to put in there too, around vulnerability presence or absence. You can query this database to ask those kinds of questions.

So this is the basic system. But of course, we need more than one system, and they need to agree. As we've said before, a unitary system is bad: we had a unitary system with Sunburst, we had no mechanism of validating, and bad things happened. So to get our consensus-attested system, we need more than one system in place, and those systems need to agree. So we need another thing that looks exactly like the first one, but we also need mechanisms for making it more isolated than the first one.

So this is our final diagram. You can see here that on the bottom we now have a validation cluster: two completely physically separate clusters. We have the standard cluster, which our devs have access to and which GitHub sends its webhooks to. And then we have the validation cluster, which looks basically exactly the same, other than it gets its marching orders by pulling CloudEvents off an event bus. What happens is a webhook request comes in to the GitHub app. The app does all of its validation, sends one copy off to its own Tekton Pipelines controller, and then wraps the same thing up into a CloudEvent and emits it on the bus. The GitHub app in the validation cluster is just configured differently, so it consumes off of that bus instead of receiving HTTP on a bound port. And we actually don't allow any ingress whatsoever into the validation cluster, other than what's obviously necessary to manage it as a Kubernetes cluster. So that gives us an asynchronous, parallel system to do these builds with. And you can see here that each cluster has its own copy of Tekton Chains, each has its own key, and both write into the same document database. And then from there, we actually have a little app that ETLs that stuff into a Postgres database, which a variety of other clients can ask questions of.
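As a taste of the questions those clients ask: the core consensus check at a release gate boils down to something like this against that relational copy. The schema here, an attestations table with a subject digest and a signer key ID, is invented for illustration; ours looks different.

```go
// Sketch of a release-gate consensus check: has this exact artifact
// digest been attested by BOTH the standard pipeline's key and the
// validation pipeline's key?
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres driver
)

func haveConsensus(db *sql.DB, digest, standardKey, validationKey string) (bool, error) {
	const q = `
		SELECT COUNT(DISTINCT signer_key_id)
		FROM attestations
		WHERE subject_digest = $1
		  AND signer_key_id IN ($2, $3)`
	var signers int
	if err := db.QueryRow(q, digest, standardKey, validationKey).Scan(&signers); err != nil {
		return false, err
	}
	return signers == 2, nil // both independent pipelines must have attested
}

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/trebuchet?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	ok, err := haveConsensus(db, "sha256:abc123", "kms-standard", "kms-validation")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("consensus:", ok) // only then may the artifact be promoted
}
```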
So you can sort of imagine that all of this attestation stuff is great, but sitting in a document database it's not so easy to blow out into a relational structure and understand, say, which builds contain which thing, or how many builds came from a given git SHA. Eventually we'll go a little deeper and be able to marry up some threat intelligence data with some build artifact data and ask more detailed questions like that. But this little green box here on the right, release gates, is basically things like a Lambda job that might ask: hey, can I pick something up from the Trebuchet S3 bucket and put it in the on-prem release platform's S3 bucket? It would ask: do I have two systems agreeing on this thing? Do I have the vulnerability posture that I want? If so, great, I'll do that motion. Another one would be a Kubernetes validating admission webhook that asks: this container image that you want me to launch, has it been signed? Has it been signed by an authority I trust? Does it have an attestation from each of the two pipelines? If so, it's allowed to launch in the cluster; if not, it isn't.

In certain situations, we can't do something that we need to do on Tekton. And so in that case, we actually use Tekton as an orchestrator for other infrastructure, via an agent. We're pretty heavy Amazon users, so the agent in this case is baked into an Amazon Machine Image, an AMI, and the Tekton task actually launches an EC2 instance with that AMI. It will boot, look for a task on an event bus, perform whatever task it's been told to perform by the Tekton task, and then send results back on the same bus. And if the Tekton task gets a message indicating it's all good, then it will go ahead and shut down that entire EC2 instance, because the artifact that it needs has been put in a known location on S3, and that EC2 instance is no longer needed. So this has us using what would normally be long-lived VMs, but we're keeping them ephemeral by controlling this stuff from Tekton. And we have this need because, again, we're not just doing containers. I should give you a little bit of background: we had kind of a hard deadline to ship a bunch of things out of the new environment this summer, and they were all on-prem projects based on Java, and they all have a bunch of different types of install targets. I'm talking things like macOS installers and OVAs (VM images), things that you basically can't do on Tekton at all. So we had to come up with this concept. And because it keeps to those original four tenets that I mentioned, ephemerality and, to whatever extent possible, reproducibility, et cetera, we felt like this was fine. We do the work outside of Tekton, but Tekton still knows about it, so the things that are happening still get captured by Chains.

Some miscellaneous details. We use OPA to do some vulnerability analysis. This is kind of in the early phases; we're thinking about just defining policies per project around allowed vulnerabilities, exceptions, et cetera. And then we've actually been experimenting with producing our own attestations, in a slightly different format, for things that we have determined to have a decent posture vis-à-vis vulnerability presence.
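Shape-wise, that gate logic looks something like the following. The real policy lives in OPA and is written in Rego; this Go sketch, with invented types, just shows the thumbs-up/thumbs-down decision: every reported vulnerability must be within the project's tolerated severity or explicitly excepted.

```go
// Sketch of a per-project vulnerability gate: compare a scan report
// against the project's policy (max tolerated severity plus waived CVEs).
package main

import "fmt"

type Vulnerability struct {
	ID       string // e.g. "CVE-2021-12345"
	Severity string // "low" | "medium" | "high" | "critical"
}

type Policy struct {
	MaxSeverity string          // worst severity the project tolerates
	Exceptions  map[string]bool // explicitly waived CVE IDs
}

var rank = map[string]int{"low": 0, "medium": 1, "high": 2, "critical": 3}

// evaluate returns thumbs-up only if every finding is tolerated or excepted.
func evaluate(report []Vulnerability, p Policy) (bool, []Vulnerability) {
	var violations []Vulnerability
	for _, v := range report {
		if p.Exceptions[v.ID] {
			continue
		}
		if rank[v.Severity] > rank[p.MaxSeverity] {
			violations = append(violations, v)
		}
	}
	return len(violations) == 0, violations
}

func main() {
	report := []Vulnerability{
		{ID: "CVE-2021-0001", Severity: "medium"},
		{ID: "CVE-2021-0002", Severity: "critical"},
	}
	policy := Policy{
		MaxSeverity: "high",
		Exceptions:  map[string]bool{"CVE-2021-0002": true}, // waived after review
	}

	ok, bad := evaluate(report, policy)
	if ok {
		fmt.Println("thumbs up: write the vulnerability-posture attestation")
	} else {
		fmt.Println("thumbs down:", bad)
	}
}
```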
We're pretty heavy users of a tool called JFrog Artifactory, which is where we keep all of our dependencies for Node and for Java, et cetera. So we're continually scanning that with one of their scanning tools, Xray. We'll take an SBOM file, look at everything inside it, produce a vulnerability report from Xray based on the things inside there, and then compare that with the policy for that project in OPA, and you get a thumbs up or a thumbs down. And if you get a thumbs up, that means an attestation can be written into the database about the posture of those vulnerabilities.

We keep everything locked to GitHub through the pretty nifty fact that GitHub continually publishes, over their meta API, the CIDR ranges that all of their webhooks are going to come from. The ALB, the Amazon Application Load Balancer that we use, has a great feature that lets you dynamically keep the allowed IP ranges updated. So we're able to keep ingress constantly locked to just GitHub; we don't have any other ingress at all into the standard cluster. And as I said earlier, there's no egress out of the VPC whatsoever other than to GitHub. So if we were compromised, that vector of compromise would have to involve in some way talking back to GitHub, maybe pulling some bad thing from GitHub. But as of about a week or two ago, we've actually started red-teaming Trebuchet, which is pretty fun. So I've got somebody sitting there trying to see if he can escape from containers and things like that. We'll see what he comes up with.

And now some concluding thoughts, and I know I'm very close to my half-hour time. If you haven't already experienced a breach, you will. It happens constantly, and it happens more and more. It's really hard to secure software, so just be humble about this. I know that's kind of facile advice, but it's worth mentioning: it's going to be difficult, and the people that you're going to work with are going to be from all over your company. One thing I've noticed in the course of all of this is that dev teams and internal security teams don't talk enough, and that really needs to change. Folks on both sides need to make friends with each other, understand each other's worlds, and help each other out. That's something that we on the Trebuchet team have tried to do quite a bit over the last few months. As you can imagine, in the wake of this breach we've staffed up quite a bit within our CISO's office, and all of those people are now pretty well read into everything we're doing. They understand it and they're involved with it, and that's only going to pay dividends for us over time.

Move security left as much as you can. Put things into the hands of devs. Devs know the tools, they know the product, they know what they're building. Give them the means and the training to understand how to avoid inserting security vulnerabilities in the first place. Train people on secure system design. You're going to get back, again, massive dividends compared to the investment that you make. And as the ancient saying goes, be excellent to each other.

I like to leave this slide up here for a couple of minutes, because there are a lot of amazing people that helped so much throughout the last 10 months. Like I said, it was really grueling.
This collection of folks, some current SolarWinds people, some former, some in this room, have done just amazing work. As I joked earlier, it's been 10 months that felt like five years. Folks on this list have done everything from implementing in-toto specs in Go to holding hands, virtually, over the phone, for hours and hours, with extremely freaked-out people in the immediate aftermath of the breach. So I know there are people that I missed, but to everybody everywhere, whether you're at SolarWinds, or whether you were affected by this, or whether maybe somebody in your company just ran in waving a ZDNet article saying "hey, we've got to do something about this" and it ate your world: yeah, thank you too. And I also need to observe that today is Indigenous Peoples' Day. So yes, go out there, teach peace, expect justice. Thank you very much. I'll take questions.

[Moderator] We have a few minutes for questions. Let me do one virtual question, and then we'll take a couple from the audience. The first one is: do you have variance across your parallel pipelines? Anything to prevent it being easy to pwn both the exact same way, defeating the purpose of the parallel pipelines?

Variance across the pipelines? No. If I understand the question correctly, the only variance is that the validation cluster, rather than getting direct communication from GitHub, gets asynchronous communication from a message bus. So there's nothing going into that second cluster that hasn't at some level already been validated by a slightly lower-assurance system, if that makes sense. So we double-produce everything. We did originally try to have a reduction in the things that we did in the validation pipeline, but it was just too logistically complex to try to think about how to mutate out certain tasks that didn't make sense within the validation context. So eventually we just decided we'll do the exact same thing in both places. The only change we make is, for example, we'll insert a change to Kaniko so it doesn't push the container; it'll still build an image, calculate checksums, and produce attestations around that.

And then audience questions; just speak really loudly when you ask. [Audience question] So the question is about conversations around trading off build time versus security, if I understand it right. That has really never been a thing as part of this, because of the nature and the high profile of it. I don't know if anybody caught this, but our CEO and our former CEO both had to testify in front of Congress, twice, and they literally promised this system to the United States Congress on live television. So there is no tension there. I mean, I care deeply about the developer experience, obviously, and we do our best to make it as fast as we can. But frankly, our experience so far has been that, modulo some temporary problems with not enough Kubernetes resources, these builds are as fast or faster on Tekton as similar motions were on TeamCity or Jenkins. So we absolutely instrument everything; we do all the things that you do to try to make sure that any cloud service runs as efficiently as it can. And we don't want anybody to have a bad time with this. But security is priority one, two, and three, and that's just the way it has to be right now. Does that answer your question?
[Audience question] Right, so the question is what we're using for KMS. The real answer is that we're using sigstore. My colleague here, Cody Soyland, actually added support for an AWS KMS backend. A lot of the stuff in sigstore is fairly GCP-oriented, so there was already KMS support there for GCP KMS; our team added support for AWS KMS. And then, if you have an IAM role that allows a thing to request a signature, it's just allowed to request a signature over some data, and that's basically what's happening there. So you could conceivably run this on anything, as long as you wanted to do the work. I don't think sigstore supports Azure KMS, but maybe it does now. So yeah, you could conceivably build this pretty easily on any of the big public clouds. We just have a lot of stuff in Amazon, and that's kind of what we're used to. Our DevOps group is managing the KMS material, but it's all adhering to a collection of requirements that emanate from our CISO's office.

[Moderator] Now it's break time. Thank you, Trevor. We'll meet back here at 10:30; that's a 20-minute break. Thanks, everybody.