Oh, I didn't even notice that. Oh, wow. Ready to go? Okay, let's go ahead and do this. I'm going to raise my voice slowly because I can project pretty well, so hopefully the speakers don't cut out. But hello, everybody. Good morning. Welcome to the second day of the conference. I think it's been awesome so far; hopefully it's been awesome for you. I will take questions at the very end, but I don't like to present, I like to have a conversation. One quick rule before we get started, the only rule I ever have: the Beyoncé rule. If you like it, you should put a tweet on it. I'm @BillBensing; always feel free to tweet something. Whether you like it or you don't, whether you agree or disagree, absolutely let the Twitterverse know, and I'd love to have a conversation. So a little bit about me. I'm tacky enough to quote myself, so there's a quote I constantly change in the context of whatever I'm doing. But a bit of my background. Where did I come from? I actually gave up my engineering degree in college simply because I wanted to party, so I went into business. Of course, I got back into industry, and all of a sudden I needed the engineering skills. As I came up through industry, I always wanted to write software, but I kept running into things like security and compliance, which people would throw in my way as a blocker, as if to say, "I don't want to deal with you." So I took on these roles to figure out how to solve this. And once I dug in, I realized that security and compliance in a lot of organizations is hollow. It's security theater. It's governance theater. It's compliance theater. I've always found this interesting. So as we hear about modernization and containers, part of what I like to do is understand what it really means to do this, and then teach it back to people as well. That's a bit of my background.
So I always like to say I got my degree from the University of YouTube when I learned to write software, which probably isn't too far from the truth for a lot of people in the industry these days. But that's a bit about me. I'm at Red Hat, a managing architect in the North American public sector. I do want to give you the bottom line up front: people should not execute the governance process. In some worlds, that's going to be very controversial. What I will say is that machines should be executing the governance process. People should design, develop, and codify the process, and machines should do it. Now, this is not about getting rid of people in the governance process; it's about optimizing humans in the governance process. What are humans good at? Creating and thinking. Think about what differentiates us from other animals: it's our prefrontal cortex, our ability to simulate future situations. That's what makes us human, that's what makes us unique. Not going through SonarQube reports saying yes or no. Machines can do that. So as we think about what it means to optimize governance, what I'll talk through today is an approach for how to do this, how I think it can be done, and I'm going to show you some tech examples of how we're doing it in the upstream. Actually, a little promo here: I got the opportunity to write a book about this, published this September by IT Revolution. If you've read The Phoenix Project, The Unicorn Project, or The DevOps Handbook, you know the vein; John Willis is actually a co-author on this. A number of folks in industry have been doing this, and we wrote a novel about it to try to teach it in that same style. So this is coming out shortly.
So basically what I'm trying to say is that people are doing this throughout the industry, in highly regulated organizations. It's not just a thought experiment or something people are merely thinking about; people are actively doing this in one form or fashion, and this novel is about how it's been done in the co-authors' organizations. So it's not something new. A lot of these authors have been talking about this as far back as 2015. These are papers that came out of the DevOps Enterprise Forum, starting on the far left with "An Unlikely Union: DevOps and Audit." At the end of the day, in any highly regulated organization in today's environment, security and compliance is a feature of your software. Period. It has to be treated as a feature. I like this one because it's a love letter; it's probably the only love letter you can write in your company and not get in trouble with HR. What I like to think is that this is really an admission from folks in the DevOps realm, from folks in software, saying: hey, security, compliance, audit, we've left you behind, and we want to commit to bringing you forward. That's what this talk is about today. As I put these examples out here, if people are doing this in your organization, awesome. And for people who think it may be hyperbole, I really want to set the stage: this is not hyperbole. This is an actual outcome that can happen today as you leave the conference. So a bit of my dream, like pulling a rabbit out of a hat: how can I just deploy software and have it go straight to production? All I want to do is commit. I want to commit and go straight to production. I don't want to deal with anything in between. How can that happen? That's the magic-rabbit-out-of-the-hat question.
I'm going to start off with some terms I'll use today. A quick epistemology, because I don't want to assume that my definition for these terms is your definition. Governance: governance refers to security, compliance, and audit. So when I say governance, I'm referring to those three specific aspects. Now I'm going to tell you what I'm going to tell you over the next 80 or so slides. Governance is the current bottleneck for software delivery. A lot of people have software delivery figured out; some have forms of DevOps, with basic mechanisms to go from commit to production. Probably not in a great way, but they can do it. Governance, right now, is the current bottleneck for software. We must modernize governance capabilities. So what do we mean by that? Modernizing is automating the governance process; it goes back to the beginning, the machines. But one big but: it's more than just automation. It's autonomous. When people say modernizing, what they're really saying is autonomous, and I want to explain why via probably one of my favorite books. Is anybody familiar with this book? I see a lot of people have read through it. Chapter 7, "The Evolution of Automation at Google." If you haven't read it, read it. If you've read it, read it again. People think it just talks about Borg and Kubernetes, but it's about more than Borg or Kubernetes, and that's what I love about it. I'm going to do what they tell you not to do in speaking classes, which is read directly off the slide, specifically to get this point across; it's the gray text, if you can't read it. It says: for SRE, automation is a force multiplier, not a panacea. Hint: automation is not a panacea. A lot of folks know that; just don't tell the vendors selling automation software.
Of course, multiplying force does not naturally change the accuracy of where that force is applied. Doing automation thoughtlessly can create as many problems as it solves. That's where the humans-and-machines point comes in: just automating for automation's sake doesn't really work. Here's what I like: therefore, we believe that software-based automation is superior to manual operation in most circumstances. So there's an admission that there are some circumstances where you need manual input. But better than either option is a higher-level system design requiring neither of them: an autonomous system. So when I think about automation, the next level, the evolution of automation, I think about modern governance. Modern governance is a higher-level system design. That's what it is. Modern governance is autonomous governance. So as I build this argument, here's the basis of where it comes from. For folks who use Kubernetes on a daily basis: I just declare what I want for my application and how I want it hosted. I don't imperatively tell it how to host it. I just say "do this," and it does it, sort of like a little black box. Start thinking about your governance process the same way: as autonomous governance. That's where the thought comes in. Doing this, though, we can't just say we're going to autonomize it. We have to resolve the impedance mismatch. At the end of the day, governance is a human affair, so there's a big impedance mismatch between our traditional form of governance and an autonomous one. As we go to autonomous, how do you resolve what the folks in traditional compliance and governance used to do, how they operated? That's what you actually have to resolve. Arguably, it's not the technology that's the issue; it's the human process.
And we'll focus a bit on that today: resolving impedance mismatches. I'm going to riff on this for a second. Who here has read the book The Goal? A couple of people? If you haven't read it, I highly recommend it. Has anybody listened to Dr. Eliyahu Goldratt's, I'll call it a podcast, though I guess they weren't podcasts back in the '90s, Beyond the Goal? It's an audio recording. If you haven't listened, this next part is taken straight from there, and it gives an example of people adopting technology. Because when you adopt autonomous governance, and Kubernetes is not autonomous governance, but it is an example of autonomy, you can't just adopt. You have to ask four questions. What is the power of the technology? What limitations does it diminish? What are the old rules you used to operate by? And more specifically, what are the new rules? I'm going to give an example that's not related to software. Goldratt talks about what's referred to as MRP, manufacturing resource planning systems. What these do is, take the chairs we're sitting on today. There's a demand for these chairs, and they're made of multiple components. An MRP system does a bunch of math to figure out when, and how many of, the bolts and nuts and everything need to be at the manufacturing facility, based on how long they take to get there and how long it takes the manufacturer to get finished goods to customers. It's a bunch of dependent calculations. Back in the '60s, they used to hire, like, 40 people to sit in a room with spreadsheets and do these calculations. And what that meant is they'd buy three to four months of inventory. Think about three to four months of inventory; let's say it's a million dollars. A million dollars of cash stuck there, not being invested elsewhere in the business. So what happened when the MRP system came out? The MRP system did this all for you.
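As a toy sketch of the dependent calculation an MRP system automates, here's a minimal version in Python. All the part names, quantities, and lead times are made up for illustration; real MRP does this across thousands of interdependent items.

```python
from datetime import date, timedelta

# Illustrative bill of materials and lead times (made-up numbers):
# one chair needs 8 bolts, 8 nuts, 4 legs, and a seat, each with its
# own procurement lead time in days.
BOM = {"chair": {"bolt": 8, "nut": 8, "leg": 4, "seat": 1}}
LEAD_TIME_DAYS = {"bolt": 14, "nut": 14, "leg": 30, "seat": 21}

def plan_orders(product: str, quantity: int, need_by: date) -> dict:
    """Explode the BOM and back-schedule each component order by its lead time.

    Returns {component: (order_quantity, latest_order_date)}.
    """
    plan = {}
    for component, per_unit in BOM[product].items():
        order_date = need_by - timedelta(days=LEAD_TIME_DAYS[component])
        plan[component] = (per_unit * quantity, order_date)
    return plan

# 100 chairs needed on the shop floor by June 1st:
for part, (qty, when) in plan_orders("chair", 100, date(2024, 6, 1)).items():
    print(f"order {qty} x {part} by {when}")
```

The point of the analogy: a room of 40 people with spreadsheets did this loop by hand; the machine does it in seconds, which is what shrinks the planning cycle.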
In a matter of hours, it did what 40 people took about a month to do. So its power was decreasing that time and increasing the accuracy of the calculation. The diminished limitation, and this is the key: instead of having to order four months out at a time, they could now start ordering two weeks out. So instead of a four-month planning cycle, their planning cycle was now a couple of weeks to a month. That reduced the amount of inventory on hand, the amount of cash in inventory, from, let's say, a million dollars down to a couple hundred thousand. Imagine what you could do with the $750,000 that just freed up. That's the diminished limitation. The old rule you used to operate by had all those people in there doing the manual work; the new rule is that you don't need those folks in there. And this is where companies like Black & Decker understood it. Black & Decker in the '60s came to prominence simply by adopting the new rules made possible by MRP systems. Now, other companies bought these MRP systems but didn't adopt the new rules. They kept the people in there, tried to do all the old stuff, and their total cost of ownership went through the roof. They saw no savings. So as we start thinking about adopting automated governance and some of the technology today, you really have to think about the new rules. It gets back to pulling the humans out of the middle, optimizing governance work for what humans are good at, and figuring out what we should be doing to enhance our governance process, not being the cogs in the middle. So, the agenda today; that was all just setting everything up. We're going to talk about the problem and how to solve it, I've got a solution with a demo, and at the very end, I know I've got about 40 minutes, I'll have a recommendation. So, the problem. In most organizations, what is governance?
Governance is toil. I call it security, compliance, and audit, but at the end of the day, it's toil. And what do I mean by toil? Again, back to my favorite book. Toil: the kind of work tied to running a production service that tends to be manual, repetitive, automatable, tactical, devoid of enduring value, and that scales linearly as the service grows. That's toil, right? If I get a couple of new applications, I have to hire a new auditor just to do the paperwork. Here's what I love: what if I told you that the humans in your system are a bug in your system? Well, I'm not going to tell you; they will: if a human operator needs to touch your system during normal operations, you have a bug. So if a human has to touch my system, for example to audit or apply the governance process, that's a bug. And of course, the definition of normal operations changes as your systems grow. Think about our current governance processes: they are fundamentally bugs. I'm not calling the people bugs, but our system writ large has bugs inside of it. There are two types of toil here: governance toil and delivery toil. What is governance toil? It's all that manual, repetitive work, the humans churning away, the cost of having humans there, but more importantly the variability. It's not really the cost; people say "people cost money," but it's not that. It's variability. Variability is a risk, and variability is actually where cost comes in. People know this through automation: when you automate, yes, you reduce time, but you reduce time because you reduce variability. That's the outcome. Then there's delivery toil. You have the governance process, but what about delivery toil? You spin your wheels for months because you don't know what you need to do to get to production, but somebody told you you've got to go talk to this person. Has anybody been there? Done that? Lots of people? Yeah. Most of the room.
That's delivery toil. Imagine the wheels you spin just because you can't figure out what you can't figure out. Nothing is clear; it's all ambiguous. Those are the two types of toil that come out of the governance process, and that's what we want to resolve. Now, this toil actually increases risk. No way! That's my Home Alone face, my best Home Alone. Anyway, I have the numbers to prove it. I'm going to show you this, and some people may have seen it before. Let's say your organization has really good hygiene, really good DevOps hygiene, so you can take a small chunk of work and get from idea to production in eight hours. But before you can actually go to production, you need to go through a manual compliance and security review, and this review takes 40 hours to complete. I know for some folks, they wish it only took 40 hours. So let's assume you're really good with your commits, and say the probability of success for one commit, just one commit, is 98%. Once you commit, whatever feature you're doing, the probability of success is 98%. Now let's talk about five independent changes. What's the probability of success for my five independent changes, now that they're held up by that 40-hour review and go out together? That ultimate probability is actually 90%. You take 0.98 and multiply it by itself five times: 0.98 × 0.98 × 0.98 × 0.98 × 0.98. If anybody's familiar with binomial math, that's basically what you're doing here.
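That binomial math takes only a few lines to sketch. This uses the talk's own numbers; the 25-change batch and the 85%-per-change cases come up on the next slides.

```python
def batch_success_probability(per_change_success: float, n_changes: int) -> float:
    """Probability that a batch of n independent changes ALL succeed.

    Independent events: the joint probability is the product of the
    individual probabilities, i.e. p raised to the power n.
    """
    return per_change_success ** n_changes

# One-piece flow: each change ships alone at 98%. Batch them up and
# the whole batch must succeed together:
print(batch_success_probability(0.98, 5))   # about 0.90
print(batch_success_probability(0.98, 25))  # about 0.60
print(batch_success_probability(0.85, 25))  # far below 0.50
```

The punchline is in the exponent: the review gate doesn't change any individual change's quality, it just forces batching, and batching multiplies risk.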
So what's happening is I now have a 90% rate of success. Simply because I'm being held up, because I can't do one-piece flow and go straight to production, I've reduced my efficacy from a 98% chance of success to 90%. Let's take a bigger example. Say it takes 16 hours to get something to production; that's a little more realistic. Each change is independent, and there are about 25 of them, so it takes you two weeks to get one thing through, and you're backlogging. Now you have these 25 things backlogged, and what's the most common thing people do? Let's batch this. Everybody: let's just batch this together. Well, this is what happens when you batch. These are 25 independent changes, and each of them is still fairly good, still a 98% chance of success if each individual one went to production on its own. But now, because I've batched them up, I have a 60% rate of success. That right there is how bottlenecks that are meant to reduce risk actually increase risk. Does this sound familiar? Is anybody going to production like, "I can't understand how these failed," and then all of a sudden the system just blows up? That's why. Now, let's say 98% is really more like 85%. Take 0.85, raise it to the power of 25, and see what you get: you're well below a 50% success rate. Less than a one-in-two chance that what you're putting into production is going to be good. That's some math on exactly why this is the problem. So how do we solve it? Let's automate it. No. Autonomize it. So how do we autonomize governance? Five guiding principles; we'll walk through each of them. First, collaboration. Remember, when writing software in any organization, security, compliance, and governance is a feature, so you have to collaborate as if it's a feature.
It's an end-user requirement at the end of the day and should be treated like one. You test user requirements; that's what you should be doing here as well. Next, enabling constraints. There's actually a biological term, enabling constraints, and the idea is to constrain a system so it can move in a certain direction faster. So you ask: what options do I take away from my system, broadly speaking, so I can enable my engineering organization to move faster? Next, requiring explicit evidence. When somebody says, "okay, what do I need to go from point A to point B?" the answer can't be "you sort of need this and this." No: I need explicit evidence. This is where you get into machine-readable, something that can be interpreted by machines. Next, zero trust and ephemeral. Treat governance execution as zero trust. At the end of the day, trust but verify; nothing gets through. Zero trust has identity as its core principle: something requests the usage or the mutation of a resource, whatever it is, and there's something in the middle that asks, "do you have the authorization to do this?" Think about governance that way as well. And the implementation must be ephemeral, idempotent, and immutable. Everything we're doing is ephemeral and idempotent, and idempotency is key. How many times have you had the situation where you take the same outcome, say some scans, give them to two different auditors with the exact same controls, and they come back with two different reviews? One says it passes, one says it fails. That's common, right? That's not idempotent. As software engineers, we know everything should be idempotent at the end of the day.
So we've got to think about it as idempotent. Now, our key architectural themes. First, externalize policy execution. Right now, as we go through our CI/CD pipelines, all the tools have their own way to manage policy, their own way to say stop. What happens when you're running multiple stacks, multiple languages, and your tool set is north of a couple hundred tools? These can be small things that run ephemerally, do a scan, and drop off, but they still apply a policy. That's a lot of cognitive load, a lot of overhead. The idea is: how do you externalize that? How do you create a control plane for your policy application? Second, trusted agents. The idea is getting human hands out of the middle. Nobody should be in the pipeline, almost like SLSA Level 4, hermetically sealed. Nobody should be in there while it's running; it should only be code that's executing. Third, observability. Not just knowing what is going on, but knowing what should be going on that isn't. I like the phrase red, green, and black. Red is something that's failing that you know about. Black is what you don't know about: a whole set of controls where I don't know if it's green and I don't know if it's red, so it's in a black hole. When you start down this process, what you'll find is that those black holes are most of what's going on, and the key game is to get all of those black holes to red or green. It doesn't even matter which at that point; what matters is that it's no longer in the black hole. Fourth, convergence. Distilling processes, tools, everything into standard, reusable cross-cutting concerns. I'll talk about golden paths today. This gets down to bringing things into standards. People hate the word standards, but I'm going to use "on-road and off-road," because it's not something you force down somebody's throat.
It's optionality: it reduces the cost to them and lets them make a trade-off decision. So we need to think differently. We need to go from subjective to verifiable. Right now our change management processes are completely subjective, whether explicitly or not, and whether you go to production can come down to the coffee somebody had that morning. And sometimes our governance process, our governance theater, is really about who can escalate the highest. It's not really risk management; it's how quickly I can get to my VP to tell the story about how this is impacting the business, so I don't have to deal with 50 other people who are just telling me no. That, at the end of the day, is our governance process. So we go from that to continuous verification. As we start to think about autonomous governance, this is where continuous verification comes in. I want to commit, and I always use the example of being in production within 10 minutes. One-piece flow, a 98% success rate for that one commit, the smallest possible change, maybe it's changing one word somewhere, constantly going to production. That's the idea of single-piece flow. To achieve this continuous verification, we must autonomize our human control gates. What do I mean by human control gates? This diagram is actually from the DoD Enterprise DevSecOps Reference Design. If anybody's in the public sector, it's a really good, hundred-or-so-page amalgamation of leading DevOps thought. It shows the gates: those little diamonds in the middle, build, test, release. At any point in time there are tools that allow you to go to the next step, but sometimes that's where the humans come in the middle.
That's what happens. So how do we autonomize these human control gates? Before we go on, some quick definitions around how we're going to autonomize. Evidence, and you'll hear me use this word a lot: this is structured or unstructured data. I run a scan, something comes out; that's evidence. An attestation is a signed something, a signed set of evidence. Say I have a SonarQube report; if people are familiar with the Sigstore ecosystem and Rekor, I sign that report and store the record up there. I have a signature for that artifact, just like signing a container, a jar, any type of artifact. Policy describes the expected outcome: this is what I expect something to be. And then audit: this is your pass/fail. You're comparing the attestation to your policy and asking, does it pass or does it fail? You'll hear me use these terms, so I wanted to get them out there. This is how we autonomize the control gate: we have two activities, an evidence-and-attestation procedure and a policy-and-enforcement procedure, and I'm going to walk you through exactly how this happens. But first, a concept I almost forgot; I talk about this so much that I almost forgot to introduce it. We need a governance contract. As we're going through this, there's a new concept of a governance contract, and this is critical. The reason I bring it up, and there's a whole 20 slides dedicated to it, is that this is specifically how you autonomize your governance approach. So what is this governance contract? At the end of the day, it defines the semantics and syntax of our governance primitives. Notice one thing I'm starting to do here, and I love the example from chapter one of the SRE book, Ben Treynor Sloss's chapter, basically.
I forget the exact quote, but it's along the lines of treating our infrastructure operations as a software engineering problem. You see what we're doing here? We're treating our governance as a software engineering problem. I think that's key, and I want to call that out: as we autonomize, we're applying basic software engineering principles to solving our governance process. So this contract is the semantics and syntax that define our primitives. Has anybody ever scratched their head and asked: what are the primitives of my governance approach? Not many people have thought about that, but it's the same way you apply domain knowledge in any other domain. It's how we codify our governance specifications. This syntax, these primitives, are how we're going to codify. And again, we're making it explicit and machine-readable, so machines can execute it and humans just codify what it is; machines need something they can interpret. So really what we're doing in the contract is saying: okay, whatever happens at this gate, we're going to create a governance contract around it, so we can determine if we pass here. And this is for all gates, not just testing. You'll see my example focuses on unit testing today, but the whole idea is that whatever happens at a gate, it can be automated: automate the collection and attestation so we can automate what the humans usually do here and provide the approval. So here's what a form of a governance contract looks like. I'm going to focus on unit testing, and we'll go through it. A couple of things I want you to notice. First, you don't know if this is Java. You don't know if this is Python or C.
Frankly, it doesn't matter. My policy says I need 100% unit test coverage. It didn't say "except for your C stack." It says: for everything. So here's what we're looking at. Our governance procedure: that's unit testing, the procedure we're going through. This right here is the control gate; we have a unit-testing control gate. Then our procedure elements: you see time, tests, errors, skipped, failures. Those are just the elements, the outputs we're looking for; "tests" is the total number of tests. So for my unit-testing procedure, I'm looking for how long it took, how many tests I ran, how many errors I had, how many I skipped, and how many failed. And our procedure values are, of course, the values: how many tests we have, three here. Very simple. There's some other metadata in here, name and description, but at the end of the day, this is the core of the governance contract. And this probably looks very familiar to a lot of people. This hopefully isn't mind-shattering; it should be a "duh" thing. I'll be honest: we stole this from how we operate everywhere else, the same ways we've autonomized the maintenance process. So how was this governance contract created? Let's go through that real quick. What you'll see is this unit test; now we know this was a Maven test result. We took the outcome from the Maven test run and serialized it into this. That's our first step: the evidence collection process. We go through evidence collection and we serialize it. There are two procedures you're going through here: you're scanning, and you're taking the scan outputs, whatever they may be, and processing and persisting them.
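As a sketch of that collect-and-serialize step, here's a minimal version assuming a Maven Surefire-style XML test report. The XML snippet and the field names are illustrative, chosen to mirror the contract elements described above, not a formal schema.

```python
import json
import xml.etree.ElementTree as ET

# Illustrative Surefire-style report: 3 tests, all passing.
SUREFIRE_XML = """
<testsuite name="com.example.AppTest" time="0.042"
           tests="3" errors="0" skipped="0" failures="0"/>
"""

def collect_unit_test_evidence(report_xml: str) -> dict:
    """Serialize a raw test report into machine-readable evidence.

    Pulls the contract's procedure elements (time, tests, errors,
    skipped, failures) out of the tool-specific output.
    """
    suite = ET.fromstring(report_xml)
    return {
        "procedure": "unit-testing",
        "elements": {
            "time": float(suite.get("time")),
            "tests": int(suite.get("tests")),
            "errors": int(suite.get("errors")),
            "skipped": int(suite.get("skipped")),
            "failures": int(suite.get("failures")),
        },
    }

evidence = collect_unit_test_evidence(SUREFIRE_XML)
print(json.dumps(evidence, indent=2))
```

Note that nothing downstream of this function knows or cares that Maven produced the report; that's the point of serializing evidence into a tool-agnostic shape.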
So the first thing you do is take it, pull it, and serialize it. The second thing you do is an attestation. This is where you start to develop pedigree and provenance. When we think about software supply chain and what it means, it's really pedigree and provenance: being able to see where my artifacts came from and how they've traveled over time. And now we're applying this to more than just my container or my jar, whatever it may be; we're applying it to the provenance artifacts too. Think, in a perfect world, if you had your software artifact and your repo on GitHub, and then through your entire CI/CD process you had signed collections of evidence showing how it went through and how it measured against your policies. Think about what kind of trust that could establish from a community perspective. I know a couple of folks here in the room have asked: if there's a person A and a person B, how do I know to trust person A versus person B? What if they showed all of their evidence, how they evaluated that artifact and that repository, to start to establish trust? It's about visibility. That's key here, and that attestation is tied back to an identity, so I know who signed it and whether it's valid. This gets back into Sigstore, short-lived keys, and publicly available information about your keys as well. So how is this governance contract audited against policy? This is how we apply it. If you're familiar with OPA, here's a little Rego example, very similar. All we're doing is saying: hey, in our workflow, look at our unit tests, go to our attestations, and look at the test quantity. Is the test quantity equal to the pass quantity?
So at the end of the day, if I had three tests, does the pass quantity — which doesn't exist on there yet, but should — equal three? That's all I'm saying: I want to run unit tests, and the number of unit tests I have needs to equal the number of passed unit tests. Simple; apply it that way. And this is how you start to externalize. So now, where SonarQube, for example, would say, "Nope, you're not at 80% code coverage," or whatever it is, I've taken that power away from SonarQube. I want SonarQube to scan, and I want the outputs, because that's what I'll serialize for that one tool, same as the Maven outputs. This is the control plane layer, where I'm pulling the outputs out and allowing the policy to be applied externally. And the beauty behind this — and this is where cognitive load comes in, and I'll always hit on cognitive load — is that I reduce the cognitive load for the organization. Nobody needs to know how to operate SonarQube, Maven, or tools X, Y, and Z. All they need to know is how to contribute compliance and security as code in this fashion. So codify the expectations, take what the humans do and decide what types of controls there are, get the software engineers involved, build the procedures to evaluate those, and we're done. Now you've started to reduce your variability; this is how you get that single-piece flow. With policy enforcement, the idea is: yes, you're evaluating the outputs, but you're also evaluating a bit of the provenance and pedigree. What you'll see here is validating hashes. I'm actually going to retrieve the signed evidence, and I'm going to retrieve a policy; my policy can be signed, too. First: are my signatures valid, yes or no? If not on either one, stop. If valid, apply the governance, then go through, do your audit, and pass or fail.
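The signatures-first audit flow just described might look like this sketch. An HMAC stands in for real signatures here purely for illustration; in practice this would be a Sigstore short-lived key, and the policy step is stubbed down to one check:

```python
import hashlib
import hmac
import json

KEY = b"demo-key"  # stand-in secret; real flow would use a Sigstore signing key

def sign(payload: bytes) -> str:
    """Illustrative signature: an HMAC over the serialized evidence."""
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

def audit(evidence: dict, evidence_sig: str) -> str:
    """Step 1: validate the signature. Only then, step 2: apply governance."""
    raw = json.dumps(evidence, sort_keys=True).encode()
    if not hmac.compare_digest(sign(raw), evidence_sig):
        return "fail: signature invalid"
    passed = evidence["tests"] - evidence["failures"]
    if evidence["tests"] != passed:
        return "fail: policy not satisfied"
    return "pass"

evidence = {"tests": 3, "failures": 0}
sig = sign(json.dumps(evidence, sort_keys=True).encode())
print(audit(evidence, sig))                       # pass
print(audit({"tests": 3, "failures": 1}, sig))    # tampered evidence -> signature fails
```

Note the ordering: tampered evidence never even reaches the policy logic, because the signature check fails first.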
But now we're adding an extra step: because we're automating it, we're adding this pedigree and provenance. As we start thinking about the supply chain and supply chain concepts: what if we just start sharing artifacts? We start sharing everything underneath. I forget the term, but it's one of the philosophies that underpins cryptography with public-private key pairs: there's information you publish, but there's still some you keep private. It's the same concept, where you're sharing a bunch of public information, and this is a bit of how you can establish that trust. So — oh man, I am definitely not too good with this clicker. What does this look like when it's applied to software delivery? All right, I give up, let's go back. It becomes an evidence collection mechanism. Your continuous deployments are now auditable. As I go to CD, that gate in the middle asks: can I go to production, can I go to this environment? You can audit before you audit: you go through the process, but what's even more beautiful is that the process of your deployment is now more evidence for any subsequent action. Now you've 100% automated all of your commits to production. From the point you commit, everything goes through compliance as code and security as code, all of your tests. Really, what this is, is just an extension of behavior-driven design and user acceptance testing at the end of the day. You could write a lot of this as given/when/then if you get to those levels. So these are the gates we can audit on, and this includes, but is not limited to, these specific gates. So let me go ahead and go into a solution. There's an upstream called Ploigos.
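The idea that each gate's outcome becomes evidence for the next gate can be sketched as a hash-linked chain. This is a simplified illustration of the chain-of-custody concept, not Ploigos's actual storage format:

```python
import hashlib
import json

def append_evidence(chain, stage, outcome):
    """Link each gate decision to the previous one by hash, so every
    stage -- including the deployment itself -- becomes auditable
    evidence for later stages."""
    prev = chain[-1] if chain else None
    prev_hash = (
        hashlib.sha256(json.dumps(prev, sort_keys=True).encode()).hexdigest()
        if prev else None
    )
    chain.append({"stage": stage, "outcome": outcome, "prev": prev_hash})

chain = []
append_evidence(chain, "unit-test", "pass")
append_evidence(chain, "static-scan", "pass")
append_evidence(chain, "deploy-staging", "pass")  # the deployment is evidence too
print([entry["stage"] for entry in chain])
```

Because each entry commits to the one before it, an auditor can replay the whole path from commit to production and detect any record that was altered after the fact.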
What I'm going to show you today is in the Ploigos upstream. I'm going to skip through a couple of slides real quick — I have about five minutes left — but one of the components is called the step runner. It holds the imperative implementations of the underlying tools and provides an abstraction that basically makes your CI tools dumb, and it allows you to reuse all of this. The idea of a golden path is that you build these golden paths for your organization to use, and as teams use them, everything from commit to production is taken care of for them. They don't have to think about it; they don't have to think about how it passes compliance; it's just there, right? Think low total cost of ownership, versus them trying to roll all of this themselves. No matter what you're doing, you still have to abide by the same policies. So here's a bit of how Ploigos works, real quickly. We have the imperative logic inside the step runner. It goes out, gets a config file, and asks: what is a static scan? The config defines what a static scan is. It invokes the tools and collects all the information. It also lets us layer in other concerns, because we're writing in a higher-level language, and one of those concerns is our implementation of collecting evidence. Ploigos itself can go through and collect and serialize all the outcomes. So we're writing, separately and in a higher-level language, how to do all of this, while the CI tool is still just saying "invoke static scan." After that we can use something like Sigstore to persist it, as a place people can hit to validate the signature and things like that. So let's go ahead and do a quick demo.
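The step-runner pattern just described — CI invokes a named step, a config decides which tool implements it, and evidence collection is layered on top — might look like this sketch. The config shape, tool names, and stubbed scanner are all illustrative, not the real Ploigos config:

```python
def run_step(step_name, config, runners):
    """The CI tool only knows the step name ('invoke static scan');
    the config decides which tool implements it, and evidence
    collection is layered on top of the tool's raw output."""
    tool = config["steps"][step_name]["tool"]
    result = runners[tool]()  # imperative tool invocation hidden behind the abstraction
    return {"step": step_name, "tool": tool, "result": result}

# Illustrative config and a stubbed scanner standing in for a real tool.
config = {"steps": {"static-scan": {"tool": "sonarqube"}}}
runners = {"sonarqube": lambda: {"issues": 0}}

evidence = run_step("static-scan", config, runners)
print(evidence)
```

Swapping SonarQube for another scanner only changes the config and the runner entry; the CI pipeline and the downstream policy evaluation don't change at all, which is what keeps the CI tools "dumb."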
I just want to show you what this looks like; I'm not going to go too deep, since I've got about three minutes left. In one of our pipelines — where are my pipelines at — we're going through a long, big pipeline, running a lot of scans and collecting evidence at the end of each, and from that evidence we're deciding whether or not to go to the next stages. As we do this, we're storing all of this information in Nexus; you could store it anywhere, but I'll show you what this looks like. Let me go over to where I downloaded this earlier. In Nexus, what you'll see is the evidence: this is the governance contract that's generated. This governance contract is generated every single run, just like what you saw there. I know it's a bit of an eye chart, but we store all that information as well, zipped up in a repository area, so I'll go ahead and download it and show you what it looks like. The governance contract is actually stored with it, so we can attest to the governance contract and everything in there. What you'll see in here are the unit test outputs — this one's Maven — and everything we're collecting. We've signed this artifact, so all the evidence is signed; we've serialized it, and that serialization is signed. So now you have a full chain of custody of what's happening, and that's exactly what we're going for. I know that was a really quick demo, but as you go through it you'd see you just invoke it and it does the same thing, and if you want to follow up, I'd be happy to talk more about it.
But let me go back and just finish here. I'm going to give you a recommendation, because my screen can't bring anything else up — we'll go back to sharing this. All right. My recommendation: everybody's read Team Topologies and the whole concept of platform teams. To do this well and put the human aspect into it, you want to change how your organization operates and build a platform team around governance. So take your change approval board and blow it up — but don't completely get rid of it. What they're doing is key, and it's good for the organization, but change how it operates. What if your change approval board became a modern governance platform team, whose internal product and responsibility was building these capabilities for you as an engineer, ensuring you get to production in a compliant way, and building golden paths for you? You can choose to use the golden path or not; that's your tradeoff to make. But with the method you've seen here today, you can commit, go to production, and reduce both the amount of risk and the time to production. So that's a bit of my own — call it controversial — recommendation: blow that board away as it stands. I think I'm pretty much at time now. If you have any questions, let me know and I'll answer them. Thank you very much for your time. I don't know if I'm getting kicked out or if I can answer questions. [Audience question: how do you know if you got it right?] Through that collaboration we talked about: I got it right if my security and compliance folks can point and deterministically say, "Yeah, that's exactly what we expect," and they can see that in the outcome. If I got it right, ultimately the folks who are used to being the cogs in the middle become extremely comfortable with this, and they become advocates for it.
That's sort of the way to see it: I can show you the technical ways you got it right, but organizationally, it's when my governance organization becomes an advocate saying this is the only way we should do governance. That's how you know you got it right. [On whether Red Hat does this:] I can't necessarily speak to how Red Hat does this internally. We are engaged with Sigstore specifically, and around this approach it's something we do to help customers as well. Some of the things like Sigstore we are bringing into the Red Hat products over time, and we're using them there. So you're starting to see a lot of things that Red Hat and the ecosystem are working on. But does that answer your question? Okay. [Audience question: this painted a very simplified picture of governance; when it comes to risk management there are more complex decisions and more complex actions to take. How does that complexity fit into this simplified view?] Good question, by the way, because everybody says "we have way more complexity." I go back to challenging that: when somebody says that, I ask, okay, what about the risk management aspect is complex? This gets to the five whys, because a broad statement such as "we have more complexity" needs digging into: why? What's complex about it? Ultimately it will come down to some level of ambiguity. What I'm trying to drive toward is declarative capabilities.
How can I get to something that's declarative and explicit? If I can get there, then I can describe the issue. A lot of it comes down to the reasons we have these ambiguities: we can't really describe the problem we're having, and that's where our human systems stall — we fall back on phrases like "we're spending too much time on this" or "this is a science project." So the question for risk management, as you go through this, is: you've codified the process, but the humans — the governance folks — should now be out there making more of the qualitative assessments. If you've reallocated them properly, they're focusing on the qualitative aspects: what new controls are needed, what other risks are coming up, which new sets of controls need to be defined in this way, and how do we codify them so we can get them into this process? So, a bit of where I think your question may be going: this isn't the only way. This is just how we codify it, make it as code, and make it autonomous. There still is a human aspect that decides what we need to codify and what we need to consider, and that's where this comes in. I think I'm at time; I'll take some more questions later. Thank you very much.