All right, I think it's 3:15, so we'll get started. Hi everyone. My name is Michael Edenson, co-founder and CEO of Fianu Labs. I know some of you probably came here to see Andrés Vega. Unfortunately, he was unable to make it at the last minute, so I'll be sharing some of his thoughts as well. Andrés is with ControlPlane, a professional services firm that does a lot of solutioning around automated governance, and we build products for automated governance as well. Andrés and I have been working closely over the last two years developing solutions in this area, and we wanted to share some of the things we've found, some of the things we're working on, and some lessons learned.

My agenda for today: I'll give a little background and context on what we mean by automated governance, along with a few definitions. Then we'll talk about attestations, open source tools, an example architecture, and a demo of the art of the possible, before talking about where to go from here.

For background, a lot of this conversation around automated governance began with a white paper written in 2019, the DevOps Automated Governance reference architecture, developed by a group of individuals as part of the IT Revolution think tank, with the banking industry heavily represented. That architecture was taken and put into practice at some of the nation's largest banks. The result of those implementations led to the book Investments Unlimited, which details a fictional bank's journey toward automated governance and how to resolve some of the issues that can arise, like MRIAs.

The nature of this is that building software is complex. We all know that, and at a regulated institution it's amplified. You have a bunch of different runtimes, build platforms, toolchains, and release environments, and being able to capture all the requisite metadata for a software change can be pretty tricky. So when we talk about automated governance in this presentation, I'm going to focus on one specific part: the path to release. For those of you at financial institutions, this may look familiar: everything from creating a user story, to writing the code, to executing your toolchain, then creating your change requests, doing your validation tasks after, and then providing the traceability of all of those events when the auditors come calling.

So what is automated governance? The definition we'll work from today is this: automated governance is the machine-orchestrated capture and verification of SDLC event metadata, immutable storage of evidence, and automation of release authorization based on predefined policy definitions. When we say predefined policy, we mean transparent policy that's evident to the developer: they can see what's required of them and what they need to do to meet it. It's contractual, meaning it's been agreed that if you meet these policy requirements, you can release the software. It's version controlled, so its history can be shown retroactively, and you get bonus points if you store it in source control, depending on how you want to implement it.

The second piece is telemetry: the machine-led capture of all the event metadata around your SDLC. That means there's no human reporting; you have software that can capture and independently verify the authenticity of events as they happen. And the definition of an event not occurring is that the machine couldn't capture it.
We can talk more about that in a little bit. It also means having clear definitions of the system of record and resource URIs. Although your automated governance software may not be the system of record for a lot of the evidence, it holds resource URIs that can point to that evidence at the time of audit, along with the ability to reproduce that chain of events at any given time to show the results.

The next piece is tamper-evident data stores: digitally signing your attestations so they can be immutably stored in a database. Come time of deployment, when it's time to access these attestations, if the signatures don't match, you know the data is untrusted, and that protects against tampering with the evidence. The last piece is automated enforcement, which takes the subjective portion of authorization at the time of release and makes it completely objective.

A lot of what we'll talk about today revolves around attestations, so I thought I'd dive into the anatomy of the attestations we're talking about. The attestation declares what event occurred, identifies the asset in question, and timestamps it with context, so you know what happened and when. That could be that a code scan began, a code scan completed, all the different types of context around the event. It describes the conditions surrounding the event: how did it happen, how did it come to be, what were the environment variables when the event occurred? It captures the details of the output of the event; if you're running a SAST scan, that would be the vulnerability report. It compares the results to the policy, determining pass or fail or whatever the compliance state is. And finally, it contains enough information to reproduce the event.
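To make that anatomy concrete, here is a minimal sketch of what one such attestation might contain, together with a pass/fail check against its predefined policy. The field names, values, and the `evaluate` helper are hypothetical illustrations, not a real Fianu or in-toto schema:

```python
# Hypothetical attestation for a unit-test-coverage event.
# Field names and values are illustrative, not a real schema.
attestation = {
    "event": "unit-test-coverage",            # what occurred
    "asset": "git:acme/payments@3f2c1a9",     # the asset in question
    "started_at": "2023-05-01T14:02:11Z",     # timestamps with context
    "completed_at": "2023-05-01T14:03:40Z",
    "conditions": {                           # how the event came to be
        "build_server": "authorized-runner-07",
        "pipeline_run": "build/4481",
    },
    "results": {"coverage_percent": 91.0},    # output of the event
    "policy": {"minimum_coverage": 80.0},     # the predefined policy
}

def evaluate(att: dict) -> str:
    """Compare the event's results to the predefined policy."""
    passed = att["results"]["coverage_percent"] >= att["policy"]["minimum_coverage"]
    return "pass" if passed else "fail"

print(evaluate(attestation))  # -> pass
```

In a production system this comparison would live in a policy engine such as Open Policy Agent rather than application code, so the rule itself stays transparent and version controlled.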
All of those pieces, stored together in each attestation, are the threshold we're talking about for automated governance: what it takes to reach the point of confidence where you can make these automated decisions.

There are a lot of open source tools that can help in this process, and we'll show what you can make with them in a little bit. Some of the most important ones are Sigstore, a digital signing tool, and we'll show a little of how we're using it; in-toto and SLSA, for proving the authenticity of the artifacts you're capturing as well as the environment they were captured in; and Open Policy Agent, for writing the rule sets that determine the definitions of passing and failing.

In an example architecture for the build and release process, you run all your pipeline steps in your build pipeline; you have your dev release pipeline, where you deploy the code and do other types of testing; and finally your prod release pipeline. All of the raw event data is captured throughout that process and stored with context. Enrichment is very important: if you don't have all the information you need from the raw event data, you need the capability to go back and enrich it. Then the policy engine determines pass or fail, a signature is produced and stored in the immutable ledger, and the attestation is stored in its own database.

With an attestation API on top of this architecture, you can build some cool stuff, and that's really the next piece of this: continuous feedback. Automated governance produces a lot of really granular data, and that's by design. We're trying to take humans out of the governance of software development, so naturally we have to provide a lot of evidence to support that. So how do we design that in a way that's user-friendly, that encourages developers to want to participate in this automated governance process? As we say, make the right thing the easy thing.

We're building up here to what we're going to show, but an attestation example would be a combination of a few pieces. On the left-hand side you see the control; we'll use unit test coverage as the example. On the right-hand side is the result: for this attestation, this code base has 91% code coverage with unit tests. The policy piece is the predefined policy we agreed upon, which in this example says you need 80%. Then there's the raw event data, which captures the results of the unit tests being executed in the pipeline along with all the context around that; the rule, written in Open Policy Agent, that says your coverage must be greater than or equal to the minimum; and finally the digital signature over the result of all those pieces, so that when it comes time to deploy, we can look and say, all right, this matches the signature, we can trust this data is authentic and of the highest integrity. That data can also be produced for auditors at the end.

Our approach at Fianu is a simple shared language: taking all of the governance and compliance and breaking it down into five states. Those are pass, warn, fail, in progress, and not found. All of these are pretty straightforward except the one I want to talk a little bit about: the not-found state. That one is really important in your automated governance journey. As you build these controls out and provide this feedback to your developers, not found is going to be very useful to you, because a lot of times, and I've seen this in practice, one of the bigger challenges for automated governance is that the developers are doing things. They're doing code scans.
They're doing testing, but they're not doing it in a way that's traceable. Being able to show them that something is not found, that we don't have any evidence of it even though it's required, is important because it helps them triage those configuration issues. Adopting the posture that something is considered not found if the machine can't observe it independently is a really important step toward getting everything in order.

The demo I want to show you real quick is the art of the possible. As I mentioned, we've been working on a product for automated governance, and we want to show you what we're working on, hopefully to give you ideas of what you can build with the pieces that are out there in the open source community. With all of these tools, what can we build?

Here we have a visualization of one service, one code repository in an organization. You can see all of the past versions of that code, all the past commits, the environments this version of the code is currently running in, and all of the controls with their current state of compliance. For this example, this repository, on this commit, produced two artifacts, two Docker containers: a user interface and an API. That means there are two attestations for the container scan here. We can click into each one and see, okay, the API is failing the container scan, so we can go look at the chain of events that led to that. We can look at the pipeline conditions that were set at the time the container scan was kicked off; this tells us the scan was provisioned from an authorized build server and not someone's local machine. Then we have the enrichment callback, which identifies the assets in question and normalizes them on the repository identifier. And finally, the attestation that produces pass or fail. For each of these we have the Sigstore Rekor ledger entry, so we can compare those signatures and prove that this data is authentic.

For a developer, they can go in here and say, okay, here are my vulnerabilities that I need to remediate, and it takes them to the system of record so they can do so. It's all event driven, so as developers build and rebuild their code, this updates in real time. This is just a granular view of how a developer would interact day to day, but I think you can extrapolate that this kind of data, captured on every single commit of the code base, for all the different control sets in your organization, over time provides some pretty big insights into how your applications are performing.

So the question is where to go from here: assembling these open source tools and using these capabilities to build something that provides traceability end to end, from code to production. I'm going to tell you a few stories from the trenches. In my past role, I did a lot of work in this area at one of the top ten largest banks in the country, so some of these stories are from there, and some are from others I've talked to in the time since. A lot of it revolves around the question: now that we have all these details, what does that mean? What you do with them depends on your organization, but here are some of the things we've observed to work and not work. The first lesson is not to just dump the details over to your regulators. Obviously, they can be taken out of context.
This data has a lot of granular information, and it can be misused if it's not fully understood. We know of an organization that at one time had a policy saying you must do unit tests. The risk department was looking through and saw that one build didn't have unit tests, because the developer had commented them out, and they were actually prepared to terminate that developer for turning off the unit tests. Fortunately, they didn't, but it tells you that this information is pretty powerful, and if it's not fully understood, it can be used to create a culture that's not desirable.

The alternative is to abstract that data and create something like a scorecard: you're really good, you're passing everything, an A, and so on through B, C, D, all the way to F. There are challenges to that as well, because one of the things you want to achieve with automated governance is the federation of responsibility for compliance back to the developers. You want the developers to own their own compliance.
We found in practice that if you abstract that evidence too much, to the point where it becomes a simple A-through-F representation of compliance, then it tends to go back to the administrators, who then try to obtain compliance top-down, and that doesn't seem to be very effective.

What we found to be most effective in practice is automated enforcement. What that means in this example is: you write your code, you build, you test, you deploy to dev, and at each point in time we give you feedback. For example, when you open a feature branch, we'll preview your current compliance right then. During build and test, we'll alert you again to your given compliance. During your dev deploy and the promotion of the artifacts, we'll warn you and say, hey, you're not compliant on these, you need to remediate. And if you still attempt to deploy to production, that deployment will be blocked. Doing this in a fully automated fashion, without any human intervention, doesn't make everyone very happy when put into practice, but I've seen that it is very effective.

When rolling this out in an implementation, begin with the easy controls and move to the harder controls over time. The biggest challenge you'll see is that it requires a huge change in behavior for developers. They're having to use muscles they haven't used before, really starting to think about compliance as part of every piece of their software development. But you also don't want to suffocate them to the point where you're breaking all of their CI builds if they're not compliant.
You want developers to have the freedom to build and test and get things up there, and in a break-the-glass emergency, you don't want to be blocking them at the point where they need to be writing new features and getting them out. So this is just an example model of how something like this can be rolled out.

Breaking down why this enforcement works, game theory actually helps to explain it. The optimal situation for a developer is that they can develop features in the Wild West, and the optimal environment for governance, risk, and compliance is that they get their twelve-week evidence review before the change. The not-optimal situations are vice versa: for developers, the twelve-week review is the least optimal, and for the risk and compliance group, the Wild West is obviously the least optimal. The subjective change process is the way things are currently done, and it's workable for both parties, but it doesn't achieve the objective of continuous release. The mutually tolerable situation for both groups is automated governance, with automated enforcement of all these policies. The developers don't love it, because it's a pretty hard-and-fast requirement: if you're not compliant, your code won't deploy. And the governance, risk, and compliance folks don't get to have their hands on it; they have to agree ahead of time to all the different evidence thresholds. But it does provide the organization with the capability to improve the integrity of its compliance and deliver features faster.

Some opportunities this provides: first, the ability to do different control templates. Based on the product type or the risk posture, whether it's internet facing, higher risk, or affects the movement of funds, you can apply different sets of control requirements. You can even provide more context and say that for a specific type of release there's another set of controls to apply. For example, if you're only updating the base image, you may not need to do accessibility testing, whereas if you're changing business logic on the front end, maybe you do. So you can define these different types of releases, in different contexts, for each of your applications: here are the controls that need to be met, and here's what you can do for this type of change.

It also gives the executives knobs and dials: the ability to turn up certain values in certain places. You can say, I want maybe 85% test coverage on all new code, and on legacy code maybe 55%. You can also say, all right, if you have a legacy code base, you have six months to remediate your high and critical vulnerabilities, at which point you'll automatically be blocked on any new high or critical vulnerabilities. It gives you that granular level of control.

It also allows you to shape developer behavior. In one example from past practice, we observed a large bank that wanted to upgrade its CD pipeline libraries.
In the past, that typically would have been a process requiring a project manager, a budget, and probably a year's worth of time to go in and help each development team migrate to the new versions of the libraries. But with automated enforcement, they were able to start marking builds as unstable. They gave four or five months' notice to say, at this point in time, you won't be able to deploy if you're using those pipeline libraries. They provided visibility along the way as the developers were warned, and the developers migrated over themselves. As we said, it comes back to federating the responsibility for compliance to the developers.

It also gives you a path toward fully automated releases. When you get to the point where you can be fully compliant with all the policies and procedures, you should be able to release without any human intervention. That means no change boards and no manual risk review: going straight to production. That's really the end goal for automated governance, and the outcomes are that your release cycle times go down, compliance goes up, and, most importantly, developer happiness and productivity increase as well.

So that's what I have. As I mentioned before, Andrés is with ControlPlane, doing a lot of consulting in this area and providing solutions, and we at Fianu are providing products in this space. If you found any of this interesting or applicable to your company, please reach out to us and we'd be happy to talk. Thank you. I think we have some time, so I'm happy to answer some questions if anyone has them.
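The staged enforcement described earlier, preview on a feature branch, alert during build and test, warn on the dev deploy, and block only the production deploy, can be sketched as a simple gate. The stage names and actions here are hypothetical, chosen to mirror the rollout model from the talk:

```python
# Hypothetical release gate: feedback escalates by stage, and only the
# production deploy is actually blocked on non-compliance.
STAGE_ACTIONS = {
    "feature-branch": "preview",  # show current compliance, never block
    "build-test": "alert",        # alert on failures, never block
    "dev-deploy": "warn",         # warn that remediation is needed
    "prod-deploy": "block",       # hard stop if not compliant
}

def gate(stage: str, compliant: bool) -> str:
    """Return the action taken for a deploy attempt at a given stage."""
    if compliant:
        return "proceed"
    action = STAGE_ACTIONS[stage]
    return "blocked" if action == "block" else f"proceed ({action})"

print(gate("dev-deploy", compliant=False))   # -> proceed (warn)
print(gate("prod-deploy", compliant=False))  # -> blocked
```

The design point is that every stage before production gives feedback without stopping work, which preserves the developer freedom discussed above while keeping the production gate objective.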
Yes? [Inaudible audience question about signature verification.]

Right, so in this example it's actually taking the byte array, signing it, and storing that in the database, so that when you pull the data from the database, you marshal it back into a byte array and compare it against the signature; then you determine whether it's matching.

[Inaudible audience question about the signing tool.]

Yeah, that's the Sigstore tool. It's an open source signing component that you can use. I believe it was developed by a lot of the folks at Chainguard. Any other questions? All right, well, thank you all, I appreciate it.
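The tamper-evidence mechanism described in that answer, sign the serialized attestation, store the signature, and re-verify after reading the data back, can be sketched like this. For brevity this uses a symmetric HMAC from Python's standard library; a real deployment such as Sigstore uses asymmetric keys and a transparency log (Rekor) instead:

```python
import hashlib
import hmac
import json

KEY = b"demo-signing-key"  # stand-in for a real signing key

def sign(attestation: dict) -> bytes:
    """Marshal the attestation to a canonical byte array and sign it."""
    payload = json.dumps(attestation, sort_keys=True).encode()
    return hmac.new(KEY, payload, hashlib.sha256).digest()

def verify(attestation: dict, signature: bytes) -> bool:
    """Re-marshal the stored data and compare against the stored signature."""
    return hmac.compare_digest(sign(attestation), signature)

att = {"event": "container-scan", "result": "pass"}
sig = sign(att)                # stored alongside the attestation

assert verify(att, sig)        # untouched data verifies
att["result"] = "fail"         # simulate tampering in the database
assert not verify(att, sig)    # signature no longer matches
print("tamper check works")
```

If the signature comparison fails at deploy time, the evidence is treated as untrusted, which is exactly the tamper-evidence property described in the talk.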