As we've discussed, compliance is becoming a really big deal, as if it wasn't already. We can expect new cybersecurity regulations in the wake of the recent well-publicized attacks. But compliance doesn't need to be such a huge incremental burden. When you use one platform for your end-to-end DevOps, you gain not only simplicity but also visibility and greater control over the entire software supply chain. The heart of this control lies in standardizing configurations, automating access, and other automatically applied guardrails, sometimes referred to as common controls. They're needed end-to-end, from planning your software to developing and testing it through deployment and ongoing use in production. So you can imagine I was excited to see this submission in our call for papers. It's always nice when someone breaks down a really complicated effort into an easy-to-remember mnemonic. Peter Madison is going to share his easy-to-remember TACO approach for securing your software pipelines. So over to Peter.

Hello and welcome to Securing Your Pipes with a TACO. I'm going to run through a quick introduction and then we'll get started. Let me start by giving you an outline of what we're going to talk about today. We're going to start off with an introduction. We're then going to talk a little bit about risk, how risk comes about, and the kinds of problems it creates. We're going to talk a bit about automating governance and how we go about building governance into our delivery pipelines. Then we're going to talk about TACO, the model that I used to help a couple of organizations understand and improve upon their current delivery methods. And then we're going to wrap up with a little Q&A. So who am I? I'm Peter Madison. I'm a coach and consultant, and I focus on helping organizations improve how they deliver value to customers. That's enough about me, though. Let's get into the material.
So in our fast-paced world, customers really demand instant gratification. We have devices we can pull up and get all the information we want in an instant. And so we've moved over time from a need to drive down the cost of technology, through investing in different technology capabilities, to where we are today, where instant gratification is driving the need to deliver value to customers. How do we get more value to the customer faster? How do we understand whether they're seeing value in what we're delivering? In order to do that, we need to be able to deliver faster and respond to changing needs. There are lots of tools that will help us accelerate our value delivery capabilities. But often we find that when we roll these out, efforts fail more from a lack of clarity than a lack of tools. There's miscommunication, lack of understanding across silos, and lots of problems like this that get in the way of us truly getting the value out of the tools we roll out. So where does this story begin? If we think about value delivery and how we accelerate it — I've been looking at this for a while, and a lot of the content around aligning security and compliance into DevOps pipelines that I've worked on has come from experience with some large regulated organizations, banks, helping them create better alignment between the security teams and the delivery teams. So if we look at change, we're accelerating the rate of change. We want to be able to deliver more and more change into our environment, because we need to do that in order to accelerate our value delivery capability. We want to deliver small, incremental changes so we can measure and understand feedback, determine how we're going to respond to customers' needs, and improve the products we're delivering. Which is easy, right?
It's so easy, in fact, that every organization out there has now done this and solved all of these problems. What we actually have to do is worry about aligning target architecture, regulation, risk, security, stability — all of these different areas — as we introduce these changes into our organization. Human beings are very, very adaptable, but we still need to go through change. And every time we go through any kind of change — especially when we're introducing new paradigms, new ways of working or thinking, new tools, new types of training, new things we need to learn — we typically go through a cycle: from ignoring it and not wanting anything to do with the change, to complaining that this change isn't going to work for me, to "please don't do this to me", to "well, now you've done it, the whole thing's going to fall apart", to eventually coming to some form of acceptance of that change. Different people will go through different changes in different ways, or even the same change in different ways. It's very much a consequence of our experience, where our mindset is, where our head's at, and what side of bed we got out on that day. So many of these things impact the way we handle and manage change. So really we look for ways of driving this change adoption curve down. We do that through different methods and models, understanding different perspectives, implementing different frameworks. All of this helps us adopt change more easily, because it gives us different ways of understanding: what does that change mean to me? So that sets the context: rapid rates of change are coming at us, and we know that as that rapid rate of change arrives, it introduces more risk.
It introduces risk to us individually. It introduces risk to our organizations, and we need different ways of understanding that. So going back to working with the banks: we had put all of these pieces in place, we'd built out a number of pipelines, we'd built out capabilities, and we'd started to look at how we work together. How do we respond to customers' needs? How do we ensure that delivery teams are getting the support they need in order to move forward? That was all very well and good, but what we found over time was that we started to hit a wall. The wall was that the organization was very resistant to having this rate of change occur within it. It didn't want to go through all of these changes so rapidly, and one of the big areas of resistance was very much in the area of GRC: governance, risk, and compliance. They're looking at the risk to the organization: is this something I can safely do? What happens if something goes wrong? What I really want to ensure is that I don't end up on the front page of the paper — how do I make sure that doesn't happen? If you start making lots and lots of changes very quickly across my environment, you're going to reduce my stability, things are going to get worse, it's going to be bad, stop doing that. And so this was very much a barrier we needed to overcome. So we started by going out and talking to the various teams. And I realized as I was working with these different areas across the bank that a lot of things were getting lost in translation. I was in a somewhat unique role in that I could go out and talk naturally to these different parts of the organization, and over the course of my career I've worked in lots of different areas.
So I could understand enough to communicate with the different groups about what their concerns were and where they were coming from, and ask questions about why they were seeing things this way. And what I was finding was that a lot of things were getting lost in translation. You'd have development worried about whether they were building the wrong things. You had testing worried about whether they were going to find things wrong in the different systems and how they would communicate that back. You've got operations, who are responsible for managing outages, and they don't want change coming at them that's going to cause even more outages. You've got security, worried about whether they were preventing the right things and showing value to the organization. You've got compliance, worrying about what's wrong. And then you've got architecture, worried that everybody's just building all the wrong things anyway. And leadership is largely just saying: all of you people are making so much noise, could you please stop? They really were trying to stop anything they saw as unnecessary change in the organization. So going through a set of conversations, I started to sit down, especially with security, compliance, and audit, and tried to understand: where are these concerns coming from? If we look at this from a DevOps implementation, we know we're introducing much more change, but we're introducing small, incremental changes, which from our perspective are actually going to be safer, because they're not going to be so disruptive. And we understand, of course, that in the complex systems we work with, we have no way of necessarily knowing, when we make a change, how disruptive it will be. But we know that if we make smaller changes, then the chance of impact is generally going to be smaller.
And if we put the right mindset in place, then maybe there are ways we can handle this. Of course, this was a lot for them to take in. So with all of that in mind, having spoken to these different areas and understood the sorts of things that were concerning them and why they might be worried, we started to look at how we could go about automating the governance practices. The first part of this is: what actually happens when there is a risk in the organization? How does somebody in the different areas determine what to do about it? The typical way of behaving here is: if I think something doesn't look right, I'm going to pick up the phone and call my compliance area and ask what I need to do. We'll get somebody at random, who will say, well, here's a standards document. It's 300 pages; you're on page 189; look up section 3.4.3, see how you might apply that to your context, and see whether it's relevant. And a lot of things are broken with this process. One part of it is that the person you talk to is rarely the same person, and they don't really understand the problem you're trying to solve. This is especially hard in fast-moving environments like cloud, where the documentation may not even be correct for the types of changes you're going to have to make. So how do I tell whether this is even relevant to me? Does this actually make us any safer? Do we actually feel like we're getting safer as a consequence of making that phone call? Have I had my questions answered, and am I actually safe to move forward? The problem is that this drives things down to a default: if I can't understand it, if I can't get the answer I'm looking for, then my default ends up being no, and that is very often not the right thing from a moving-forward and learning perspective.
So let's take a step back and think: if my audit, compliance, and security teams are concerned about all of this speed, what can we do to help them understand what's happening? How can we communicate with them better, to show them what's occurring when I actually push code out at this speed? What we find — and this is taken from the IT Revolution white paper — is that there's a nice way of breaking down and considering the main areas across a pipeline where we can put in controls and protections to help us understand how this works. One of the first things we notice is that security is quite often not involved until we get to the final check: we're going to validate that you're doing things correctly when it gets to production, we're going to run all of our pen tests here, we're going to try to break into that system. We've got a fairly good understanding of that. But the real issue is that we actually need to introduce security much earlier on. We need security checks all through our entire pipeline, all the way from when we check in code. How do we check in code? How good is the code, and how are we educating our developers in secure coding practices? Are we looking at different ways of helping ensure that the code is secure before it even makes it into the build process? And especially when we start to look at things like supply chain vulnerabilities, are we considering our dependency management? How are we ensuring that libraries are coming from known safe sources, and what sorts of things are we putting into place to make sure that happens? And so when we get to actually running the pipeline, we also need to think that, with all of these checks baked in, we can start to work in a way where, as a delivery team, we can define the work.
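To make the "known safe sources" idea concrete, here is a minimal sketch of what a dependency policy check in a pipeline might look like: every dependency must come from an allowlisted source and carry a pinned hash. The registry name and the manifest format are illustrative assumptions, not anything from a specific tool.

```python
# Hypothetical dependency allowlist check; the internal mirror name and
# manifest shape are illustrative, not real project data.
ALLOWED_SOURCES = {"registry.internal.example"}

def check_dependency(dep: dict) -> list:
    """Return a list of policy violations for a single dependency record."""
    problems = []
    if dep.get("source") not in ALLOWED_SOURCES:
        problems.append(f"{dep['name']}: source {dep.get('source')!r} not on allowlist")
    if not dep.get("sha256"):
        problems.append(f"{dep['name']}: no pinned sha256 hash")
    return problems

def check_manifest(deps: list) -> list:
    """Run the policy check across a whole manifest; empty list means pass."""
    violations = []
    for dep in deps:
        violations.extend(check_dependency(dep))
    return violations
```

A check like this would run before the build step, so a dependency from an unknown source fails the pipeline before it ever reaches an artifact.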
We can check it in as code. We can build it. We can then run all of these tests on it and get the build results. That gives us a point where we can validate that things are looking like they're going to work. Once we're comfortable with that, we can generate the artifacts that will move through the rest of the process. And so this was how we were setting things up: to ensure that as we moved through the system, we were separating out organizational tests and concerns, so that those were always going to get run for a given change. It's about understanding the risk of the thing you're moving through the system: how much attention do I need to pay to this? Of course, this is often a difficult thing to do. We have to make judgment calls based on what we understand at the time, which is again where we need better context about what I should be using to measure my risk. One of the other key pieces here is that the artifact we generate should always be the artifact we deploy into all subsequent environments. We don't want to be rebuilding artifacts for different environments. It's also important to note that, as we move forward, we can start to trust our automation more. We don't necessarily want to start from the point where we fully trust everything we've done — not until we've understood and learned what exactly it is we're tracking here. One of the other great benefits of all of this is that once we've automated these controls into the pipeline, we've got a vast source of information we can pull back and radiate, so we can see what's occurring in our pipelines. This gives us information we can provide to others. And once we automate this, we can create information to give to the auditors — give audit what they need in order to understand what's going on.
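The "build once, deploy the same artifact everywhere" rule is easy to enforce mechanically: record the artifact's digest at build time, and refuse to deploy anything whose digest doesn't match. This is a minimal sketch of that idea, assuming a simple bytes-in artifact; any real pipeline would store the recorded digest alongside the artifact in its repository.

```python
import hashlib

def digest(artifact_bytes: bytes) -> str:
    """Compute the sha256 digest at build time and record it with the artifact."""
    return hashlib.sha256(artifact_bytes).hexdigest()

def verify_before_deploy(artifact_bytes: bytes, recorded_digest: str, env: str) -> None:
    """Refuse to deploy if the artifact differs from the one that was built and tested."""
    actual = digest(artifact_bytes)
    if actual != recorded_digest:
        raise RuntimeError(
            f"artifact drift detected deploying to {env}: {actual} != {recorded_digest}"
        )
```

The same check runs before every environment — test, staging, production — so a rebuilt or tampered artifact can never slip into a later stage.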
I've done this in the past across all the systems: if we work out what automatable controls we have, and I understand the status of every system in my environment, I can pull back this data, radiate it, and then give the auditors a report on a daily basis that says: this is the status and compliance of every machine in my environment for the last 365 days. This idea of continuous compliance — being able to understand, at the moment we make a change, whether we're still maintaining compliance — is what helps us get safer, and it gives us the information we need to make the right decisions about risk as we move forward. So when we talk about automating governance into our delivery pipelines, it sounds like a lot of work, but there are some basic rules we should really be considering. One: it's not about a checklist; it's about collaboratively creating safety. It's not about how we keep audit off our backs. It's about how we collaborate with the compliance team, the security team, and the auditors to understand what things make our environment safer, and what we can do to radiate that information — to ensure that we're exposing the right things, looking at the right problems, and going in the right direction. We also want to make sure we're not boiling the ocean: you can start small, get one team working, and grow from there. And ensure that we engage leaders and focus on the conversation, not on the tooling. It really is about making sure that people are on the same page about how these pipelines work.
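The daily report for auditors described above is, at its core, an aggregation over automated control results. This sketch shows one plausible shape for it — the record fields and statuses are assumptions for illustration, not a real reporting schema.

```python
from collections import Counter

def compliance_report(results: list) -> dict:
    """Summarize automated control results (one record per machine check)
    into a report that can be radiated daily to auditors."""
    by_status = Counter(r["status"] for r in results)
    failing = sorted({r["machine"] for r in results if r["status"] != "pass"})
    total = len(results)
    return {
        "total_checks": total,
        "compliant": by_status.get("pass", 0),
        "non_compliant_machines": failing,
        "compliance_rate": by_status.get("pass", 0) / total if total else 1.0,
    }
```

Run daily and archived, a report like this is exactly the "status and compliance of every machine for the last 365 days" that replaces the once-a-year audit scramble.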
So: we've gone through the rapid rate of change, and how value delivery — the need to deliver small incremental changes and learn from them — is really changing the way we do software delivery. We've gone through the concerns around risk, and how we want to understand risk so we can properly expose it, and how we can automate that into our pipelines and what that then enables us to do. Now let's talk a little bit about TACO, where it came from, and how it helps with this. One of the things we find when we start to automate these pieces into the pipelines is that what we end up with is this idea of a paved road: here is a way of doing this that has a lot of these controls all baked into it already. If you come and use this standardized pipeline, we've already built all of these into it. This isn't going to work for everybody. You're going to have parts of your organization that are moving faster than any centralized team can possibly keep up with. So you need to make sure this is an extensible platform, one that they can contribute to and that is continually evolving. And you want to make sure you're not holding people back. You need a way of being able to say to them: it's fine if you want to go faster than we can build. The paved-road pipelines have all of the controls built in, and we want to help you as much as we can; but if you can't wait for us to get everything built out, and you don't have time to work with us to make that happen, then these are the things you need to make sure happen on any kind of pipeline you deploy. Then you have a path forward, and we're not holding you back. One of the important reasons for this is that roadmaps are not static. Roadmaps need to continually evolve; we need to be reevaluating them all the time.
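That escape hatch — "go faster if you must, but these things have to happen on any pipeline you deploy" — can itself be automated as a gate. This is a speculative sketch under assumed names: the mandatory control list is invented for illustration, and a real platform would check actual pipeline configuration rather than a declared set.

```python
# Hypothetical list of controls every pipeline must implement,
# paved road or custom; names are illustrative only.
MANDATORY_CONTROLS = {
    "static_analysis",
    "dependency_scan",
    "artifact_signing",
    "deploy_approval_log",
}

def missing_controls(declared_controls: set) -> set:
    """Return the mandatory controls a custom pipeline has not declared.
    An empty set means the team is free to proceed on its own pipeline."""
    return MANDATORY_CONTROLS - declared_controls
```

A team that can't wait for the paved road runs this gate against their own pipeline and gets a concrete, short list of what is still required, instead of a 300-page standards document.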
We need to consider that you might want to go down a particular direction, or take a longer path, just because there's an opportunity to learn along the way. Maybe we want to try something out, and we run it as an experiment so that we can better understand where we might be going and what direction we're heading in. So if we start to look at how we might model this to help us — remember the resistance we hit? We needed a way of communicating across these different areas that, yes, indeed, when we're building out these pipelines, we are going to do the right thing. We are going to make sure we're taking care of all of our concerns, so that everybody can come together and agree that this is what a secure pipeline should look like. To help with that, we built out the TACO model. It comes down to Traceability — identifying what happens in the pipe; Access — securing the delivery process; Compliance — validating the payload in the pipeline; and Operations — how it's running, and validating the target. On top of that, we laid down a set of controls to ensure we could validate each of these areas. This gave teams an easy mnemonic to remember: have I taken care of TACO? Have I looked at all of the different things I need to concern myself with as I build out pipelines? My daughter drew a lovely picture to help explain it. We presented the first version of this in a very simplistic way, starting from the purpose: what is the control we need, what is the artifact we'll generate to track it, where is it stored, what happens when the control passes, what happens when it fails, and who's going to own it. And we used this for each of the different controls we had defined in TACO.
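The per-control questions listed above — purpose, control, artifact, where it's stored, pass and fail behavior, owner — map naturally onto a simple record. This is a sketch of that structure with one invented example; the field values are illustrative, not from the original TACO spreadsheet.

```python
from dataclasses import dataclass

@dataclass
class ControlRecord:
    """One row of a TACO-style control definition, mirroring the
    questions asked for each control in the model."""
    category: str   # which TACO area: Traceability, Access, Compliance, Operations
    purpose: str    # why the control exists
    control: str    # what is actually checked
    artifact: str   # evidence generated when the control runs
    stored_in: str  # where that evidence lives
    on_pass: str    # what happens when the control passes
    on_fail: str    # what happens when it fails
    owner: str      # who owns the control

# Illustrative example record, not taken from any real organization.
example = ControlRecord(
    category="Traceability",
    purpose="Know exactly which commit produced each deployed artifact",
    control="Every build is tagged with its source commit SHA",
    artifact="Signed build metadata record",
    stored_in="Artifact repository",
    on_pass="Artifact promoted to the next stage",
    on_fail="Build blocked and the owning team notified",
    owner="Platform engineering",
)
```

A collection of these records is what teams can self-assess against, and what the pipeline can emit evidence for automatically.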
That in turn allowed us to visualize and give teams an idea of how many of these things they had successfully built into their processes. People could come and self-assess, put that into the spreadsheet, and it would help guide them through what they needed to do in their context in order to be successful. Capital One has a similar sort of process — the links to some of these are in the resources — in that they have a set of 16 things that you need to ensure every pipeline has in order to push through into production. So that's all about pipelines and how we execute them, but we've also got to remember that there's more than that in our risk portfolio. When delivery teams are delivering, no matter how many controls or tools or process pieces we put into place, that's never going to capture everything we need to. The most important thing we need to create within our organization to make it safer is a culture of psychological safety. We need a culture where it is normal, expected, and accepted that people can speak up if they run into something that is a risk — that they can raise their hands and say, hey, this just doesn't look right, we need to do something about this. If we don't have this culture, then messengers get shot, and what happens is that very quickly people will sweep things under the rug and cover things up. That is largely what happens in a lot of audit situations, where the auditors come in and there's a scramble, once a year, to make everything look pretty enough to satisfy the audit requirements — rather than continually looking at, assessing, and thinking about: are we doing the right things? Are we being safe? Are we handling and managing risk correctly in the way we do our delivery?
One company that does an awesome job of this — and I encourage people to look up their videos and their material online — is Nationwide. They follow a lot of the principles and practices talked about in John Smart's BVSSH work, Sooner Safer Happier: this idea of continual safety teams aligned to the value streams, generating tickets that go into the backlog to understand control events, as well as a repository of context-specific stories and events, written in a way that delivery teams can easily understand and access, so they know what to do when they encounter something they believe might be a risk. And of course, the other advantage is that they can pull this out and measure it, and use those measurements to see what's going well. Are we focusing on the right areas? Are we helping teams understand? Are we helping identify and solve problems? So it's not really about DevOps, and it's not really about DevSecOps in this respect. It's really — and this is again borrowing from John Smart — about risk-dev-risk-ops-risk: the idea that the management of risk needs to be embedded into our entire delivery value stream. We need to think about it from beginning to end, and how it impacts our decisions. So: we've gone through the idea that value delivery is accelerating the way we work and the amount of change in our environments. We've talked about risk, about how we can automate governance, and about how that automation of governance then needs to be shared out across the organization. And if we use a mnemonic like TACO to help people remember and understand what they need to do, then we can break down the different ways of working with it. So, to wrap up: we can't solve problems the same way we've always done before.
Securing our delivery pipelines requires us to think about how we're working. How are we working together? What is safety? How are we going to make all of this work for us, in our context, in our environment? Here are a number of really good resources that I highly recommend people look at and read through, to understand some of the wonderful work going on in this space. Let's quickly review what we've talked about. We've talked about a way to create a common understanding of what a good pipeline is: if you follow TACO, then either as an organization you set up these paved roads — standardized pipelines that abide by the automated governance principles — or you've at least created a shared understanding of what a good, working pipeline is, with everything baked into it that we all agree, as an organization, is sufficient for us to push value through to our customers. Safety is about behavior, not just the tools; it's about having the right conversations between the different groups and understanding what that means. And it's also about looking at different ways to automate software delivery compliance, and we've talked about some of those as well. And with that, I'll leave you. There's a very short survey at the end. Thank you very much.