Thanks everyone for joining us. We're joined here by Peter Madison. For those of you who don't know Peter, he's all the way across the world, coming to us live from Toronto. Peter is the CEO of Zodiac, and he's here to talk to us about securing your piece of the TACO. Thank you so much for joining us today, Peter.

Thank you for the introduction. So I'm Peter Madison, and this talk is one that I've given a few times at various conferences, both agile and DevOps, and a security conference too. I'm going to start by running through a little bit about me. I'm a coach, consultant, and founder; Zodiac is a company we set up to go in and help organizations through their transformation processes. My background is something like twenty-odd years in infrastructure and operations, and I primarily focus on delivery: helping organizations work out how to get better and more efficient at delivering code and working with their customers. I've done a lot of work in building high-performance teams, and a variety of different things, from rolling out global identity systems to moving data centers, in my past life as an infrastructure and operations guy. I like to start these things with a quote, and this is one of my favorites, from Socrates: "I can't teach anybody anything; I can only make them think." That really speaks to what we're looking to do here, which is to create an environment where people are looking to question, looking to learn, and looking to understand what's happening in their processes so that they can start to improve them, rather than just having somebody come in and tell them, this is exactly how you should do it. So on the agenda today, we're going to run through the problems we ran into, the problem we were looking to solve, and how we ended up going about that.
And what I'm going to tell today is a story, particularly related to one engagement, which led on into other engagements where we used similar strategies. Then we'll run through some pieces around how you can take this model, make it work for you, and apply it to problems that you have: the kinds of things you need to look for, and what we learned along the way. When we talk about the problem here, the main problem this talk is about is organizational dynamics: the difficulty of taking a new concept, bringing it in, and overcoming organizational resistance. It started when I was engaged in one of my first large consulting engagements. A bank invited me in; they'd been on their journey for about two years. They had a DevOps group as part of a transformation org, it had all been set up, and lots of the big consultancies had come in and organized all of this. And they were sitting there going, well, I'm not sure this is going exactly like we want. We'd like to understand a little more about whether we're going in the right direction. So I sat down with them and said, how can I help? And they said, could you come in, look at what we've done, and tell us if we're on the right path and whether our strategies are going to get us to what we're looking to do? I said, sure, if you pay me, I'll come in and help you. The initial contract was for two months. At the end of those two months, I took a look at what they'd been doing: the kinds of things they'd been putting into place, the structures and frameworks they'd created, and the way they were engaging with the development teams. They weren't getting the kind of uptake for the things they were trying to do, and, I mean, the setup was an anti-pattern in itself.
The DevOps group, although they were well meaning, well set up, and definitely had some good outcomes they were looking to generate, weren't really driving the kind of behaviors they wanted into the organization. So at the end of the two months, I said, no, you're not doing it right. There are lots of other things you could be doing, so let's talk about what some of those things might look like. And they didn't kick me out on the street. They said, thank you for telling us we could do better, and they kept me around a little longer. In fact, a couple of years longer. Now, we understand that when we're going in and trying to do modern software delivery, we need fast, incremental delivery of value, and this requires making lots and lots of changes to those target environments. When you come to a large organization like a bank, the legacy, the technical debt, the complexity of the environment, and I often describe that as the complexity debt of the environment, is very high, and you've got a lot of problems to overcome. Now, when you start to introduce changes, you run into the problem that the organization is very risk averse. They don't want to make changes; they will do their utmost to avoid them. But at least this organization had grasped the idea that they needed to make this change. They wanted to adopt DevOps, and they'd set up this group inside the organization to be the proponent of these ideas. So they had this concept of DevOps, and essentially this is the way I like to describe it: people, process, and tools working together to deliver rapid and continuous delivery of value to customers. And we came to this common understanding, because one part of the problem they were seeing is that they weren't all on the same page as to what they were doing there.
The group itself had been pulled from various different parts of the organization, and many of them were still operating from a different mindset. So one of the first things I did was help them come to agreement on what their definition of DevOps would be. We came up with a definition and used it as an anchor point for the conversation. Next, I helped them work out what DevOps meant in practice: starting from customers and the various channels, building out the delivery teams, product owners taking stories and breaking them down into different pieces, and then the team members taking those stories and executing them with whatever practices they need. And we turn everything into code. Everything has to be code; it all has to be versioned. We then start to build out automated pipelines: we build, we test, we deploy, we create products, we build out our feedback loops, and we start to learn from our customers. And then we measure, and use those metrics to guide what the process looks like. So I built this all out to give them a generalized understanding, a high-level view of what we're trying to do and what we're trying to achieve, because visualizations always seem to help with these kinds of ideas. And so, well, this is easy, right? Got it. We can do this across the board. A funny story from an earlier company I worked with: they actually gave out these "easy" buttons to the whole department, and when you pressed one, it said, "That was easy." Very, very quickly the batteries from all of these devices disappeared, because, well, it's very distracting to suddenly hear "That was easy" across the office. So we realized it wasn't going to be that easy.
At this point I was helping them only with strategy, and they'd been at this a couple of years. So I started with the methods I'd used before in other places to roll this out; there was an enthusiasm gap we needed to overcome. I started by drawing a bunch of pictures for them. I created some design artifacts: how can we build automated pipelines to overcome common problems? How can we show people what we're trying to do? How can we radiate that out to the organization? How do we organize this group that is trying to propagate change into the organization, so that it provides services that will help the organization change? We also set up accelerator programs, onboarding people to help them understand patterns and learn some of these pieces, so that we could find champions and start to roll things out. A lot of those things were effective, but they weren't generating all of the results we were hoping for. We found we were hitting a wall: the organization was very, very large, it wasn't easy to change, and there was resistance from lots and lots of different areas. We'd say, well, look, we've got a better way, we can automate all these pieces, and this pipeline we're creating is just fantastic, you should come do things our way. And that's where we were hitting the wall, because one of the problems we were having is that we were getting lost in translation. This is the Rosetta Stone, and this may be lost in translation too, but the Rosetta Stone was found in Egypt, I believe, and it's in the British Museum now. It's a stone with the same text written in three different scripts.
And it was critical in allowing us to learn how to translate between Ancient Greek, Demotic, and Egyptian hieroglyphs. It's not a very interesting tablet in itself, it's basically an administrative decree, but it allowed scholars to work out how to translate between those scripts. That was the problem I saw us running into. The developers had their view of what things should look like, and they didn't want to come and use our pipeline. They wanted to do things themselves; they wanted to discover how to do these pieces, and having to come to us to ask for changes was way too slow. That was not working at all for them. The testing teams were off in their own world, and though we had some pieces to help automate testing, we had to kind of bring them to us. There were some horrible politics around it too: there was a VP of QA versus the DevOps test automation, and that VP of QA just did not see eye to eye with what we were looking to do. Operations wanted to help, but the leadership there was very much, hey, we don't want any changes going in. Our environments are highly complex and fragile. Don't come and touch us, don't talk to us, we don't want to help you. That was one area where, having previously done some work with this particular organization, I had some advantages: I knew some of the folks in operations and managed to break down some of those barriers. Security was like, hey, what are you doing? Why are you making changes? Why are you doing this so fast? They were not happy with it. No, no, no, slow down, you've got to make sure you've got all these pieces in place. As was compliance. Compliance was like, well, I don't know what you're doing, and I don't understand how that impacts the risk of the organization.
So how can I ensure that the right rules are being abided by and that we're doing the right things? And architecture, which in this large organization was a body unto itself, was very much asking, how do I know that if you allow these groups to make rapid changes in these different areas, they'll make them in line with our architectures, our vision for where things should go? This organization had some very particular architectural holdbacks. Enterprise architecture owned these product codes, and the product codes dictated who could do what. It had this interesting effect: it was a categorization of applications, but in order to do anything, to be given money, to be funded, you needed a product code, and getting a product code meant going to beg architecture for one. The problem was that the product code didn't fit this idea of, hey, if we're going to roll out microservices, does every microservice get its own product code? That wouldn't make sense if we've got a thousand of them going up and down; we can't have product codes being generated on the fly. That doesn't work. So how do we fund these things? The architectural problems extended into finance: how do we organize this? All of these different areas were talking different languages. They all had different requirements; they all wanted to approach the problem their own way. And here we were, turning up and saying, actually, we don't want to do any of these things that way. We want to automate all of it and have all of you come together and work together to create the ability for us to deliver value faster to customers. So what now? We have this problem: how can we overcome this technical debt? How can we overcome this human debt, all the politics going on?
And this general complexity, the complexity of the organization, was getting in the way of adopting the practices we were putting forward. So what I did next: there had to be a way of getting all these different areas on the same page. If I come in and present things in the normal DevOps manner, well, they've read the books, they've seen this. They don't relate to me putting those diagrams in front of them or just telling them to go read a book. What I needed was for them to come to the table, see what we could do, and have things presented in a way they would understand. So I wanted to create something that was easy for them to understand, that could be presented in a way that groups like security, compliance, and audit would just get, and that could be co-created with them, so that they had input, they were engaged, and they owned the solution we were going to put together. Through these conversations, I went and started to meet with security and architecture and these different areas, making friends as high up the ladder as I could get, to find out what people were thinking and where some of these concerns were coming from. And I started to pull out some stories of how what we were creating wasn't meeting their needs. Stories like "consistency is king": a rebuilt artifact should have been deployed into production through the automation, but because the automation had failed, the deployment team had come in and manually deployed onto the target servers. Unfortunately, that had then resulted in a further outage, and because of the manual deploy, we had no real idea at the time what the actual state of the target systems was. We had to track down the person who had done those deploys, find out exactly what they had done, and then backtrace from there.
We didn't really have good enough traceability of which artifact was going to which system and how it was being deployed, which was a great learning opportunity. So I was listening to these stories, and I was like, OK, I've run into some of these problems in the past. What can we do to help them solve this? So we said, well, and at the time this made sense, though there are other ways I'd potentially do it now: we're rolling out Jira for the delivery teams, so we can create hooks back into that so they can see what happened, and that satisfies the need for the chain of custody the compliance guys are asking for. We can start to version the pipeline components, because our pipelines are code: we've codified the pipelines themselves, they exist within the repositories we have, so let's make sure we've versioned all of those pieces. And finally, we can automate back into the change system. So we did these parts, and people liked it, and we were moving forward, and they thought that was pretty good. And I thought, that's kind of interesting. All of those pieces have something to do with this idea of traceability, this idea that we can see what happened. We need to be able to audit what's happening in the pipeline. Audit, now that sounds like something those people over there are interested in. Let's see what we can do for them. So I took these ideas, generalized them, and called them traceability. You can see where this is going, right? So I carried on with these conversations, speaking to other parts of the organization and pulling out other stories.
And we found that one of the biggest complaints we were getting from the development teams about using our pipelines was around getting access to credentials. Getting a new Active Directory ID or creating new secrets meant submitting a ticket into operations, and operations would, in their own good time, eventually get around to providing those credentials. Obviously, this was far from ideal, and we needed to fix it. So we started working with infrastructure engineering. We decided to develop some APIs, and we used those APIs to automate the creation of credentials. We also set up some POCs and started to investigate solutions like HashiCorp Vault to manage those secrets. And because compliance was asking, well, what if people go and store these secrets in the repositories, we also set up automated scanning to look for secrets within those repositories. So, OK, this is good. We're starting to create a lot of things which are all related to access within the pipeline: who can do what within the pipeline, and how does this work? We started to overcome some of the organizational complexity and problems teams were running into. OK, I'm going to group these things together and call them access: all of the things we're doing around governing access to the pipeline and the code that's going through it. Next, we started running into some other issues. There was a cloud transformation going on too, which was probably their fourth attempt at moving to the cloud, and that is another presentation unto itself. The DevOps team was asked by compliance: what are you doing to ensure that anybody who is doing DevOps in the organization is compliant with what we need to put into the cloud?
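As an aside, the secret-scanning idea mentioned above can be sketched in a few lines. The patterns below are illustrative only, not a real ruleset; a production scanner would run as a CI job or pre-receive hook with a much larger, maintained set of detectors.

```python
import re

# Illustrative patterns only -- real scanners ship hundreds of maintained rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                         # AWS access key id shape
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]{8,}['\"]"),  # hardcoded password literal
    re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def scan_text(text):
    """Return the 1-based line numbers that appear to contain a secret."""
    return [
        lineno
        for lineno, line in enumerate(text.splitlines(), start=1)
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]
```

Run over every file in a repository, a check like this gives compliance a concrete answer to "what if people store secrets in the repos?"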
And we want people who are going to the cloud to be using DevOps practices; there was a lot of use of the word DevOps to align with the strategy. So I went and spoke to the large team of consultants that had been brought in to create this cloud standard, which I very quickly realized was way, way too early, because they were creating a standard that would boil the ocean before they had even worked out how to understand, learn, and deploy the simplest thing. To me, it felt like they were getting a little ahead of themselves. But I said, OK, if I had a compliance list, a list of things we make sure we care about when we deliver code, would that help you? They said, yeah, OK, that's kind of interesting. So let me think about what sort of things we could do there. We'd talked about standardizing our pipelines and components: we wanted to create pieces that could be consumed by delivery teams to build their pipelines, so that we were solving the problems the delivery teams actually had, and by standardizing those components we would potentially help them solve different problems. I'll get to why this is a kind of anti-pattern in itself later on, but it is something that allowed us to learn, and that's really what we were looking to do. We also automated SAST scanning through our CI tooling, so it would go out, do automated scans, and provide that information back to the development teams. One of the things we saw there, especially on some of the more legacy code starting to get pushed through these pipelines, was huge numbers of vulnerabilities. It was not tenable to take that code and say, well, we're going to break the pipeline if there are any vulnerabilities; that wasn't going to work.
What we needed to do was provide them with a path forward: if you've got 10,000 vulnerabilities now, you can continue to push as long as the number of vulnerabilities, the delta, doesn't go up. And we're going to set you targets to reduce that over a period of time. We want to see that 10,000 go down to 9,000, down to 8,000, but we don't want to see new vulnerabilities. That worked reasonably well as a way of describing what we wanted to see happen, and we could automate it into the tooling we had and radiate it back to the delivery team. We also worked on ensuring that security wasn't an afterthought but a critical part of looking at what went through the pipeline, and started to work with the delivery teams in that space, and with security on their side of it too. So that's what I gathered together as compliance: the different practices necessary to ensure that the standards were aligned with what the pipelines we were building would actually deliver. This meant going and talking to compliance and making sure they were on the same page as us as to what that would look like. And then some of the last conversations were interesting, and these were around operations. I turned up at a meeting where I'd been asked by a team that was interested in adopting some of the tooling and capabilities we were putting into the organization. This part of the organization was like, oh, this is cool, could you let us have access to some of these things? And I said, yeah, what can I do to help you do that? Well, can you come and present to the team what these practices might look like, how they might use them, what kinds of tools they could bring in, and how they might want to think differently? Sure.
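The vulnerability ratchet described a moment ago, push allowed as long as the count doesn't rise, with targets to drive it down over time, is simple to encode. Here is a sketch; the status names and the idea of an optional reduction target are illustrative, not the bank's actual tooling.

```python
def vulnerability_gate(baseline, current, target=None):
    """Ratchet policy: never let the vulnerability count rise above the
    baseline; optionally warn while it is still above an agreed target."""
    if current > baseline:
        return "fail"  # new vulnerabilities introduced -> break the pipeline
    if target is not None and current > target:
        return "warn"  # trending down, but not yet at the agreed target
    return "pass"
```

Wired into the CI scan step, this lets legacy code keep shipping while making any regression immediately visible.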
So I came in and turned up for this meeting, and the team came in looking rather dejected and tired, all clutching coffee, and the leader said, I'm afraid we can't talk about this today. We had a big outage. A release didn't go well. It was a disaster; we were up all night. We're going to run a retrospective instead. So I said, OK, do you mind if I stay and listen? That way I can learn a little about the team, so that when I come and present, maybe we can help. So I sat there and listened as they went through their problem analysis of what happened. Something that caught me about what they were describing was that at around one in the morning, they had had to wait about two hours to verify that the system was up. So I raised my hand: can I ask a quick question? How do you know now that the system is working? It's two o'clock in the afternoon. How do you know that the system you deployed into production, the system this team owns, is working right now? And they looked at me blankly. Well, do you have access to telemetry? Log information? And, oh, well, actually, yeah, I think one of the developers did. And so we started the conversation around what we really needed to do. One part of what came out of that was monitoring for all. It turned out that only the operations group, or the people designated as operations, had access to the monitoring. Well, why? The team needs to be able to see what's happening in the environment they're deploying to. We need that feedback; we need to create those feedback loops. So we started to open that up. We worked out what we could do to overcome the licensing restrictions, changed the deployment models, and asked whether we had the right solutions in place.
I started some of those conversations with the parts of operations responsible for that. I also encouraged automated validation of the target environments, because this was one of the things causing the deployments to go wrong. This is where Serverspec came in, and getting them to understand that there's a difference here. We were using Ansible for a lot of the configuration automation of the target environments. If we fire off Ansible and configure what that target environment looks like, that just tells me what Ansible did. It doesn't tell me whether something changed afterwards, or what the target state of the environment actually is. These two things are different. I also need to validate that the environment is in the state I expect it to be in, because if Ansible made a set of changes, but somebody or something else had caused a different change, say patching had made something else different about the environment that the Ansible script wasn't taking care of, I needed to ensure that the target state was actually what I wanted before the deployment went ahead. So getting them to look at both sides of this and incorporate Serverspec into their automation process was helpful. And to be fair, part of the problem here was that there was some hand-bombing of target environments going on, which we also started to pull back: OK, stop touching things by hand. Which is a tricky thing to do, of course. And then there was that QA problem with the VP. Through some political machinations, we managed to persuade the VP that they might be better suited focusing on other problems, which allowed us to build out some really interesting automated test framework pieces.
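The Ansible-versus-actual-state distinction above boils down to comparing desired state against observed state, which is what Serverspec does for real hosts. A toy sketch of the idea (real checks would inspect packages, services, and listening ports rather than dictionaries):

```python
def find_drift(expected, observed):
    """Compare the desired target state against what is actually observed
    on the host; return a list of human-readable drift findings."""
    drift = []
    for key, want in sorted(expected.items()):
        got = observed.get(key, "<missing>")
        if got != want:
            drift.append(f"{key}: expected {want!r}, found {got!r}")
    return drift
```

An empty result means the environment matches the desired state, regardless of what the last configuration run claimed to have done.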
This provided developers with a common BDD-type interface into Selenium and some other tools, in a way that made it very easy for them to integrate it into their repositories. It was a phenomenal success and got a lot of uptake from the organization. It was called SIFT, and amusingly, it was supposed to stand for Simple Integration For Testing, but whoever named it apparently forgot the Testing, which I thought was funny. So: traceability, access, compliance, and operations. OK, so I've got these different areas, these groupings of things I need to make sure I care about in every pipeline we build. We've got these problems we're seeing and overcoming, and these are the concerns that the different areas, operations, compliance, security, were raising about what's happening in the organization. So if I create something that corresponds to this, why don't I call it something like TACO? I was trying to explain this to my kids, and my daughter very helpfully drew this picture. If you email me afterwards, I will happily send you a copy; she's very proud of her artwork, so much so that she redrew it for me as she got older. But obviously I couldn't use that with the organization. So how do we model this? What is this TACO? What I was looking to do here was create something the organization could easily consume and adopt, something they would be able to take on board. It met those criteria: something they could consume easily, presented in a manner they could understand, that would resonate with those different areas. And as you'll see in a minute, that's what I came up with. So I laid it out like this. First of all, we want traceability: we want to identify what happens along the pipelines.
We want to make sure the pipelines are audited, so that we understand the chain of custody, we understand what happened. This isn't about having somebody manually go in and check all these pieces; it's about understanding, so that, from an operations perspective, if something goes wrong, we can more quickly understand where it went wrong. This becomes much easier with automation. But in the somewhat complex environments they had here, with lots of legacy systems and COTS pieces as well as new pieces being integrated, creating that end-to-end traceability across those disparate systems was necessary for us to see what's going on. As I said earlier, we used Jira at the time to trace everything back and create that chain of results. We also exposed test results. This was about breaking down silos: testing and development were operating very much independently, and development didn't really have access to test results; they just got a report that things weren't working. So we said, OK, we need to change the way testing is done. For a pipeline to be compliant with our model, the test results had to be visible to the delivery teams. We want everybody doing the delivery to be able to see them; everybody on the pipeline has to be able to see this. If you've got handoffs on the pipeline to a different group who do some testing and then tell you whether it works or not, that's not going to be compliant, and it won't be sufficient for you to move forward. And we ensured that the deployed version is tracked, so we know what version is going where; that solves the problem I described earlier, where somebody manually changed things. I want to know what's there, how it's behaving, and that the changes are recorded.
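That end-to-end chain, work item to commit to build to deployment, amounts to keeping linked records and being able to walk them backwards. A minimal sketch; the record fields and Jira-style work-item key are assumptions for illustration, not the bank's actual schema:

```python
def trace_deployment(deployment, builds, commits):
    """Walk a deployment record back to the build that produced the artifact
    and the commit that went into the build; a missing link raises KeyError,
    which is exactly the 'broken chain of custody' condition."""
    build = builds[deployment["build_id"]]
    commit = commits[build["commit_sha"]]
    return {
        "environment": deployment["environment"],
        "artifact": build["artifact_version"],
        "commit": build["commit_sha"],
        "work_item": commit["work_item"],  # e.g. a Jira issue key
    }
```

Given any running version, this answers the auditor's question "what is this, and where did it come from?" in one lookup chain.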
Then we took access: source code management, creators being tracked, build once deploy many, and making sure that things only go into those target environments through the pipelines. We set compliance around peer reviews, scanning the code, scanning the artifacts, and managing the data. And operations was around validating the target environment, validating the quality of what got delivered through various checks, and then checking that it works and watching it live. So this created a list of about 16 things we wanted to see happen on the pipeline. And this made it easy for us to go back to these other areas and say, OK, these are the things we're going to make sure happen on every single pipeline we build, and we're going to make sure we validate them. So I've got my auditing of these to make sure they happen, by putting in another set of validations to confirm that all of these things are actually happening. I'm self-auditing: I'm building automated compliance into my pipelines to validate that the pipeline is doing what I've told everybody else it's doing. So I took this and asked, OK, how do we typically present this to the auditors and compliance? They have these big spreadsheets. OK, I'll present it to them in a big spreadsheet. So I created a big spreadsheet, and if you do the survey, I'll leave you a copy, and I'll have it up online. It's very simple; it's just how they like things presented. But we include the purpose of what we're doing, because this is very often missed: why are we doing this check? What is the reason we're being asked to do it? What is the control we're putting into place that satisfies it?
What is the artifact that will be created to validate it? Where is it going to be stored? What happens if that control passes, what happens if it fails, and who owns it? Then we built this out, and for every pipeline we built out how each control would behave for the different types of pipeline, so that we now had a model we could work to, to say: have we verified all the things that the organization cares about? And we presented this in the way they were looking to receive the information. Then you can visualize it, because we've got our different categories: I can give each pipeline a score and say, hey, look, this is how compliant it is, and we want to make this a nice diamond shape. This kind of worked; at the very least it gave us the ability to have a conversation that we weren't having before, and it was really the opener to that conversation. It helped that the security team was creating some new policies at the time, and this gave me an opening: these are the things that we think we need to care about; is there anything missing from this? And what happens when we do these pieces? What kind of security problems would you like to see solved with these pipelines? What other problems might there be? By doing this, it gave us a framework for the conversation that we were having difficulty having before, and it allowed us to bring those different groups, as described, to the table. I realize I've talked a little long this time, so I'm going to quickly run through these next slides. So we created a model like this, which is awesome, isn't it? And I have to throw this one up because it's got all the different pieces in it, but it's a horrible, horrible slide for presenting at a conference. So let's talk about it in simpler terms. We still had these different areas that we were working in.
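To make that spreadsheet idea concrete, here's a hypothetical sketch of a couple of control rows as data, plus the scoring that feeds the "diamond" visualization. The control names, evidence types, and owners are made up for illustration; the real list of 16 controls isn't reproduced here.

```python
# Each record mirrors the spreadsheet columns described in the talk:
# purpose, control, evidence artifact, storage, pass/fail behavior, owner.
controls = [
    {
        "purpose": "Know exactly what source produced each artifact",
        "control": "build_from_version_control",
        "evidence": "build log linking artifact to commit hash",
        "stored_in": "artifact repository metadata",
        "on_pass": "pipeline continues",
        "on_fail": "pipeline stops; delivery team notified",
        "owner": "delivery team",
    },
    {
        "purpose": "Catch known-vulnerable dependencies before release",
        "control": "dependency_scan",
        "evidence": "scan report attached to the build",
        "stored_in": "compliance evidence store",
        "on_pass": "pipeline continues",
        "on_fail": "pipeline stops; security team notified",
        "owner": "security team",
    },
]

def compliance_score(results: dict) -> float:
    """Fraction of defined controls with passing evidence; plotting one
    axis per control gives the radar-chart 'diamond' mentioned above."""
    passed = sum(1 for c in controls
                 if results.get(c["control"]) == "pass")
    return passed / len(controls)

score = compliance_score({"build_from_version_control": "pass"})
print(score)  # 0.5
```

Because the controls are data rather than prose, the same records can drive the spreadsheet the auditors want and the automated checks the pipeline runs.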
So we broke it down into the different parts where we're going to apply our controls. We've got a process across the pipeline, and these are not designed to be gates, but: what can we validate in build? What can we validate in test? What can we validate in the validate phase? What can we validate in the deploy phase, to ensure that we're doing the right pieces? In the build zone, we're looking at local IDE testing, tracing, and so on, and I have some opinions about how many of these things you actually need to do; we'll get into that, hopefully, if we have time later on, or afterwards if people want to chat about it. In the test zone we're executing functional, non-functional, dynamic security, and regression testing. And we started to integrate these capabilities into the pipeline so that the things being asked for by the rest of the organization were validated every time we ran the pipe. One of the things we quickly realized was that those build and validate zones really belonged with the development team entirely. Testing was where, for this regulated piece, we had some organizational tests we wanted to ensure were running every single time we executed the pipeline. So what we did was split it into two. First we define what we're going to do, so we have traceability, and we helped with defining the work. Here we started to work with the fellows over in the agile transformation space, because, amusingly enough, on the same floor there was the DevOps transformation and the agile transformation, and the two didn't talk. So, me being me, I went over and sat with the agile guys and chatted with them for a bit and said, hey, you guys really should be talking more, because you're trying to do the same thing. You're not going to succeed without them, and they're not going to succeed without you.
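The zone idea described above can be sketched as a small declarative model: each phase lists the checks that must pass there, and a zone is compliant only when all of them have run. The zone and check names here are illustrative assumptions, not the bank's actual catalogue.

```python
# Hypothetical zone model: each pipeline phase declares its required checks.
ZONES = {
    "build":  ["unit_tests", "static_analysis", "traceability_link"],
    "test":   ["functional", "non_functional",
               "dynamic_security", "regression"],
    "deploy": ["target_validation", "version_recorded", "service_health"],
}

def zone_compliant(zone: str, passed_checks: set) -> bool:
    """A zone is compliant only when every declared check has passed.
    These are validations on a flowing pipeline, not manual gates."""
    return set(ZONES[zone]) <= passed_checks

ok = zone_compliant("test", {"functional", "non_functional",
                             "dynamic_security", "regression"})
print(ok)  # True
```

Keeping the required checks in one declared structure is what lets the pipeline self-audit: the same list drives execution and the compliance report.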
Let's all get together, have a few beers, and work this out. And it really all comes down to that: those conversations. So we set this up. We create everything as code, build it into the source code management pieces, and execute the first stage, and development owns all of these steps at this point. We started with a peer review of build results: validate that, yeah, everything looks good, then push the artifact; at least that's how we thought of things at first. The artifact would then get created, and that same artifact is the artifact that gets pushed out into all the subsequent test environments, runs through the automated organizational tests that we need to do, and then gets pushed out into the environment, where we can validate service state and radiate that back; we can measure the whole thing and start to use that for improvement purposes. Over time we realized, well, we didn't actually need to validate the results there. We started to switch to more of a pair programming model within the more advanced delivery teams, so that we didn't need a separate peer review and code review at that point to validate the results; we could show that changes had been validated and reviewed earlier in the pipeline. That was only for some of the teams. And then we realized that we could automate a lot more of those organizational tests, and we started to eliminate as many manual tests as possible. We had a lot of integration and manual tests that still had to be done at this stage; we couldn't entirely eliminate them, but we minimized them as much as we possibly could, so that we could shorten the cycle time. And then, auditing the pipe: we realized that we were generating a whole ton of information through the automated pipelines, so we looked at how we could radiate it back, and we tried many different approaches to doing this.
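The "that same artifact gets pushed out everywhere" step is the build-once-deploy-many discipline, and it can be sketched very simply: identify the artifact by an immutable digest and promote that exact digest through every environment instead of rebuilding per environment. This is a minimal illustration, not the bank's actual deployment tooling.

```python
import hashlib

def build(source: bytes) -> dict:
    """Build once: the artifact is identified by its content digest."""
    return {"digest": hashlib.sha256(source).hexdigest(), "content": source}

def promote(artifact: dict, env: str, deployed: dict) -> None:
    """Deploy many: record the exact digest shipped to each environment."""
    deployed[env] = artifact["digest"]

artifact = build(b"release 1.4.2")
deployed: dict = {}
for env in ("test", "staging", "production"):
    promote(artifact, env, deployed)

# Every environment received the identical build; nothing was rebuilt.
print(len(set(deployed.values())))  # 1
```

The digest doubles as audit evidence: if production's recorded digest ever differs from what the pipeline built, somebody changed something outside the pipeline.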
And that first approach really didn't quite work for us, because, being open source, it works great for a version or two, but then the drivers don't keep up with the tooling, and if you don't have the same versions you're rewriting all of those drivers, and it becomes a pain. So we switched to looking at it from a logs perspective, pulling out the logs and radiating the information from them. This allowed us to show and pull out some very interesting information. From initial baselines that hadn't been taken before I joined, we could show that we were making progress, and that progress was what allowed us to get continued funding from the organization, especially when we could show that we'd gone from something like six-month deployment processes down to one month, and even further since. And the important part here is that all of this has to be automated. You want to automate this, because this is what's keeping the auditors happy and keeping compliance off your back. So what we were doing was building the paved road. We were creating a model for the paved road, and a set of guidelines for it, so that we could say: if you come and use our paved road, we've already taken care of the organizational concerns; we've already got compliance to agree that this set of policies we apply in the taco model is going to meet the organization's needs. We've had them baked into the security and compliance policy, so if you follow this set of standards, you are automatically compliant. So come use our pipeline; we've done it for you. And that's great, but if you want to do it yourself, these are all the things you need to go and make sure you take care of. You can do it; I mean, go ahead, we're not going to stop you. And these are the tools and components you can use, and you're welcome to take advantage of them, and we hope they help you. But we also hope you come and use the paved road, because it may make it easier.
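As a rough sketch of the logs-based measurement just described, here's how a deployment lead time could be derived from timestamps already present in pipeline logs, rather than through tool-specific drivers. The log format and event names are invented for illustration.

```python
from datetime import datetime

# Illustrative pipeline log lines; the real format would differ.
log_lines = [
    "2020-03-01T09:00:00 pipeline=web change_committed",
    "2020-03-31T09:00:00 pipeline=web deployed_to_production",
]

def lead_time_days(lines: list) -> float:
    """Lead time = time from commit event to production-deploy event."""
    times = {}
    for line in lines:
        stamp, _pipeline, event = line.split(" ")
        times[event] = datetime.fromisoformat(stamp)
    delta = times["deployed_to_production"] - times["change_committed"]
    return delta.total_seconds() / 86400

print(lead_time_days(log_lines))  # 30.0
```

Because the numbers come from logs the pipeline already emits, the metric survives tool upgrades, which is exactly the pain the driver-based approach ran into.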
And there was one more thing with that, which I'll talk about a little later as we get to the end. So, what did we learn through all of this? We learned a heck of a lot. We learned that we most certainly cannot solve problems with the same thinking we used to create them, and that we needed to re-examine our problems continuously and consider different ways of tackling them. That framework itself has since evolved in many different ways, long after I left, but they still use it and still apply it as a thinking model for how to approach what a good pipeline might look like. And I'm wrapping this up quickly because I'm on time. What we did was create a common understanding of a good pipeline through those conversations with those different groups. We went from talking about apples, oranges, and bananas to talking about tacos, and everybody could talk about tacos, which meant that we could start to have a more common way of looking at it. We could share the wealth and engage the early adopters, which was very powerful, and we got ways of automating software delivery compliance, which we built into those pipelines. And that gave us a Rosetta stone. And that's not all we did. Just to wrap up my piece: this was the starting point of the conversation, and that initial engagement was four years ago now, and they have since moved on. I'm working with other organizations, but I still talk with that group and that team about what they've done. They've built out much greater partnerships with different organizations, they've run internal DevOps days and a community of practice, and now you've got a tier-one bank delivering through pipelines continuously into production and using those frameworks as ways of securing it. There you go. So that's it. There's a survey here.
And I think all the extra pieces I added made me run a little longer than I have previously on this talk. So there you go. I hope that was useful to everyone; give a thumbs up if there was some useful information in there. I hope you found it valuable, and I'm happy to stay around for a bit and chat if you have any questions.

Thank you so much, Peter. So, folks, I think we have time for one question, if you can just put it into the community panel. What would you recommend as the best way to engage with the compliance team?

Well, the key piece there is engaging them where they are. When I approached the compliance team, it was about understanding where they were coming from: tell me what you want to see from the pipelines that we're building; what would help you know that we're doing all the right things and help you succeed? So I found out who had been assigned as the compliance officer for our particular area, and I went and sat down with him for a coffee, and I started to build that relationship and talk about what needed to be baked into our pipelines for him to know that we'd satisfied the requirements. I then started to break those down and think about how we could automate them into our pipeline, how we could create controls in our automated pipelines that would satisfy them. I also engaged the wider compliance team to come to the table for those conversations, present some of the pieces they were looking for, and start that dialogue in more depth as to how we could build out the pieces they needed. So we had our DevOps compliance. I hope that answered your question; if not, I'm happy to dig into it a little more. But talking to them was really the simplest way.
After we had built out the taco model and used it to integrate with the policies that were being created, creating that alignment between the two made things a lot easier, but I had to engage them first to get that underway.