We're going to start. All right, thanks, everybody, for sticking around. We are going to present Unpredictable Continuous Deployment, and we're going to talk about all the painful lessons that we've learned along our journey towards predictable continuous deployment with Drupal, and hopefully inspire you to make your environments and your deployments rock. So the presentation is divided into essentially three sections: lessons in doing things the wrong way, components of a good, predictable deployment, and how we do it with Drupal. We're going to share some of the things that we started doing wrong many, many years ago. Hopefully it will be a group hug, and you guys will be kind with us and remember some of the pain points that you've gone through. We'll also cover some of the ways in which we've attempted, and are continuing to attempt, to make that process better for everybody involved. So, what this presentation is not about. If you came here just for this, I will not be offended if you decide to spend your time elsewhere, but of course I hope that you stay. We're definitely not going to talk about tools. There have been a lot of tools discussions here already. So we are not going to specifically provide tools or code examples, and definitely no live demos; as you have seen in our ten-minute slide presentation setup, we're not going to attempt that right now. And we are not going to be very specific about prescribing frameworks and how you do it. There are many, many ways of doing this. Obviously, there are sponsors here that have created opinionated ways of making this easier. We're not going to provide an opinionated way of doing it; instead we're going to talk about the processes that we've had to go through and the approach that we take to make this happen. All right, so about us. I am Andy Kucharski. I am the president and founder of Promet Source. And I make it really easy for you to follow me on Twitter.
It's Kucharski. I know you guys all know how to spell that, so I'm just going to leave it at that. I'll put it up at the end of the presentation. I'm Johnny Fox. I'm the CTO of Promet. I've been with Promet for six years. I lead the team that has suffered with, struggled with, and developed our continuous integration process. How's that for sharing our pain, huh? So a little bit about us. We are a full-stack, full-service Drupal development shop in Chicago. We're pretty distributed. We also offer services around training, so we have a training practice, as well as accessibility testing. We've been working with Drupal since 2008, so we still have a Drupal 5 site running in production somewhere out there. Yeah, thanks, somebody's impressed with that. One thing that we started doing very early on is taking on support of Drupal websites that we did not build from the very beginning. So we have seen things. We have seen a lot of things. We take on sites that are in any hosting environment, including internal hosting environments. So we have learned, and we have had to adapt to that reality. That is something that we have chosen to do strategically, and as a result we have seen, and had to adapt to, a lot of different ways in which Drupal sites were built, and we had to adapt to our clients' needs. All right, so who are you? We want to know about you. Which of these Lego characters represents you? Please raise your hand if you are a developer. Ooh, that's a majority in here. All right, please raise your hand if you are a systems engineer or ops person. Okay, that's pretty decent. And sorry about this characterization: if you are a product owner, a project manager, a business owner, please raise your hand. All right, I will raise my hand. That's me as well. Well, thank you. I had to throw some Lego pieces into a slide, right?
That's necessary. So why should you care about CI and CD? Obviously you're in here and you care about it by attending these sessions. We're not going to define exactly what it is. Michelle Krejci's earlier presentation did a pretty good job of describing that and her journey into learning it; I highly recommend it. Development obviously cares about building code and deploying features quickly. Operations cares about maintaining current state: making sure that uptime is maintained, that sites respond quickly, and that changes don't set off the beepers and the pagers. Ooh, I just said beepers. I'm old. Your phone set to vibrate, PagerDuty. That's a good pager, right? So if we had a magic wand, whether you're in ops or a product owner or a project manager or a developer, you could walk up to your desk and say, Alexa, please deploy version dev v2.01.5 to my staging environment for this project, and it would just happen. I actually have seen this happen in a demo. It did work. It was a live demo. I'm pretty sure that was a live demo. So that's the magic wand, right? And where did we start off? I've been involved in deploying software for about 20 years, obviously prior to Drupal deploying enterprise systems and custom-built applications, and for about 15 years, websites. So this is where hopefully you'll see that we have suffered, and we'll share the suffering that we have experienced deploying software manually. Do you remember those days? Has anybody had to move files around from one server to another, making sure that you got the right files, and you didn't script it, and then you forgot a file, or you got the wrong version? That was not fun, right? Running database scripts by hand. Or have you ever run into testing your code in an environment that you think is just like production? Johnny, do you remember that one time when we followed all the right steps?
We thought we had the exact replica of production. We asked the client for the Nginx version, sorry, the Apache version. The LAMP stack was exactly the same. The project had a tight deadline. We pushed it to production, big unveil, total fail. Like, wait a second, why? We took all the right steps. We banged our heads against the wall for a while, only to find out that our client was running a LAMP stack emulator on a mainframe. They did not provide us with a testing environment. They said, don't worry, it's gonna be the same. No worries. So we've got some scars from that. That's why we're very passionate about making sure that the target of your deployment is the same as your testing environment. And more: has anybody ever received a push script in an email that instructs you on the manual clicks that you have to take? Okay, I know you guys are way above that. So hopefully we'll share our pain; that has happened to us before, so we really try to make sure that we don't do that. So what are the principles around what we've been doing, and how do we try to make our teams' lives easier, better, and funner? Not a word, by the way. More fun. We want to be able to make sure that the package itself, the package of the release, is tested. So it's reliable, right? It's reliable and repeatable. There's no better feeling for project managers or product owners than talking to a client and saying: we do a build from scratch every single day. I know that this software will build. I know that it will run, because that's predictable, that's reliable. We have a saying around Promet: we love lazy developers. We do, and what we mean by that is that we love for everyone to automate stuff so you don't have to do the mind-numbing things over and over again. So automate whatever you can, as often as you can.
Obviously, keep everything in version control. We also like to bring the pain forward. So if something is painful, let's make sure that we have a good retro about it. Let's talk about it. What are the challenges to doing the things that we need to do, whether it's an environment version or whether it's having a mainframe run on your local? How can we get past those things? Those need to be brought up, and the risks need to be identified. We need to talk about how we're gonna get around them, or what happens if something goes wrong. Build quality in: everybody's job is to build quality in. Testing is obviously a big part of what we do, so we're gonna talk a lot about that. And continuous improvement, right? Make sure you listen to everyone and everybody. When you do your retrospectives, it's a safe environment. It's a learning environment, and everybody should be encouraged to speak freely without feeling like they're attacking somebody, because that's where you really learn the lessons that you need to apply to make things better. So we have a number of resources that we look at and pay attention to. I also want to emphasize that DevOps is a big cultural thing, and it makes things better in your culture, right? When releases go out without stress and when you buy into the process, things culturally are better. This is from a study that was, I think, released last year, and it talks about organizations who adopt DevOps practices. Fantastic, great, unbelievable results: 200 times more frequent code releases for high-performing organizations. And the other point, for the product managers: teams that adopt high-performing DevOps cultures are twice as likely to recommend their company to other people as a place to work.
So it's worth buying into that, and if you need to convince somebody that you need to spend more time on building your automated testing or a better push process, feel free to find that study. This slide talks a little bit about what high IT performers look like: being able to deploy frequently, which also has to do with not trying to do the big monolithic deploys. Limit what you're deploying so it stays testable. Change failure rates are better, mean time to recover, and so on. All right, so we're gonna now talk about the predictable part. Take it away. All right, thanks, Andy. First I wanna apologize for the title Predictable Deployment, because you can still have a predictable deployment without continuous integration: I can predict you're gonna fail epically. You're going to have some days when your server is just a smoldering crater of why-is-this-not-working. The classic: it worked on my local, why is it not here? That's predictable. What we're really talking about is how we get a predictably successful deployment. And that's a struggle. There are some basic tenets, and my goal here today is to pass on what we have adopted as those tenets and give you a little bit of history. One of those is that you have a formal workflow process; we use the local development, dev, staging, production workflow. There are a couple workflows where we actually introduce another step of a client QA environment. I know that a couple of the platforms as a service offer that, being able to see individual releases. Where we started was, as Andy mentioned, 2008. We saw the need, as we were deploying, to have repeatable environments, and we needed a way to make those environments the same. So in 2010 we were one of the first shops to pick up Chef, which is a server configuration management tool. It can also be used to configure applications. But this was very early on, and it was only the start.
What I wanted to show is that as you're on your continuous integration, continuous deployment journey, don't be afraid to get started. Start small and build on it. I know we have a lot of developers, so a dancing analogy may not work, but you learn one step, and then you learn the next step, and you build on that. So as we worked over time, we kept adding a piece. Very much agile, you could say: well, Features and Chef, and now we need some build scripts, and now we need Git Flow. And in here you can see we changed configuration management; we went to Ansible; we've used different pieces along the way. So what I really want you to take away is: start somewhere, start today, and add pieces onto it. In our environment, you need to be learning as you're working. From the continuous delivery book that Andy showed, we've adopted some CI principles. All of these are important, and all of these are goals of what you should have. You can't have continuous integration without revision control; I can't imagine anyone doing development that way. You wanna automate the things. We've already talked about testing in a clone of production, and really, testing in a clone of production goes all the way up and down the stack. You want to replicate that environment as much as you can. Frequent commits, putting all the code together. Our team is spread out: we have an office in the Philippines, we have a development team there, and we work with people all over the US. There may be multiple people working on a project, and just having one person's code sitting there, not knowing whether or not it builds, is a real issue for us. So we use HipChat as our platform for communication across the company, and that addresses code consolidation, build availability, and test availability. What I'm showing here is that inside of our HipChat, for each project, there is a room.
So our commits from GitHub, when the project is being built, whether that build succeeds or fails, we're able to see that immediately. We're able to see the results of the tests. Those links are all piped into there, and that happens throughout the day. So if you're a project manager, you don't have to go call all the developers and say, okay, is that okay on your local? Okay, call the next person: you're working on this other feature, is that okay? You can see the status of it in real time. As a developer or a solutions architect, you're able to see what's happening as well. So some of you may be thinking, well, you may have a fairytale life. You may have been very good in a prior life, and you're very blessed, and you really deploy to the same environment all the time: you're deploying to Acquia or to Pantheon, or you work at a place where you're all on CentOS and you know what version of PHP you're on. If you did something very bad in your prior life, you may wind up at an agency that supports many websites. Promet supports hundreds of websites. I wanna share with you a little bit of the complexity that we run into. We wind up with a few operating systems to support. And the great thing about operating systems is that it's not just Linux, it's which flavor of Linux. Is it Debian or Red Hat? Is it SUSE? Oracle Linux, for anyone that's not worked with it, is different even though it's based on Red Hat. We also support client sites under Windows Server. And then you layer on: which database are you working with? How do you optimize that database? If we wanna be a little bit more complex, then we wind up with the hosting platforms: could be Google Cloud, could be Azure, could be Acquia or Pantheon, and all of these are great platforms. I dream of a day when we all deploy the same way and there's no variability between them.
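The build-status messages described above can be sketched as a small notification step in the build. This is a hedged illustration only: `ROOM_ID` and `HIPCHAT_TOKEN` are placeholders, and the HipChat v2 room-notification endpoint shape should be verified against the API docs before use.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: post a build-status notification to a project's
# HipChat room. Green on success, red on anything else.

build_payload() {
  # Compose the JSON body for a room notification.
  local status="$1" project="$2" color="red"
  [ "$status" = "SUCCESS" ] && color="green"
  printf '{"color":"%s","message":"Build %s: %s","notify":true}' \
    "$color" "$status" "$project"
}

notify_room() {
  # Post the payload to the project's room (requires a room auth token).
  curl -s -X POST \
    -H "Authorization: Bearer ${HIPCHAT_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$(build_payload "$1" "$2")" \
    "https://api.hipchat.com/v2/room/${ROOM_ID}/notification"
}
```

In a Jenkins job this would run as a post-build step, e.g. `notify_room "$BUILD_STATUS" "$JOB_NAME"`.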
One of the things we see is that for each one of these platforms there's a set of different hooks, so spinning up a server on Rackspace is different than on another host. That doesn't quite apply to the platforms as a service, but we still need to develop our Drupal sites so that we can deploy there. You have the additional complexity that Terminus is not the same as what you do on Acquia. Some of the platforms have specific modules that need to be run to manage Varnish on their environment; that's different too. This is a really wicked problem. To make it more fun, you can layer on what services you are connected to. We've had a lot of fun with Shibboleth: it has a daemon that has to run in Linux to connect to. If you're not testing this, if you're not looking at that, you can just expect the client's gonna call and say, why can't I log into the server? What was that change? Same thing with Varnish. Maybe you're working on a platform the client controls, and there's a Varnish upgrade from 2.2 to 2.4. I can assure you the VCL files are different between those. You have to be testing. So it's a really complex problem, for every different client. And remember, we have a couple hundred projects to support, and all of these involve different combinations: maybe one is CentOS with PHP 5.6. Some clients are still on 5.2. I know it's crazy, but you may be in a large enterprise environment; there are security concerns there; that's just the way they run. They're gonna be on 5.2. Nobody can change it, and that's what you have to deal with. So to address this on our side, we started out thinking, well, it's just Drupal, right? Drupal is just PHP, and I need MySQL and I need a web server. So MAMP, XAMPP, one of those solutions. That'll work. Pretty soon we got into: but you're working on multiple projects, so you need something portable that replicates each environment. Then virtualization came along.
We went to VirtualBox, and almost immediately pulled Vagrant on top of that so that we had a little bit more control over the configuration of boxes. We went through a round of figuring out what is quick in Vagrant. With Vagrant and Drupal VM, one of the challenges we have is that we have an office overseas, and we have people who work remotely in places that have slow connections. Anything where you're loading the entire box and the whole stack, and that has to be built each time you build the environment, can add a lot of overhead. If you're in the U.S. and you have gigabit internet, you're set. If you have a 10 megabit connection and you're in Asia and you've got to pull something down, something that could happen in maybe 15 minutes in the U.S. takes you four hours. So today what we're using is Docker. We found it's more portable and it's quicker for us to use, and we'll see what comes next with containerization. The second part you really have to have is a CI server. There are a number of these, and I think they're all good choices. We made a choice internally because of what our skills are: as you've seen, we have this complex topology of things that we deploy to, so we need really granular control over that, and we have settled on Jenkins. We could have an argument about which one's better; I say pick one that fits your organization. Testing. Why do I have testing here? Continuous integration without any testing is just automating your failures. It does you no good to build a server and build your application and not know if it succeeds. And I'm sure no one here has had this: you build the Drupal site, you've changed something on a few pages, I need to make a few menu tweaks, I've moved a block around. We look at that, we test that, we deploy it, and there's some other page somewhere that is totally broken as a result of those changes. Anyone have that?
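The portable-local-stack idea above can be sketched with Docker. This is a minimal illustration, not Promet's actual setup: it assumes the official `php` images are enough for the project, and the image tags, port, and volume layout are all placeholders.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: pick an image matching the client's production PHP
# version, then serve the current checkout from a throwaway container.

image_for() {
  # Map the PHP version the client's production runs to a Docker image tag.
  case "$1" in
    5.6) echo "php:5.6-apache" ;;
    7.1) echo "php:7.1-apache" ;;
    *)   echo "no image mapped for PHP $1" >&2; return 1 ;;
  esac
}

start_local() {
  # One throwaway container per project, mapped to localhost:8080.
  local img
  img="$(image_for "$1")" || return 1
  docker run -d --rm -p 8080:80 -v "$PWD:/var/www/html" "$img"
}

# Example: start_local 5.6
```

The point of the mapping function is the same one made in the talk: the local stack follows the client's production versions, not the other way around.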
Yeah, I mean, you're in Drupal. All of you need to raise your hand. Like, I'm not believing you. You really have to have testing, and that is the fantastic piece of this, because clicking through hundreds of Drupal pages is a waste of human life. It's terrible. Humans are terrible at it. Even on the best days, it's slow, it's expensive. It doesn't add as much value to your project as automated testing. There is a place for it. So we use a combination of testing, and as I mentioned with this evolution, we keep adding steps. We started several years ago using the Behat framework for testing. We're also using the Robot Framework for testing now. Those are behavioral testing: I'm able to click on different pages, go to those. Visual regression is something that we've added. And those tests need to be available: I need to know before it's Wednesday, and I have a push for the client on Thursday, whether the build is succeeding or not. We're working on two-week sprints; as a project manager, as a developer, it really doesn't help me to go two weeks thinking everything's great when it's not. So you need to have those test results available really quickly. Behat: let me just go through this. There's a Drupal extension, and it comes with many already-constructed tests for logins and basic Drupal navigation. There's a working group; Behat is actively maintained, and people are passionate about it. It's one of the reasons that for projects we're developing new, Behat is kind of our tool of choice for behavioral testing. With Behat, the language is Gherkin, so you can see, just at the top, it says: this is a feature, in order to do this, as a user, I need to do this thing. You put your assertions in there. It's a plain-English language, so it's easy to read. This is available from inside of your Jenkins console, and it just reads that out.
You can see every time this test is run, you can see what happens, and even if you're not working on this piece of the project and it breaks, you're gonna get a failure and you're gonna know about it. Sure. Often when we talk about investing in automated testing, writing the test scripts and writing test cases when we build projects, we get questions about whether it's worth it. Is it worth it on my project? Is it worth it to invest that time up front? And looking back on what we've done: unless you're going to build it once, in one day, and then never touch the site again, ever, the answer is yes, it's worth it. Good point. This is the Jenkins dashboard. If you're using Travis or CircleCI, it's going to be different, but you're going to have some of these elements. You're gonna have the console output. Your source control is tied into this; your source control gives you the commit, and we can see what's there. In this case, a single commit is building, so you're very granular. It's really important that you're doing these builds frequently and that they are granular. If you wait two weeks to see if everything builds, you're gonna be going back through, doing a bunch of cherry-picking and removing modules to see where it failed. If you have that build tested early, then you're able to spot it. So, test availability: Behat will generate an HTML output of how your test run went. That gives you a report. We link that into our HipChat room, and that gives you the ability to immediately see that the test ran, click on it, and see where that happened. I'm showing failures because failures are really the important thing; that's what you want to catch. I know as developers we all wanna lay down perfect code. It just doesn't happen that way. So we need to have that feedback, and we need to have it often.
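To make the Gherkin part concrete, here is an illustrative feature file of the kind described above, written out by a script, with the CI invocation shown as a comment. The scenario text is a made-up example; it assumes Behat and the Drupal Behat Extension are installed via Composer.

```shell
#!/usr/bin/env bash
# Write a minimal Gherkin feature of the shape the talk describes:
# feature, "in order to / as a / I need to", then scenarios with assertions.

mkdir -p features
cat > features/login.feature <<'EOF'
Feature: Site login
  In order to manage content
  As an editor
  I need to be able to log in

  Scenario: Anonymous user sees the login form
    Given I am on "/user/login"
    Then I should see "Log in"
EOF

# In the Jenkins build step, this is the whole test stage; the output
# is what shows up in the console log:
# ./vendor/bin/behat --format=pretty features/login.feature
```

Steps like `Given I am on "..."` and `Then I should see "..."` come from the pre-built Drupal contexts mentioned in the talk, so a scenario like this needs no custom PHP.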
This is part of that same Behat report that we saw earlier, but this gives the individual scenarios. It tells you how many of them passed, how many failed, how long it takes to run. It's great. So there's another piece of this. Talking about layers of testing: we also have CodeSniffer connected to the project, and that checks for Drupal coding standards. Code that is written well, code that is easy to read, has fewer bugs. If you've read Code Complete or Rapid Development or any of those books, and you've worked with a lot of code, it's just easier to debug code that's written well. So automatically it will come through and check your project's layout, and having worked with a lot of projects that come in, we see different kinds of coding, we see stuff that's just run together and hard to spot. Having some way to know what state that code is in is good, and it keeps us all honest as developers: when you're committing, you go, oh, I failed there. Andy also talked about having projects we didn't write. Those projects don't come with Behat tests, and we may not be familiar with them. So we have a dedicated QA team, and that QA team uses Robot Framework. It's a project that's available; there's a blog post on our site about how to use it and how to install it. It has a graphical interface, so you can go in on the front end and build tests. It's very well suited for a QA team to go through and build behavioral tests to run on the project. On projects that we're supporting, if we do additional work, we'll build it into this test suite. It gives us a way to get test coverage on those sites. It's not dependent on us having done the development, but it gives us a way to check them. It's just an additional layer. We also have manual code review. I think this is an additional testing piece that's really important: not only have I written the test for it, but someone else looks at it. Did you understand what it's supposed to do?
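The CodeSniffer layer mentioned a moment ago is typically one short build step. A hedged sketch, assuming the `drupal/coder` package (which registers the "Drupal" standard with PHP_CodeSniffer) is installed via Composer; the target path is a placeholder.

```shell
#!/usr/bin/env bash
# Sketch of a coding-standards CI step. The actual phpcs run is commented
# out so the sketch stands alone; in a real build its non-zero exit on
# violations is what fails the job.

STANDARD="Drupal"
TARGET="${1:-web/modules/custom}"
CMD="./vendor/bin/phpcs --standard=$STANDARD --extensions=php,module,inc,install,theme $TARGET"

echo "CI would run: $CMD"
# $CMD
```

The `--extensions` list matters for Drupal because so much PHP lives in `.module`, `.install`, and `.theme` files that phpcs would otherwise skip.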
Hopefully you've written the test beforehand, and that test has been written and looked at. For our entire company, we use GitHub, for all projects. So if you're on Bitbucket and we have to work with you, we're going to develop in GitHub and then commit to Bitbucket, because the way our process works, everyone's projects, everyone's code, anyone can go look at anyone else's code. It's expected, demanded, encouraged that you look at other people's projects. One of the very early projects I led, we had an issue where, in a group, there were some coding issues, and just because there weren't eyes from the outside, it's real easy to be inside of a project and not see what you need. So we added this code review piece manually, and that's shared with the entire company, where people can see what's going on. If I clicked into these, I would also see comments inside of GitHub that the developers are making. As a project manager or an account manager, this gives me the ability to see what's going on there. The next layer we've added is a tool that we've written internally using WebKit and PhantomJS; those are really great tools. This is especially for sites that are very large or complex. As I mentioned, humans are not great at this: it's just terrible to have a list of 100, 200 URLs that you've got to click through on production, and click through on staging, to see if anything changed. So what this tool allows us to do is feed in the sitemap.xml from your Drupal site and select the pages we're gonna test. It tests before and after, so it uses a reference copy, and what it does is literally a full-page screen capture, and then it does a diff of that page and highlights it.
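The capture-and-diff loop just described could be sketched like this. This is a hypothetical illustration of the idea, not Promet's internal tool: the hostnames, file layout, and capture script name are invented, and the screenshot/compare calls are commented out since they need a headless browser and ImageMagick.

```shell
#!/usr/bin/env bash
# Rough sketch: for each path from sitemap.xml, capture a reference and a
# candidate screenshot, then diff them pixel-by-pixel.

url_to_name() {
  # Turn a URL or path from sitemap.xml into a safe screenshot filename.
  echo "$1" | sed -E -e 's|^https?://||' -e 's|[/?&=]|_|g'
}

diff_page() {
  local name; name="$(url_to_name "$1")"
  # 1. capture reference and candidate shots (tool-specific, e.g. PhantomJS):
  # phantomjs capture.js "https://www.example.com$1"   "ref/$name.png"
  # phantomjs capture.js "https://stage.example.com$1" "new/$name.png"
  # 2. count differing pixels; a non-zero count flags the page for review:
  # compare -metric AE "ref/$name.png" "new/$name.png" "diff/$name.png"
  echo "would diff $name"
}
```

ImageMagick's `compare` writes a highlighted diff image, which is exactly the "highlights it" output the talk describes.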
So the kind of things you're gonna pick up: blocks out of place, color differences, when the Twitter feed updates. It also has parameters where we can test different viewport layouts. So we're not emulating mobile devices, but we are able to pick the mobile viewports for tablet and for phone and test all those, and that's automated. It's been a great time saver for our QA team; it's actually created a lot of open time for that team to take on some new initiatives. And it delivers more value for the clients: you can get that report and know how the site looks when it's completed without having to wait for it. Well, you have to wait for it a little bit. So, next steps: we are working on some internal initiatives for accessibility testing. Can we automate it? If you've worked with accessibility testing, you know that automated tools do not catch everything. However, what we're looking to do is something very similar to the visual regression test: run a tool, get a report afterwards, and see where the project stands. So that's the next one. Additionally, monitoring. Why is monitoring good? I know that none of you have ever had a push where you introduced something that ate up more CPU cycles or maybe had a loop in the background. As a standard facet of what we're doing, we install monitoring on all of our dev servers so that we can see what's happening, and we'll put warnings in there in New Relic. Even the free version of New Relic will allow you to set alarms for CPU usage and database usage, and let you know when it's over a certain threshold, and that's helped us catch a lot of issues. Deploying artifacts. I'm borrowing here also from Michelle Krejci, and I see that Acquia has also adopted this in some of their new pipelines: deploying artifacts.
So with all of those different server environments that we're deploying to, what we found is that if we try to deploy into those environments with all the tools, you're deploying Behat, you're deploying Composer, you're deploying Gulp, you're deploying Sass, that's a whole stack of dependencies on the server that you really don't have to have, and it limits where you can go. Sometimes you don't even have the ability to go into the client's server and see those things. So if we're working with Bitbucket, or we're working with Acquia or Pantheon, we're connecting to their Git, and it allows us to deploy only the Drupal. I need all of my updates to be in code, so I need to be using configuration management, or Features if you're in Drupal 7, and we just deploy the Drupal. It's a little bit of learning to get there, and it does divorce some of the code history, but we found this to be a way to work across that multitude of environments and just deploy only the Drupal. I've compiled a list of resources that we use internally. These are all very good; it will take you a while to read through them. Don't think that you need to go use all of the resources. Today, what I recommend is: commit to doing CI, get started with it, start small, and then learn those next steps. All of these books have been referenced; those are things that we use. And then, Andy, I'm gonna let you wrap up. Wrap up. So last week, as we were heading out to DrupalCon and polishing our presentation, I stopped by Melissa's desk. This is Melissa. You can see she's not miserable like in the previous experience we talked about. She's happy. I said, Melissa, why are you so happy? She said, I had five different deployments this week, every single day, different clients, different environments. Everything went well, everything was great. So it can happen.
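The artifact-build idea covered above can be sketched as one CI script: compile everything on the build server, then push only the runnable Drupal tree to the host's Git remote. This is a hedged sketch under assumptions, not a prescribed pipeline: the paths, the `build/` directory, and the exclude list are all illustrative.

```shell
#!/usr/bin/env bash
# Sketch: build dev-only tooling out of the artifact so the server never
# needs Behat, Gulp, Sass, etc.

artifact_excludes() {
  # Dev-only tooling that should never reach the server.
  printf '%s\n' node_modules/ .sass-cache/ behat.yml tests/ scripts/
}

# The CI steps, commented since they need a real project around them:
# composer install --no-dev --optimize-autoloader
# gulp build      # compile Sass etc. here, not on the server
# rsync -a --delete $(artifact_excludes | sed 's|^|--exclude |') web/ build/web/
# cd build && git add -A && git commit -m "Build $BUILD_TAG" && git push origin master
```

The trade-off is the one the talk names: the artifact repo's history diverges from the source repo's, but the deploy target only ever receives "the Drupal".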
So I encourage you to follow some of those practices. We were asked to mention the Friday sprints, and please evaluate us. As I mentioned, we are trying to make it easy for you to follow us on Twitter. I've also posted these slides at the bottom of the session description, and we'll open it up for questions. Thank you. Questions? I have one. Yep. So sometimes, and you mentioned this at the start, you have these kind of less-than-ideal scenarios. I work for an agency, and one of the problems that we have is that some of our production servers for some of our clients are behind VPNs. The current situation is we end up having to manually deploy code. And some of these VPNs require you to authenticate with Google authentication, things like that. I'm wondering whether you have any experience or potential solutions for things like that. So the question is: I have production servers which are behind VPNs; how do you deploy where you have very little access to those environments? Did I get that right? Yes. Johnny? We have that exact experience. I think we have to go through a Cisco VPN client for one of our clients to deploy through the VPN. And here we adopt a strategy of just doing the best we can. We replicate their entire environment and do a deployment there ahead of time. Then, through the VPN, we have a set of scripts on the other side that run the deploy, so that we can automate the deployment on the other side as much as we can. But because of the way the authentication is built, there's no way to connect our CI server so that it can reach into their network and do the things that we need to do. It's a wicked problem. I wish there was a better solution; if you have one, I'd love to talk to you. I have had to do deployments before where I could not touch the other environment at all, and we had to get on a GoToMeeting screen share and have the client run the deployment from the other side. It's horrible.
Hey, so you discussed a lot of the tools that go into doing continuous deployments, anything from local to the server and the actual hosting. Can you give a quick rundown of what the typical path from feature to actually getting deployed looks like?

So the question, I think, is: what is the workflow from "I've created a feature" with a toolset like this? Say there's a task that needs to get to production on a Monday. We pick up the feature in Jira; the developer does the work, writes the test that goes with that work, and commits that to a development branch. Our development servers are built every time there is a commit, so those are being built continuously so that we can see what's going on. One of the things I'm looking for is clients we're going to be able to work with and deploy continuously into production. We don't have that yet; that takes a certain level of maturity. So what we will do is tag a release, and when a release is tagged, our staging server will see that there is a tag and go out and build it onto staging. It'll pull the production database into staging and build there, and that's when we can run regression on it. Then the production step is a manual piece from Jenkins: we can deploy to production where we have access, but you have to manually trigger it and give it the branch that you're going to deploy. Gotcha. Most people are, rightfully so, afraid of when to schedule production deploys. Usually there's some timing around how we take backups, make sure that we're not going to make a mess of things, and time it for when there's testing available. Okay, thanks.

I have a question regarding something you had shown in one slide, where you were showing how many test cases have been executed, how many passed.
Is there any way we can get a feeling for how much we are covering through Behat? For example, we have a site that might have maybe 15 to 50 pages, and maybe 15 pieces of functionality. So is there any way we can say, okay, my Behat suite is covering this many functionalities, just by looking at it, or are we just relying on the tester to write the Behat tests? Because one person can write five test cases, and another person can write 50 test cases.

We're not using that. I think there are some other folks that I've talked to that have used tools for code coverage. These are typically sites that we're building, so we know what the use cases are when we're starting. So we cover all of those use cases, and then we add more as we find places. But I don't have something to run and say, you know, you've covered 80% of the code. Okay.

And how do you manage the configuration for the different environments? Like for dev we have some different settings, and for production we have different settings. Some are in the database, some are in the code. How do we manage them? Because in my case, we are really struggling. Sometimes we have credentials in the database; sometimes they're in the code, which is exposed, and we don't want that. But at the same time, we want to make sure that we don't copy the production configuration onto the stage environment, something like that.

So the question is: how do you manage deploying configuration to different environments? And I think this is around the question of, maybe in development I want to have Devel enabled and I want to hook up to the Authorize.net test gateway. We use an environment variable to pass which environment we're building for; that looks into a manifest that tells us the settings that need to be set for module configurations, and we apply that as a separate script.
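A stripped-down sketch of that environment-variable-plus-manifest idea might look as follows. The manifest here is a plain text table purely for illustration (the speakers mention later that theirs is XML), and the configuration names and values are invented examples.

```shell
#!/bin/sh
# Sketch: select per-environment module settings from a manifest,
# driven by an environment variable. Settings shown are made up.
set -eu

# Manifest lines: "<env> <config-object> <key> <value>"
MANIFEST='dev  system.logging error_level verbose
dev  devel.settings rebuild_theme 1
prod system.logging error_level hide'

# Print the drush commands that would apply one environment's settings.
apply_env_config() {
  env=$1
  printf '%s\n' "$MANIFEST" | while read -r e cfg key val; do
    [ "$e" = "$env" ] || continue
    echo "would run: drush config-set $cfg $key $val -y"
  done
}

# BUILD_ENV would be set by the CI job for each target environment.
apply_env_config "${BUILD_ENV:-dev}"
```

The useful property is that the same codebase deploys everywhere, and only the one environment variable decides which overrides get applied on top.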
And where are you managing those configs? Are they part of your configuration file, your settings.php, or are you just keeping them somewhere on production so that nobody has access to them? Well, it never goes to production, because we're using the two-repository setup. In our actual build, the Drupal is located in a folder below the project root, and above it are all of the configuration files and the tests that run. So in our GitHub repository, all of those are stored in code in an XML file, and then we read it back in. Okay, so it's part of your code which is there in GitHub. Yeah. Yeah, okay. And is it encrypted, or just plain? No. Well, it depends on what information it is. We'll use a secret key store if it has to do with payment data. Most of the time those are internal to Promet; only Promet employees have access to the repository, so we assume that that's a safe environment. And we use Ansible for configuring our servers, so if we do have a change in personnel, or we need to remove or add people from the team, we can make those access changes organization-wide. Okay, thank you.

A quick one. You said you built your own visual regression testing framework. Are there no existing CI tools out there for visual regression testing that you feel work? There are some good tools out there. We used Applitools; we've used some of the others. Our vision for that tool is to have a dashboard where we can see the entire history of the project, and we wanted something that we could control at a lower level, so that we could see: what did it look like last month? What did it look like last week? What did it look like each build? And when we looked at that feature set, we didn't see anything that currently existed. The script is actually pretty simple for what it does.
It's just using a WebDriver, grabbing a screenshot, and comparing it. It doesn't select regions, so sometimes you'll get a slideshow that will fail because it has a different image on it. But it's really great where you have the kind of topography we do, where you have many, many projects; you may have a project you come in and do one thing on, and you just want to see: pass, fail, what happened. Are you gonna open source that? I will share that, or tweet it; I'll have Molly tweet it. But yes, we built on someone else's work, and if I could have found the URL, I would have put it here. So thanks. All right, thanks everyone. Thanks.
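The compare step of a screenshot loop like the one described in that last answer really is small. Here is a rough sketch with made-up paths, using a plain byte comparison in place of a real perceptual image diff; a real setup would first drive a browser via WebDriver to produce the PNGs for each build.

```shell
#!/bin/sh
# Sketch: compare this build's screenshots against the previous build's.
# Byte-identical files pass; anything else is flagged. Paths are made up.
set -eu

compare_build() {
  old_dir=$1; new_dir=$2
  for f in "$new_dir"/*.png; do
    name=$(basename "$f")
    if [ ! -f "$old_dir/$name" ]; then echo "NEW  $name"; continue; fi
    if cmp -s "$old_dir/$name" "$f"; then
      echo "PASS $name"
    else
      echo "DIFF $name"   # a dashboard would show both images side by side
    fi
  done
}

# Throwaway bytes standing in for rendered PNG screenshots.
rm -rf /tmp/vr-demo && mkdir -p /tmp/vr-demo/old /tmp/vr-demo/new
printf 'a' > /tmp/vr-demo/old/home.png;  printf 'a' > /tmp/vr-demo/new/home.png
printf 'b' > /tmp/vr-demo/old/about.png; printf 'c' > /tmp/vr-demo/new/about.png

compare_build /tmp/vr-demo/old /tmp/vr-demo/new
```

Keeping every build's screenshot directory around is what makes the "what did it look like last month" history view possible.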