Yep. Hello, everyone. Thank you for making it to the second day of DrupalSouth. Can you hear me okay? Yes? Okay, great. This is going to be a fast-paced talk. I'm sorry, I have a bit of an accent, so if you don't understand what I'm saying, please raise your hand and I'll repeat. There's quite a lot of material to go through, and I'll do my best to get through it. So: a local, CI and production workflow with DrevOps. I'm going to jump into the problem space straight away. The problem we're going to talk about is project maintenance: how do we run Drupal projects and add DevOps support without killing ourselves and spending lots of overtime? Starting a new project usually looks like this: you scaffold Drupal, download it or use some scaffolding tools, configure some tools, configure CI, configure deployment, and write documentation. If you inherit an existing project, you read the documentation, if someone left it for you. You get a database from production, configure the local environment, configure the tools you want to use, like code linting and testing frameworks, configure CI, and configure deployment. If you know how to do all of it, great. If you don't, well, you end up in this situation where you haven't started delivering value yet, but you already have to do quite a lot of work before you get to any features. So what can you do? How can you minimize the time it takes to set up a project before you even start working on it? This is something I have identified and tried many times. You can do it from scratch every time: you take all the services and binaries, whatever you're using, your Lando, your stacks and so on, and you connect it.
You connect it every time from scratch. Or, if you have worked on a previous project that has those scripts, you can copy them and adapt them for your current project. But what about your fellow developers who don't have access to those previous projects? How would they copy that glue code? Or you can use scaffolding tools, like drupal-scaffold, to start your project and then add additional things on top. Or you can use a project template, which in this case is something that has everything included: you clone it for a new project and start from there. The concept is not new; the implementation of the template is what I'm going to talk about. If you don't have any of this, over time you accumulate sleepless nights. That's what I went through, and then even more sleepless nights. DrevOps is something I came up with over my last two years of participating in different projects, in government and elsewhere. It just happened that I went through about 50 website setups, and at every onboarding I had to do the same things over and over again. There should be a way to automate all of this so you don't have to spend time connecting things. So DrevOps is the name; you either love it or hate it. As you can see, it's the drop and the whale: your Drupal and your Docker, plus some tests, some automation and some love. There's a meme I like: DevOps for Drupal? You mean DrevOps. DevOps plus Drupal equals DrevOps; that's where the name comes from. Right. So what is it? It's actually quite a lot of things, and on the project page there's a huge table explaining every single detail. But in a nutshell, it's a Drupal project scaffolding template: files laid out in a GitHub repository that you can just download.
It also has a Docker stack configuration attached; I'll talk about that later. It has CI configuration and tools. It also has production and hosting integration configuration: I'll get to the types of integration, but currently Acquia and Lagoon from amazee.io are supported, so those configurations are already provided and you don't have to configure anything. The documentation templates are quite an important part: as developers we never have time to document things, so having a documentation template available out of the box is useful. And the most important part is the glue code that keeps it all together; I'll stop on that in a moment. What it is not: it's not a replacement for Lando, DDEV, Docksal or any other local development stack. However, it has full-featured Docker Compose support, so you can develop with just that, and I have been developing sites that way, without Lando, DDEV or Docksal. It's not a custom Docker image repository: the project uses the pretty cool and stable amazee.io Lagoon images, because they are production-grade images. There are hundreds and hundreds of sites powered by those images; they are stable and production-grade, so you can use them too. It's not a hosting provider and not a CI system; it just uses an existing CI system. And it's not a service; it's just a template. The approach: I had this dream a long time ago of identical environments, where your production, your local and your CI are all equal. Before Docker it was VMs and other things, and it was really hard to achieve. With Docker it's quite easy now, because all you basically need is a CI provider, a local host OS and a production host that all support Docker, and these days that is the case with the providers out there.
Just to give a little bit of credibility to what I'm saying: there are already about 30 custom sites set up using this. GovCMS 2.0 and Single Digital Presence from the Department of Premier and Cabinet in Victoria are two other platforms that use a similar approach; it's the same identical-environments idea, and it does work. There are vendors that service government agencies on those platforms, so those vendors, developers like us, can just take this glue code, this whole thing, and use it. I'm going to have a demo of all of this. The main features are these. The first is a pure Docker implementation. This is something I was going for, and it's very important if you have looked into other implementations: some of them, like Lando, Docksal and DDEV, generate a Docker Compose file for you, which, again, you either love or hate. This implementation is pure Docker: if you know how to deal with Docker and Docker Compose, it's all there. I do recommend people just learn Docker Compose; once you learn it, it's a couple of commands and you know how to deal with it. Another feature is flexible configuration using variables. There is some business logic sitting in the scripts, but depending on the type of workflow you're going for, you don't have to change the scripts; you can just flip a couple of variables and it will change things. An example: if you are working on a brand-new site, you don't have a canonical database yet, because you don't have production, so you want to install every time from your profile; you can flip a variable and that will work straight away. Or, if you do have a canonical database, you can pull it from production with a single command, and it will be imported every time you build your project in any environment.
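The install-from-profile versus import-from-database switch can be sketched as a small Bash routine driven by an environment variable. This is a sketch only: the variable name `FRESH_INSTALL` is illustrative, not necessarily the real DrevOps one.

```shell
#!/usr/bin/env bash
# Sketch of variable-driven provisioning. FRESH_INSTALL is a
# hypothetical name; in practice it would come from the .env file.

provision() {
  local fresh="${FRESH_INSTALL:-0}"
  if [ "${fresh}" = "1" ]; then
    # Brand-new site, no canonical database: install from the profile.
    echo "Installing site from profile"
  else
    # Canonical database exists: import the previously downloaded dump.
    echo "Importing downloaded production database"
  fi
}

FRESH_INSTALL=1 provision   # prints "Installing site from profile"
```

Flipping the variable changes the behaviour without touching the script, which is the point: the business logic stays in one place and the workflow is chosen by configuration.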
Another feature is identical commands in all environments, which means that the way you deploy and build your site locally, in CI and in production all uses the same logic. If you want to add another cache rebuild or something like that, you add it in one place, and it will be exactly the same order of commands in every environment, which is quite important. In the early days, five or ten years ago, you would need to keep everything in sync across every environment: everywhere you did a deployment, the order of commands had to be kept in sync, and if it wasn't, you would be trying to find out why something failed in CI but worked locally, or the other way around, or why something was failing in production but passed CI. That was crazy. With identical commands, it's all resolved. Then there is self-tested, which was a very hard thing for me to achieve. Self-tested refers to DrevOps itself: because it's a template, and the whole idea is that you use a template to save onboarding time, you actually want the template to be stable. You don't want a situation where you rely on a template, you don't budget for onboarding because you know it will take you five minutes, and then the template fails because it has bugs. To avoid that, you test the template the same way you would test a website; I'll show that on the next slide. And finally, updatable and versionable. This is an interesting one: if you are on DrevOps, you have a special tag, the DrevOps version, embedded in your template files and glue code, so the next time you want to update to a newer version of DrevOps, you know which version you're on, and you can always refer back to the DrevOps repository. Coming back to self-tested stability: DrevOps itself runs an end-to-end build, test and deployment on every commit.
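The "identical commands in all environments" idea from a moment ago boils down to every environment calling the same routine, so the order of operations can never drift apart. A minimal sketch, with illustrative function and step names rather than DrevOps' actual ones:

```shell
#!/usr/bin/env bash
# Sketch of "identical commands everywhere": the build order is
# defined once, and every environment invokes the same function.

build_site() {
  # The single place where the order of operations lives.
  echo "composer install"
  echo "import database"
  echo "update database"
  echo "import configuration"
  echo "rebuild caches"
}

# Local, CI and production all run the same routine; only the label
# differs. Add a step here once and it appears everywhere.
for env in local ci production; do
  echo "[${env}]"
  build_site
done
```

Because there is only one definition, the "works locally but fails in CI" class of bug caused by diverging command order simply cannot happen.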
What you're looking at here are the three CircleCI jobs provided out of the box: you download your database and potentially cache it; then build and test, the second job, which actually runs parallel containers; and then deployment, which can deploy to one of your destinations. This is how your consumer site runs, the site that uses DrevOps. But DrevOps itself adds more: when I do development on DrevOps, it adds three more jobs. So every time there is a change, not only the DrevOps tests run but also the production ones, so you know for sure the template is working. This has helped: there were issues with the images, and the last one was a curl version that was breaking something, and this stack picked it up, so I was able to debug these things even before the actual end users, the developers, hit them. That was very useful. Now, the nitty-gritty: what are we actually talking about here? What is this template? Since I'm maintaining this myself at this point, there are limitations, but it does work for a full site end to end. In your repository you would typically have your custom modules and theme files. Then the build runs: Composer pulls in core and contrib; if you have a database to import, it imports the database; Grunt runs with the provided configuration so you can compile your JavaScript and Sass assets (for the Gulp lovers, there's going to be Gulp support soon as well, or some other ways). At the end you have a production-like website built in whatever environment you're building in, and this all happens every time you build, in any environment. Then the test part, which you can run manually if you want, or CI runs it: PHP_CodeSniffer checks coding standards for PHP and JavaScript against the Drupal standards.
ESLint does the JavaScript linting, and sass-lint does the Sass linting. Then there is support for SimpleTest, the Drupal test framework; PHPUnit, which is part of Drupal now; and Behat. If you don't know what Behat is, check it out, it's pretty cool: it's behaviour-driven testing, and if you don't know how to write tests, Behat is a very easy way to start. So there is support for that. And then deployment. I work with Acquia and Lagoon a lot, so I support Acquia and Lagoon deployments, and they are quite different; I'll touch on that a bit later. Your CI uses this bottom part to do the deployment, and if you don't want to use CI, you can deploy from your local machine with one command. Another part here is building with Docker, and for those of you who don't know anything about Docker, this part is hard when you're learning it. It's okay if you don't understand how it works; I had to spend lots and lots of hours to learn even this simple thing: when you deal with Docker containers, you actually have two stages, build time and run time. At build time you build on top of existing images, add your application code inside a container, and capture it as an image. At run time, containers start from those images, and the containers have volumes shared between them so that data can flow between containers; this is how it works in production. The only thing that's different locally is that you also have volumes shared between the containers and the host, so that the changes you make while developing are synced into the container. This uses standard Docker mechanisms; there's nothing additional. Next: the workflow of actions, the glue code actions.
What is this? Well, it's basically Bash scripts: something you can run to automate things. As I mentioned before, the behaviour is controlled by variables, and those scripts run inside containers, so you don't run any workflow commands on your host OS. And they are used in all environments: for example, there's no GitLab support for now, but it's planned, and in GitLab it will run exactly the same things as in CircleCI, so you don't have to reconfigure for every different provider. Why Bash scripts? Well, Bash is portable and POSIX-friendly; even an operating system as small as Alpine, which is the industry standard base for Docker images, can run these scripts. It's simple to maintain; "simple" is relative here, but if you have dealt with any systems work, you know how to deal with Bash, and compared to Python or Ruby, which you would have to learn, Bash is similar to PHP in some ways. It can be used through Ahoy or a Makefile, which I'll talk about in a moment. And I've added a Bash cheat sheet into DrevOps itself, so whenever you're using it in your project, you can refer to the cheat sheet; it explains what things are and how to do if-then, for loops and so on. Quite a useful one. Now, the command wrapper; here we're actually moving into developer-experience territory. If you have ever worked with Docker, you know you have to remember a lot of commands, like docker, docker-compose, dash-dash-build; it's crazy, you have to remember all of that. The way around it is to wrap those commands into a wrapper like Ahoy.
Ahoy is similar to Make, but in my opinion a bit better, in the way that its configuration is YAML. You can see the cmd key here, which is the command it will run, plus usage, which documents what the command does; this is a short excerpt of what DrevOps provides. You can also configure an entrypoint: in this case, before every command runs, it can set some variables or read variables from other places. It's a detail, but this is how the whole project is configured, through a .env file, which is another industry standard for providing configuration variables. You have a .env file, and the variables in it are read by Ahoy. And if you're a themer, for example, working inside the theme directory, which is quite deep inside your docroot, you can run ahoy commands from there: it will find the correct file, so you can run it from any directory. Just to compare with Makefiles: a Makefile is another kind of wrapper around other scripts. Earlier versions of DrevOps had Makefile support, and it was very hard to maintain, simply because of how old Make is, especially if you want to do anything complex. You can see it here: even passing one argument to a target means calling subroutines and so on. It's really hard to maintain, so I moved everything to Ahoy. If someone wants to contribute Makefile support back, please do. Now I'm going to fly through the commands very fast. These are the commands you would use as a developer, and what I use every day, along with the other people on the projects I've set up. The first is ahoy build, the command to rule them all.
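Going back to the Ahoy file for a second, here is roughly what a minimal one looks like, written out from a script so the shape is easy to see. This is a sketch: the command names are illustrative, and the real DrevOps file defines many more commands plus an entrypoint that reads .env.

```shell
#!/usr/bin/env bash
# Write out a minimal .ahoy.yml sketch. Commands here are
# illustrative, not the full DrevOps set.
cat > .ahoy.yml <<'EOF'
ahoyapi: v2
commands:
  up:
    usage: Build and start the Docker stack.
    cmd: docker-compose up -d --build
  cli:
    usage: Open a shell in the CLI container.
    cmd: docker-compose exec cli bash
EOF

echo "Defined commands:"
grep -E '^  [a-z]+:' .ahoy.yml   # prints "  up:" and "  cli:"
```

Each entry pairs the actual command (`cmd`) with its documentation (`usage`), so running `ahoy` with no arguments doubles as self-documentation for the project.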
ahoy build is the command that calls the other commands: whenever you check out the project as a developer, you just run ahoy build, and it runs everything it needs to, from zero to one hundred. It gives you your website, done and compiled, something you can work on: it builds images, starts containers, installs development dependencies and installs the site. Then there are commands to start and stop the stack: ahoy up and ahoy down, which are basically aliases for docker-compose up, dash-dash-build, all those long strings of flags. Up and down are where it builds the images and starts or removes the containers, while ahoy start and ahoy stop just start and stop the containers without touching your images. You wouldn't do up and down that regularly, but you would start and stop if you just want to save the state of your stack: stop it, and start it again later. ahoy install-site is another command; it supports a fresh install from a profile or an install from a database, and runs some post-install commands, like configuration import, updating the database and rebuilding the caches. Maybe you have to clear the search index or do some other things; this is the place where you can put all your custom post-install commands. ahoy download-db is another thing that has proven quite useful, because if you've been given a project, especially in an agency environment, before you even start you need to get a database from somewhere, right? If someone has configured it before you, with, say, Acquia Cloud credentials or an FTP username and password, you can set those as variables locally, and then you just run ahoy download-db and the database comes down.
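The up/down versus start/stop distinction above can be sketched as a dry run of the underlying commands. The exact docker-compose flags DrevOps uses may differ; these are typical ones.

```shell
#!/usr/bin/env bash
# Dry-run sketch of what the wrapper commands expand to.
# Flags shown are typical, not necessarily DrevOps' exact ones.

ahoy_up()    { echo "docker-compose up -d --build"; }  # (re)build images, start containers
ahoy_down()  { echo "docker-compose down"; }           # stop and remove containers
ahoy_start() { echo "docker-compose start"; }          # start existing containers, keep images
ahoy_stop()  { echo "docker-compose stop"; }           # stop containers, keep their state

ahoy_up      # prints "docker-compose up -d --build"
```

The design point is that up/down touch images while start/stop only pause and resume containers, which is why start/stop is the cheap day-to-day pair.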
So with download-db, as a developer you don't need to know that you'd otherwise have to go to some UI and download the database; it's all handled for you. The Acquia Cloud integration is actually a script that goes through the Acquia Cloud API, checks what the latest backup available for your production is, and pulls that down without touching your actual production database. That's very powerful, because you never want to touch your production database when you're dealing with backups; if you want the most recent backup, you just run the command and pull it down. Deployments are supported in different ways, and you can have all of them or just one. The first is a webhook: when your CI passes, or when you're ready to deploy, it calls a webhook that you configure underneath. Again, it's a variable: you just say you want a webhook and give it the endpoint to call. There's going to be a demo; I'll show that. Code deployment is a very complex thing, especially with Acquia: if you have worked with Acquia, you know they have their own Git repositories attached to sites. If you do not want to develop in there directly, and you want your GitHub repository with everything in it to be the source repository, with Acquia's as your destination repository, you need to compile your site: install all the assets, clean up and remove the node_modules, some vendor directories and other things, and push the result to Acquia. To do that, there is a package I built separately that does it all for you: it builds this deployment artifact and pushes it to Acquia. I won't touch on it any more, because it's quite an extensive topic by itself. And there is a Docker image option as well.
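The webhook option described above, including the status assertion shown later in the demo, can be sketched like this. The variable names are hypothetical, and the HTTP call is stubbed as an argument so the sketch runs offline; in the real flow the status would come from something like `curl -s -o /dev/null -w '%{http_code}'`.

```shell
#!/usr/bin/env bash
# Sketch of the webhook deployment step. DEPLOY_WEBHOOK_STATUS is a
# hypothetical variable name for the expected HTTP response code.

notify_webhook() {
  local url="$1"
  local status="$2"   # stubbed; normally returned by curl
  local expected="${DEPLOY_WEBHOOK_STATUS:-200}"
  if [ "${status}" = "${expected}" ]; then
    echo "Deployment triggered: ${url} returned ${status}"
  else
    echo "Deployment failed: expected ${expected}, got ${status}" >&2
    return 1
  fi
}

notify_webhook "https://example.com/deploy" 200
```

Asserting on the returned status code is what lets CI fail loudly when the production system did not actually accept the deployment notification.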
Another type of deployment: if your production system supports deploying from a registry, you can capture an image of your project, push it to the registry, and that triggers a deployment on your production. Now the other commands. Lint: you run ahoy lint, and it goes and checks all the different coding standards. There is support for testing: if you want to run a single test, you supply the file name as an argument to the command, and it runs just that test. Why is this important? Because usually, to set all of this up, you need to know how to wire the PHPUnit configuration through the containers and marry it all together, right? That's already handled for you: you just run ahoy test-unit, and whatever unit tests you have written for your custom modules will run. Some utility commands: ahoy cli, if you run it on its own, drops you into the CLI container within the stack, so you can run commands from inside the container; if you run ahoy cli with an argument, say an echo command, it runs that command inside the container without dropping you into the terminal. There's ahoy drush for Drush commands, and ahoy login generates a one-time login link. Other important ones are clean and reset. Clean, if you have built your site and want to strip it back, removes all the vendor directories and node_modules and gives you a pristine state. Reset goes further and resets everything as if you had just checked out the project. So instead of doing this manually every time, or checking out the project into a separate directory to test something, you just run ahoy reset. For the front end, three commands, quickly: ahoy fe, which is front end. Actually, I have ahoy aliased to a, so typing commands gets real fast.
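The ahoy cli behaviour described above, interactive shell with no arguments, one-off command with arguments, follows a simple dispatch pattern. In this sketch `run_in_container` is a stub standing in for something like `docker-compose exec cli`, so it runs anywhere.

```shell
#!/usr/bin/env bash
# Dispatch pattern behind a "cli" wrapper command: no arguments means
# an interactive shell, arguments mean "run this inside the container".

run_in_container() { echo "(in container) $*"; }  # stub for docker-compose exec

cli() {
  if [ "$#" -eq 0 ]; then
    # No arguments: drop into an interactive shell in the container.
    run_in_container "bash"
  else
    # Arguments given: run them as a one-off command and return.
    run_in_container "$@"
  fi
}

cli echo 1   # prints "(in container) echo 1"
cli          # prints "(in container) bash"
```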
So a fe compiles the production-grade front-end assets. fed is front-end development, which does the same but keeps your CSS expanded, and gives you CSS source maps: if you know what source maps are, you can see how your CSS links back to your Sass. And few is front-end watch: it starts a Grunt watch task that looks for changes in your Sass files and updates the browser using livereload. There's quite a lot of technology in there, but it's all handled for you, so you can just use it. Debugging: if you are working with code, some people use the Devel module, but Xdebug is a pretty cool and convenient way to debug. Until recently, debugging anything inside containers was painful, because you had to know how to wire it all together, set some variables and do all that stuff. Well, some pull requests have been merged upstream with the amazee.io folks, so now it's one command. If you want to debug something, you need to put the whole system into debug mode: you run ahoy debug, the containers restart, it takes five minutes, and you're debugging. All the configuration is there; if you're using PhpStorm, you refresh it, it picks the configuration up, and you're debugging straight away. Put a breakpoint, you're done. When you finish debugging, you run the same command you would run normally, the stack restarts in another five or ten seconds, and that's it. Very easy. The next one is doctor. This one came out of user testing: people were trying to use the stack and asking, hey, why isn't it working? What's the problem? You would run a build, Docker would spit out errors, and then what's happening?
What doctor does is run before and after a build and make sure that all the tools you need are installed. For example, if you don't have Docker installed and you run the project right after checking it out, it will tell you: hey, you actually don't have Docker installed; you need to install Docker and some other dependencies. It checks all of that and gives you a report. I won't stop on this any longer. Update is a cool thing: if you have onboarded your project onto DrevOps and a new version comes out, well, this whole thing is not a Composer package, it's a template, just files, right? This command lets you pull down the new files, compare them with yours, and decide which of your overrides to keep. I'm running out of time here, so I'll just quickly mention the amazee.io Lagoon and Acquia Cloud integrations, and talk about the dependencies.io integration. That integration has your composer.lock file assessed, say, every night; if there is a new version of a Drupal module, composer.lock is updated and a pull request is submitted to GitHub. And because you already have CI, the CI tests run. So essentially, a developer comes in in the morning and there's a pull request ready, reviewed, CI passed, and you can just merge it. That's your automatic updates, or half-automatic: if you have another bot to merge the pull requests that pass CI, you can have fully automatic updates. Yesterday someone gave a good example where a bot opened a PR, merged it, and even posted a GIF; bots doing everything themselves. So it depends; this is something that does it. I'll skip through this. Documentation; skipping through this too.
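The doctor check described above essentially boils down to a pre-flight loop over required tools. A minimal sketch, with an illustrative tool list rather than DrevOps' actual checks:

```shell
#!/usr/bin/env bash
# Sketch of a "doctor" pre-flight check: verify required tools exist
# before attempting a build, and report what is missing.

doctor() {
  local missing=0
  for tool in "$@"; do
    if command -v "${tool}" >/dev/null 2>&1; then
      echo "OK: ${tool} is installed"
    else
      echo "MISSING: ${tool} -- please install it before building"
      missing=1
    fi
  done
  return "${missing}"
}

doctor sh   # sh exists on any POSIX system, so this reports OK
```

Running this both before and after a build turns vague "Docker spat out errors" situations into a concrete to-do list for the developer.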
So yes, there is stuff available: README template files, an FAQ, deployment templates, and an onboarding checklist. The onboarding checklist is something that takes quite a lot of time, especially if you're given an old site and you want to bring it up to the modern era and onboard it onto DrevOps; you can maintain the checklist and its progress within the codebase. So if you don't have time, or you're in agency space and get assigned to a different project, someone else can pick it up: you have this checklist and you track it. Demo. Yeah, this one. Everyone gets a demo. Okay, if I haven't been fast enough, this is the time; I'm running out of it. What happens here? I'm going to show you a very quick workflow. This is the site. We go to the source and install: there is an installer script that lets you instantiate a new site, an interactive one. You do need Pygmy; it's a special helper that helps you with Docker, and I'll touch on it, but basically you have to have it to start using these containers with Docker. So this is me starting, acting here as a developer. I want to have a site, Star Wars. It asks me what the settings are, right? And I fill things in; there are defaults available, but I can override them if I want. These are basically tokens. As I said, the whole thing is deliberately dumb, in the way that there's not much logic happening here; there are just about a hundred places where the same things need replacing to make the project nice, with your URLs and other things, and the installer replaces them in the template. It shows an installation summary and asks: do you want to continue? Yes. What happens then is it goes and downloads the latest version of DrevOps, copies the files, replaces the tokens, and installs everything. So here is what happened; sorry, the UI flies by.
You now have all the files, and this is actually PhpStorm; I'm sorry, it's very fast, but you have all the files, plus a special module, the core module, that has some functionality, and you have a theme here as well. In this case the theme is based on Barrio, which is a Bootstrap theme. And there are tests right here: you already have tests, and those tests actually run in CI, so if you want to write new tests, you already have a place to look at for examples. This is me just adding everything to Git. And now comes the first build of the project, which I start by running ahoy build. This is at 20x speed. The first time, Composer has to resolve all the dependencies, and Composer is very slow even without Docker; that takes quite a lot of time, 10 to 15 minutes to resolve everything. Once everything is done, and the front-end assets are built as well, the lock file is produced. You then commit the lock file, right? That's how dependencies are managed, with the module versions pinned: you commit it, and the build is not going to take that long any more. And at the end it gives me a one-time login link and the project information. Jumping into the site: this is your site, running locally, the whole site running locally. Sorry, I know it's too fast. So, composer.lock, we just committed it. Now I'm going to my GitHub and creating the demo project: I had everything locally, and now I have to push it somewhere. Create a project on GitHub, add the Git remotes, push to the master branch, go back to the project and refresh. So this is my site, right, on GitHub. The next step is to plug in CircleCI.
There are plans to automate even this part, but for now I go to CircleCI and click "Log in with GitHub". Because the project already has a CircleCI configuration, you don't have to deal with any of it: just like that, CircleCI is plugged in, the configuration was already there, and three jobs are running. The database job here takes about four minutes, because it has to do the first build of the day, and then it caches the database; the following builds will take ten seconds, because they use the cache, so that job will fly through. What's happening here, I know, fast, is me adding secrets for deployment: a webhook deployment URL and the expected status, which I'll explain. In this case we're deploying via webhook, so we're notifying our production system that CI has passed: when CI passes, the deployment job, the third one here, does the actual deployment. While that's running, I'm checking that the tests produced artifacts, which are screenshots: these are the screenshots the tests produced. The deployment happened here, and we can see its output; there was actually an assertion that the status code returned by the webhook was exactly what we configured, 200. Now we have everything we need. This is the README file provided by the template: it has these nice badges and tells you how to start the stack. So anyone you give this project to, any developer, will have all this documentation: how to work with it, the deployment documentation, the FAQs, separately. That part is finished, and you now have a full-featured system with files, integrations, tools and whatever else you need. Next, I'm just running an example of the ahoy lint command. I probably have to stop it for a sec.
So this is me running `ahoy lint`. I just want to check the coding standards on my project. It's sped up twice, but what happened here is it checked the front-end code and the back-end code and said it's okay. I'm now going to introduce a line that breaks the standards — something's missing. Run it again, and it gives me errors. And it actually failed: the exit code was 1, so it would fail your CI, it would fail everything.

What if you were given a project with all of those problems and you just want to bypass them? You still want to scan, but you don't want anything to fail. There is a way: you can change a variable. This is what I did — I flipped one variable from zero to one, and now it still runs the checks. There's actual output there, but see, the exit code is still zero, which means CI will pass. It will still do the assessment, but it will pass.

Another example of a custom command is running BDD tests. That was too fast, so let me do it again. This is me running `ahoy test-bdd`, which runs the Behat tests and produces screenshots and other things. The tests are already handled, and it's all available for you here — there's quite a lot of wiring going on under the hood.

So that's the end of the demo. There are a couple more slides, and we have five minutes. There are challenges in maintaining DrevOps. My main challenge is that there isn't much of a community around it, so if you're interested in anything or you want to contribute, please do. Another thing about this is — actually, sorry, I'm touching on the next slide. What's next? Support for `composer create-project` — I really want to have that. Better documentation. Introducing accessibility testing as part of the CI workflow, so you have that out of the box. Front-end improvements. Nightly dependency updates in CI, so you don't have to use a third-party dependency-update service.
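The "flip one variable from zero to one" behaviour can be sketched as a thin wrapper around the lint step. `ALLOW_LINT_FAIL` is a made-up variable name for illustration (the actual DrevOps variable is named differently), and the linter itself is stubbed to always report a violation:

```shell
# Sketch: run lint, but let a variable downgrade failures to warnings.
ALLOW_LINT_FAIL=1   # set to 0 to make lint failures fail the build/CI

lint() {
  echo "lint: 1 coding standards violation found"
  return 1  # stub linter that always reports a failure
}

if lint; then
  echo "lint passed"
elif [ "$ALLOW_LINT_FAIL" = "1" ]; then
  echo "lint failed, but failures are allowed - exiting 0 so CI passes"
else
  echo "lint failed - exiting 1 so CI fails" >&2
  exit 1
fi
```

The design choice matters on inherited codebases: you keep the full lint report visible in CI output from day one, then flip the variable back once the backlog of violations is cleaned up.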
More integrations. Pantheon and Platform.sh are platforms I'm not working with daily, so if someone is, please help with those. And more CI providers is something lots of people are interested in — not so much Travis CI, but GitLab CI, and GitHub Actions is a new one.

How can this whole thing help you, finally? If you're a digital agency looking to standardize your operating environment — you basically want all the sites you work on as an agency to have the same setup — you can use this kind of template. If you're a developer looking for best practices, or you just want to know how some things work, you can look into the source code: it's commented, you can see how things are done, and because everything is tested, you know the code there is not stale code — it's actually working code. And if you're a developer without the knowledge, time, or budget to learn all these things, you can just jump on it as well.

All right, some inception thing. If you don't like some of the defaults, you can fork it, update a couple of variables, and have your own clone of DrevOps with your own things — and, most importantly, you would still have all the testing infrastructure working for you, already handled, out of the box.

Thank you so much. I'm very much standing on the shoulders of giants here — the amazing Acquia guys. A separate thanks goes to Salsa Digital — Alfred is here. Salsa has given me the opportunity to trial these things on some of their client sites, and on government projects as well — GovCMS and SDP allowed me to try this too, and I think it was a success in some way. Thank you so much, everyone, for coming and for keeping up with the fast-paced talk.
I think we've got four minutes left if you have any questions.

"Yeah, thanks for the talk. I think we'll probably look at upgrading to this from DevTools, which is what we're still using — I think that was version one. Given it's all open source, I can probably give you access to our Ansible plays for setting up a project in CircleCI, if that helps. More of a comment than a question."

Thank you. Do you have any other comments? Questions?

"Using DevTools, we had the ability to do overrides per project without the need to fork DevTools. Can you still do that with DrevOps?"

The question is that there was a previous incarnation of DrevOps that allowed you to maintain your own custom overrides — for example, if you want to remove some business logic, or you don't want to support some of the linting, or you want to add some other support — so what are the ways to preserve that when you update to the next version of DrevOps? The answer is yes, you can, because every time you run an update, it brings in the template files, and you as a developer have to resolve which of the incoming changes you actually want to accept. Essentially, you go into your merge client — whatever you're using, the CLI or a UI — and you pull in the lines and accept just what you want. That's the way to update. It's still better than doing it manually; it's half automated this way.

You can try all of this by just going to the site and reading a little bit about it. If something doesn't work, I'm going to be around — just come to me, please, and I'll try to help you out, even on your laptops, if you're interested. All right.

"Just one more real quick one. Are you still using Git excludes?"

No.

"You've moved back to .gitignore?"

Yeah, it's not such a thing anymore. Sorry, it was a very narrow question. All right. Thanks a lot.
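The update-and-resolve flow described in the answer can be sketched with plain git. The file name and the "template update" here are simulated for illustration — the real update command in DrevOps overwrites template-managed files, after which you review the diff and keep only the hunks you want:

```shell
# Sketch: a template update overwrites a file you customized; you review the
# diff and decide what to keep. Runs entirely in a throwaway repository.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
printf 'shared line\nmy local override\n' > .ahoy.yml
git add .ahoy.yml
git commit -qm "Project with a local override"
# Simulate the template update rewriting the file:
printf 'shared line\nnew template default\n' > .ahoy.yml
git diff --stat               # review what the update changed
git checkout -- .ahoy.yml     # in this case, reject the template change
grep -q 'my local override' .ahoy.yml && echo "override preserved"
```

In practice you would accept some hunks and reject others (for example with `git add -p` or a visual merge tool) rather than discarding the whole update, but the half-automated shape is the same: the template pushes files in, git shows you the diff, you decide.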