Okay, so thanks for being here. I really appreciate giving these talks about things I've learned over the years, and I like to have this kind of attendance, so thanks a lot. Today's talk is a discussion around automating security updates, a practice I've adopted in my own development workflow over the last couple of years. It's paid a lot of dividends for myself and my team: you don't have to think about security updates, period, unless they break something, which frees up time for more productive work. I'm Albert; I have a small shop, let's say a two-man shop, Dcycle, and we focus on everything around automation: automated unit tests, automated front-end tests, automated upgrades, that kind of thing. I'd like to hear a little more about you. How many of you have experimented with some form of automation in your security update or Drupal update workflow? One, three, four people, okay, cool. How much of your work goes into security updates? I'll give you my answer, before I started using this technique and after. Is it less than 5%? Less than 5%? Okay, so you have a good workflow. Maybe 5 to 10? One person. 10 to 20? Okay, a couple of people. What am I to understand here, that you don't do security updates, or that it takes more than 20% of your time? One or the other? Okay, so it does take some time to do security updates, and we want to minimize that as much as possible. Do you work with Docker locally, mostly all of you? Does anyone not use Docker? Do you use tools like MAMP or Dev Desktop? Okay, a couple of people. Rupali, Matt? Vagrant. Vagrant, okay, another one. And who does automated testing on a regular basis? It can be any kind of testing, not necessarily everything, maybe just a few things. Continuous integration? A couple of people as well.
All right, so that gives me an idea of who you are as we go along. So, a typical Drupal code base, and this is probably a setup that's been around for ten years or more. Acquia uses this, I believe Pantheon does as well: a code base where every single line of Drupal core is committed, all the contrib modules are there at specific versions, your custom code is in there too, along with all the third-party libraries, the Composer files, and even the CSS files generated from SASS. Everything you need to run your site is in the code base. The first thing that can get ugly is patches. There are all kinds of techniques for managing patches; typically you have a patches folder at the root of your code base. When you come into a project and you're asked to update something, you see a patch sitting in that folder, but nothing tells you it was actually applied to the code, and nothing tells you whether patches that were not in that folder were applied. So you still have to go through some manual process to determine whether you're really upgrading what you think you're upgrading, or upgrading some hacked version of something that's going to fail. So the first major pillar of the approach I want to talk about today is a two-tiered process for building and developing sites. On the one hand, you have a recipe for what your site is going to look like; on the other hand, you have your actual site. The recipe might use, for you old-school people, Drush make; you can use Composer; in my case, I like to use Docker, and I'll explain later on why.
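As a concrete illustration of this two-tier split, the repository might look something like the following. This is a hypothetical layout, not the exact structure of any particular project; the directory and file names are assumptions:

```text
repo/                      # what's in Git: the recipe, not the built site
├── Dockerfile             # how to fetch core and contrib, and apply patches
├── docker-compose.yml     # how the containers fit together
├── custom-modules/        # only your own custom code
├── db/starter.sql         # starter database committed to the repo
└── scripts/
    └── deploy             # builds the actual, deployable site from the recipe
```

Drupal core, contrib modules, third-party libraries, and compiled CSS never appear in version control; they exist only in the built result.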
But the idea is to have a recipe that describes your site's assets without actually including those assets in your code base. Composer does this really well, and so do Docker and Drush make. You then need some sort of script to build your local environment from these files, and when you're pushing to production, you need to extract the result of the recipe and push that. Questions on that, or does the idea make sense? Okay, this idea makes sense. All right. So I'll give you a quick demo. I lost my internet access, but I hope it works anyway. I set up a project called the Drupal 8 Starter Kit, which is where I put all of my best practices; I host it on GitHub and use it for all of my new Drupal 8 projects. And it uses exactly this system: a recipe to build your site on one hand, and your site on the other. This particular code base does not contain Drupal core and does not contain the contrib modules, but it still works, because we have a recipe that sets out how to build the Drupal site. The Dockerfile doesn't follow current best practices, I'm sorry about that; it's a really old project that I maintain as I go along. But the fact that it's an old project says a lot, because you can still download it today, run it, and get the latest versions of core and all of the contrib modules. In this particular case I'm using the Drush 8 branch to download everything, and then I build the site using a script later on. You'll notice that I don't specify which versions I'm downloading, so the system that builds the site from this recipe gets the latest versions at all times. We'll get a quick demo of that.
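The kind of recipe being described might look roughly like this. This is a sketch only, assuming a Drush 8 based base image like the one mentioned later in the talk; the module names are examples, not the starter kit's actual list:

```dockerfile
# A recipe, not the site itself: no versions are pinned, so every build
# fetches the latest stable releases, including any security updates.
FROM dcycle/drupal:8

# Drush 8 still supports `drush dl`; module names here are examples only.
RUN drush dl -y admin_toolbar pathauto token

# Custom code is copied in from the repo; everything else is downloaded.
COPY custom-modules /var/www/html/modules/custom
```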
Okay, so I'm going to run a script called scripts/deploy. If you download this and run it, by the way, it's going to work on your computer; the only dependency you need is Docker. You don't need Drush, you don't need MySQL, you don't need anything else. You just run scripts/deploy. Obviously you're security-minded people, so look at the script before running it on your computer, but if you do run it, you'll get something like this. This is the starter kit, and it starts building Drupal: here you see it downloaded everything, and it uses a starter database that's in the Git repo itself to build the project from scratch. You don't need to clone a database from anywhere; everything is self-contained, and everything uses the exact latest versions of everything. That's the first demo I wanted to give you: this idea of splitting the final code that you push to production on the one hand, and the code in your GitHub repo, which is basically a recipe for how to create it, on the other. Whether you use Docker or Composer or something else, this idea is to me a huge time saver for all kinds of reasons. I'll give you another one: when you make changes to code and look at the diff during code review, you don't end up reviewing changes to actual core or contrib files. If you're not doing automated security updates, all you review is a change to the version of the contrib module you're downloading; and if you are doing automated security updates, there's no change to the code base at all. Here this took 17 seconds, which is not too much for a deployment; I've run it before, which is why it's all cached. Now, if I can go and visit this site; I don't have internet access, I believe, but I'll still see if this works. Yeah, there it is.
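The scripts/deploy step just demonstrated could be sketched as something like the following shell script. This is a hedged sketch, not the starter kit's actual script; the image name, compose service name, and database path are all assumptions:

```shell
#!/bin/bash
# deploy.sh -- sketch of a self-contained deploy step; only Docker required.
set -e

pull_base_image() {
  # Always pull: a cached local image must never mask a newer one on
  # Docker Hub that contains last Wednesday's security updates.
  docker pull dcycle/drupal:8
}

deploy() {
  pull_base_image
  # Rebuild the site image from the recipe and start the containers.
  docker-compose up -d --build
  # Seed the site from the starter database committed to the repo,
  # so nothing needs to be cloned from a live environment.
  docker-compose exec -T drupal \
    bash -c 'drush sql-cli < /var/www/db/starter.sql && drush cr'
}
```

The functions are only defined here; a wrapper script would call `deploy` as its single entry point.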
You'll notice that if you download this and run that command, you'll have exactly this site, using the latest versions of everything. There's some dummy content there: dummy article one, "This is a dummy article." That's in your code base too. So: a self-contained code base that separates the recipe from the actual code. All right, a quick word on the advantages of Docker over tools like Composer and Drush make. A Dockerfile is basically a series of command-line commands, and we know the command line already; we don't want to learn Ansible or Chef or whatever else. And Docker is obviously not limited to Drupal, and not limited to PHP, so you can use it for anything and deploy anywhere, as long as Docker is installed. We'll see when we get to continuous integration why that's so important: if your project is self-contained on Docker, the command I ran earlier that created a brand new Drupal site has no dependency other than Docker. You don't need MySQL or PHP or the testing tools or anything else on your computer, so you can take this and run it on any machine that has Docker installed. That's great for continuous integration. Let's talk about the base image. With Docker we normally build our recipe on something like the community Drupal base image, which looks like this; here it is on Docker Hub. We start from drupal:8 and then add all the modules we need, and so on. The problem with the official Drupal base image is that it gets updated with security fixes two or three days after they come out. In some cases that's fine, but for major security updates you want to rebuild maybe an hour or two after they're released.
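A custom base image of the kind described next, the official image plus Drush, rebuilt on a schedule, might be sketched like this. Installation paths and the Drush version are illustrative assumptions, not the actual Dcycle image:

```dockerfile
# Custom base image: official Drupal plus Drush, rebuilt weekly so it
# never lags more than a few days behind a security release.
FROM drupal:8

# The official image ships without Drush, so install it via Composer.
# (Paths and version constraints here are illustrative.)
RUN apt-get update && apt-get install -y git unzip mariadb-client \
 && curl -sS https://getcomposer.org/installer \
      | php -- --install-dir=/usr/local/bin --filename=composer \
 && composer global require drush/drush:^8 \
 && ln -s /root/.composer/vendor/bin/drush /usr/local/bin/drush
```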
So what I would recommend, and what I'm doing, is using my own custom base image, which I build on a Jenkins server every Wednesday, just after the security update window, automatically. I'll show you how I do that in a second. So, back to the... yeah, I'll show you the one I use personally; I'm using another one for a client. We've already split out the recipe from the code; another thing I'd like you to think about is splitting out the base Docker image from your code. The base Docker image can be its own project that gets rebuilt every week, and your code just builds on that foundation. Stop me if anything I'm saying doesn't make sense or needs further clarification. So, another quick demo: a base image I use personally, and a base image I maintain for a client, Stuart healthcare. Here's my Dcycle version of Drupal, which is rebuilt automatically every single week after the security update window. It's based on the community base image, makes sure the code is up to date, and installs Drush as well, because the community Drupal image does not contain Drush, which in my opinion makes it not that useful on its own. If you look at the tags, you can see the automation going on: the last update of my 8-drush9 tag was two days ago, and two days ago was a Wednesday, right after the security update window. I don't have to do anything; this image gets built automatically. How does it get built?
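One way to wire up such a weekly rebuild can be sketched as a declarative Jenkins pipeline. The schedule, image name, and notification address below are assumptions, not the actual job configuration:

```groovy
// Jenkinsfile -- sketch of a weekly base-image rebuild.
pipeline {
  agent any
  triggers {
    // Every Wednesday evening, after the usual Drupal security window
    // ("H" lets Jenkins spread the exact start minute).
    cron('H 20 * * 3')
  }
  stages {
    stage('Build and push image') {
      steps {
        sh 'docker build -t dcycle/drupal:8 .'
        sh 'docker push dcycle/drupal:8'
      }
    }
  }
  post {
    failure {
      // Immediate notification when the image no longer builds.
      mail to: 'me@example.com',
           subject: 'Drupal base image failed to build',
           body: "See ${env.BUILD_URL}"
    }
  }
}
```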
I have a Jenkins server; it's pretty easy nowadays with Docker to set one up. My Docker Drupal project builds the image every Wednesday, and I get notified immediately, say on the 15th of May, if there's any problem building it for whatever reason. So if I'm monitoring this thing, I know right away that the image didn't build, and I go fix it. In general you'll notice that it just works, so I rarely have to think about this stuff; the image gets built and I don't even think about it. I'll show you another project that I maintain for a client, its Docker image; let me set this a bit larger. This project builds from Dcycle Drupal 8, the image I just showed you, which gets rebuilt every week, and on top of that I run everything I need to run. The most important commands install all of the modules I use for the project, and notice that I don't specify versions, so every week when it gets built it gets the latest versions. If it fails, it stops and I notice right away. This client image also gets rebuilt every week; you'll notice a bunch of failures on May 15th, so on May 15th something happened to the code that broke my workflow, but since then I haven't had any problem at all. How does the code for my client's site look? All I need to do is build from that image with a tag, and it always gets the latest one, so I don't need to think about versions of different modules. All right, so one problem with Docker: once your computer has a version of an image in cache, it keeps using it forever until you tell it to stop. Hence, in the scripts/deploy script I showed you earlier, let's look at this one for example, the first step is pulling the latest version of whatever base image I'm using. So the developer process is basically: run
scripts/deploy and start working. If there is a newer version of the image on Docker Hub, it gets it. The important thing here is that it becomes a lot more difficult to not do security updates than to do them: your regular process applies the security updates, and if you don't want them, you have to actively hack around it. Now, a lot of people tell me, "I don't want automated security updates, because I want to test my code before it goes live; I want to be in control of my security updates." And obviously, if you're in control of your updates, you have less chance of unpredictable errors. Here are some of the errors I've encountered. First, Drupal core occasionally changes an API slightly; if a module wasn't using it according to the documentation, that can cause errors, but then it's going to cause my build to fail if I'm under continuous integration. Second, a new dependency on an existing module: with Drush 8 the build fails because you're not downloading the new module. Webform, I think a year or so back, got a new dependency; my script and my Docker image started failing, the error said "unmet dependency," so I added the download. The third kind is the worst: a change in some module that doesn't make the build process or the CI server fail, but breaks your site. I want to talk more about automated testing later on, but to me there's only one response to this last one. For the first two, you get notified in your CI process. For the last one: if your tests are passing and your site breaks, that means you need to add more tests. This has happened to me quite a few times, and I'm not religious about writing tests; I'm pretty lazy, actually, I don't write them that much. But if a failure like this makes it to production, or even staging, I'll write a test to make sure it's the last time I get that failure. I'm going to show you in a few minutes how to set that up.
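The "smoke test" idea discussed next can start as small as a shell script that checks a few URLs respond sanely after every automated update. A minimal sketch, assuming curl is available and using a hypothetical site URL:

```shell
#!/bin/bash
# smoke-test.sh -- minimal smoke test sketch: a handful of URLs must
# return a successful HTTP status after every automated update.

assert_ok() {
  # Pass/fail on a status code: 2xx and 3xx pass, anything else fails.
  local code="$1" url="$2"
  if [ "$code" -ge 200 ] && [ "$code" -lt 400 ]; then
    echo "PASS $url ($code)"
  else
    echo "FAIL $url ($code)" >&2
    return 1
  fi
}

smoke_test() {
  # The URL would be your local dev or staging site, e.g. http://localhost:8080/
  local url="$1"
  local code
  code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  assert_ok "$code" "$url"
}
```

Run it against the front page and a few internal paths; on a CI server, the non-zero exit code is what turns the build red.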
Well, actually, I'm going to show you right now; that was my next slide. End-to-end testing is exactly what I'm talking about. To me there's very little reason to rely on clicking around the website to make sure it works at this point, because there are so many great tools for automated testing, and manual testing is error-prone too: it's an extremely boring process, and most people are not going to follow all the steps. They'll say, "this works, so these other things should work too," and they'll go fast and miss stuff. My recommendation for end-to-end testing is to start with something simple. One thing I've heard very often, and I've been working with testing for several years, is: "what I want to test is so complex that I can't test it; I don't have the experience to write this type of test." My answer is: the reason you don't have the experience to write the complex tests is that you're not writing the simple tests to begin with, because you're telling yourself the simple stuff doesn't need testing. There are two problems with that approach. First, you never gain the experience of writing tests. Second, even the super simple things, like "I click a link on the front page and it should not bring me to a page not found," seem trivial, but once in a while you get these weird updates to Drupal modules or to core that break your routing, and that test will catch it. These super simple tests, what I call smoke tests, are actually quite important. So let's head back over to the starter kit. If you've downloaded it, you can play with this right now: scripts/deploy gives you a brand new site on your computer, and if you link the project to CircleCI, which is a one-click process, CircleCI will run a bunch of tests on the site that you're
deploying. One of those tests is an end-to-end test, and I'll show you what it looks like. Here's the main part: I want to go to /user, enter my credentials, and wait for a selector called nav.tab. Pretty straightforward stuff, and you have the template for it here. What's hard about this is all the scaffolding that goes around it: okay, that file is understandable, I can copy-paste it, but where do I put it? What do I use to run it? What do I need to install on my computer or my CI server to run these tests? And the answer is: nothing. If you take the Drupal 8 starter kit and run the end-to-end tests script, it takes care of everything for you; it does everything in Docker containers, so you have no dependencies. I'll show you that in a second. If you've downloaded this, you can run the scripts' end-to-end tests; I don't have internet access, so I hope this works. The first thing it does is use Drush to update the password of user 1, so that my tests know how to log in, because, don't forget, I want to test internal pages, not only external pages. Then it goes to node 1; where's my... okay, node 1, edit. I want to make sure I can see something, and I want to take a screenshot of it, so that if someone wants to view the artifact later on, they can. And if this doesn't work for whatever reason, say your security update to module XYZ broke this process, it's going to fail, and you'll get a big red box on your continuous integration server. Let's see what it does with the artifacts, which is kind of interesting; oops, sorry. It saves the artifacts of what it does: screenshots. Here are the screenshots, they're PNG files, and they tell you exactly what the test is doing; they show you what the test is seeing. So if the test fails because you're expecting to see, say,
the word "title" there and you're not, well, you can go and see what the test is actually seeing. This type of automated testing removes responsibility from the developers, who find this stuff really boring, and places it on a machine instead. How does this work with CI? Here's my Drupal 8 starter kit continuous integration board, which is CircleCI. CircleCI is not rocket science: you open an account, you associate it with your project, and you're good to go; there's really nothing else to do. If you fork the Drupal 8 starter kit project, you can get going in maybe five minutes. So what does CircleCI do? It runs a bunch of tests. First it creates a new virtual machine which has one thing on it: Docker. And because everything we do has only one dependency, Docker, it can start running tests. It does some linting, it runs some unit tests, all that stuff, and it runs scripts/deploy to make sure we actually manage to download everything and everything works. At the end it runs those end-to-end tests, which are here, and that's where it will fail if, for whatever reason, the module you downloaded doesn't work. And it not only saves screenshots; you can save the state of the DOM as well. I go one step further and use an accessibility tester to make sure I don't have too many errors: in this case I'm telling my accessibility tester I don't want more than 25 errors, and I'll see whatever errors there are. If for whatever reason I have more accessibility errors than the threshold I've set, it fails. So all of those tests are built into my process, and I no longer need to worry about what changes security updates will make to my code that might break my site, because if they do break the
site, that means I need more automated tests. Okay, CI. These are things I talked about earlier; it's super important to always make sure your process includes pulling the latest version of the image, because we know our image is being updated every Wednesday after the security update window, so we can't assume that the cached version on our computer corresponds to the image on Docker Hub. I just showed you the CI stuff, so I'll move on. What happens when you have super critical security issues like Drupalgeddon? It turns out, and I've dealt with Drupalgeddon using this exact process, that Drupalgeddon is just another update. You just do a build, and it's all automated: you know your Jenkins server rebuilt your entire Drupal image, you know scripts/deploy passed, and if your tests pass, you push to production. Using this process, fixing these major security issues is just another update; there's no change to your process. You press the button on your CI server, it updates production, and you're done. That takes a lot of stress away from the development team, who know they don't need to think about this stuff. Obviously, if it's Drupalgeddon, you keep an eye on it a little more than usual to make sure it doesn't fail, but the important thing is that it fits into your regular workflow. To finish off, I want to talk about deploying to non-Docker production hosts. Obviously this is all Docker stuff, but we want to deploy to Pantheon, or to some other host which is not Docker-native. When you build this thing on your local machine or your CI server, test it, and make sure it works, the entire code base exists in your Docker containers. All you need to do is a docker cp from your Docker container to some temporary location, and rsync that, or push it to a Git repo for Acquia or whatever, to deploy
to production. In my view that's a lot preferable to pushing a composer.json file, or that other thing, a Drush make file, to production and building your code there, because it's a lot more airtight: rsyncing files almost never produces errors, and you've already tested those exact files; you push the files you already tested to your production host. I do this with Acquia all the time; we do deployments several times a week, all automated, and we hardly ever think about it. So, conclusions: a few things that, in my opinion anyway, are essential; if you don't do these, your whole setup is going to break if you try to do automated security updates. First, Docker. Other tools like Vagrant might work as well, but you'd need to figure out how to install Vagrant on your CI server, and it's super slow, so I personally can't think of a way other than Docker to have this type of workflow. Second, continuous integration, which is a super cheap way to catch errors fast, build images, and run any automated task you want. CircleCI is free, by the way, for I think fifteen hundred minutes a month or something; I've never actually paid for it. Continuous integration is a no-brainer, because otherwise you don't know when your failures occur. Third, a build step: that idea of splitting out the recipe, the Dockerfile or whatever it is, on the one hand, and the actual code that runs on your machine on the other. And the last thing, which I think is also a no-brainer, is automated tests. They can be the simplest thing; you can have just one test that makes sure you can deploy, but run your tests on your CI server, and as your site gets more mission-critical, add more tests. Start with one simple test just to get into the habit of testing. That's pretty much it. I actually tried to keep it a bit short, because when I present these ideas I often get lots of questions. You seem like a tired
bunch, but at the same time you might still have some questions, and there's no question too small, really. So, go ahead. [Question: At what point do you run update.php in your production process?] Okay, that's a good point, actually. When I'm deploying to a non-Docker production host, there's a deployment script that I use, which basically rsyncs everything from my Docker host to my production, or rather my staging server; you want to go through staging, obviously. Step two is to run your updates with drush updb, and you also run drush cim, which imports your new configuration. I'll actually show you; it's a really good question. Here's an example of an update script for production that I use for Acquia. I'll run drush cr to clear my caches, this is a really simple one, then drush updb, and then drush cim. What does that do? Clearing the cache clears the cache; updb runs any updates that your code might have for your database schemas; and drush cim is config import, which takes all the new fields and content types and views and things that you built and moves them into production. After that I run another script, a login script, to log into the staging site so you can click around. So that's an example of a script I use to update my production server; it's super simple, and it can get a bit more complex. You can come see me later if you want to know more. Once in a while you run into issues; cron can fail for whatever reason, so on some of my sites I run cron as part of the procedure on the staging site, so that if cron fails, I know right away. Sometimes you need that, sometimes you don't. There's also drush entity-update; you'll need that once in a while as well. So depending on your site, you define your own update script; that's the way I do it. Every time you deploy, you run
that script? Exactly, exactly. On non-Docker instances; in your case production is never Docker, and in my case production is never Docker on client sites either. On experimental sites I use myself, I have Kubernetes or Docker instances, not in production, basically experimental, in which case I'd run a script similar to that, except there'd be no rsync: you'd push your Docker image, and then you'd get your production site to pull the latest version of that image. It's a bit of a different process. Who here runs Docker in production? One person; we can talk about that, because I'm not there yet with Docker in production, so I'd be very happy to hear more about how you do that. Okay, we'll talk more. All right, a good question from the back there. [Question: If you're doing another build for your production site, how do you handle the configuration that's supposed to be excluded?] You mean like development configuration versus production configuration? Right, sometimes some configuration shouldn't be in production. That's actually a really good question. Have you used Config Split before? Yeah? Okay, so you can use Config Split to decide which configuration files make it to production. I'm going to show you one other trick I use when I deploy; let me see if I can find it real quick, because that's a really good question. One second... okay, here it is. In my case, in this particular case, I didn't use Config Split. This is maybe a little advanced, but I still want to mention it; if you don't understand the code, don't worry about it. The idea is that instead of exporting everything, I know that approximately 100% of my clients don't want webforms to be configuration, yet they are. So this is something I use all the
time: when I export my config locally, I exclude all of the webforms, throw them out, then I export the existing webforms from the existing project, combine the two together, and re-import everything. It's a bit of a hack, but I've been using it for years and it works perfectly. Once in a while a client says, "I'd like to build some views locally," so I can say, okay, fine: all of the views that should be configuration must be prefixed with "in_code_", and any view without that prefix is not going to be overwritten. So you can use Config Split, or you can use this type of scenario. I really like scripting, so this is the type of thing I feel comfortable with, but Config Split is obviously another possibility, as long as you can make it work from the command line. I've used Config Split as well, and it's fine. [Question: When you're doing updates, how do you know if patches are no longer needed, like when they get committed to a module?] Okay, I'll answer that; it's a really good question, actually. It's funny, because that was the initial thing I was thinking I'd make a talk about, patches, and I completely forgot to talk about it. So I'll head back to the Dockerfile I showed you earlier. This is a Dockerfile, which is a recipe to build an image; we're downloading all of our modules, and at the very end, all of this... if you're wondering, by the way, why I'm doing it all on one line, it's to make the Docker image a lot smaller; it does make it a bit hard to read, though. Take this Field Group patch here, for example; look at these four lines. I'm basically saying: Field Group, in this particular project, you know that
the official release does not work for me; I need this modification, this patch. So I download it with curl, apply it with patch, which changes that module, then delete the patch file and move on to my next patch, and so on. What I understand from your question is: what happens when those patches either no longer apply, or get merged in, or there's no version of the patch for a new release? There are two different questions there. The first: if it fails for whatever reason, because it's been merged or it no longer applies because the underlying code changed, this line is going to fail, and I'm going to see it on my CI server. If you're running Jenkins, your console output is going to be red, and you'll see the problem down here: it will say the patch no longer applies. And that's actually a really good point, because as part of your workflow as a developer, it becomes your responsibility to say, okay, the Webform patch or whatever is failing; it's your responsibility to go to the community, to Drupal.org, look at the issue, understand why it no longer applies, debug it, submit a new version, and so on. And what often happens is that because I depend on this process, I'm very often among the first people to realize a patch no longer applies, so I have a lot of activity on Drupal.org where I'm saying, okay, this no longer applies, here, make this change. You learn a lot about the modules and the underlying technology when you have to deal with that, and to me that's super positive, but it is time-consuming, so you have to be aware of that if you use this technique. Does that
answer your question? "Somewhat, yeah. I was hoping for something more automatic, so I wouldn't have to go through twenty-five comments on an issue to determine whether a patch is really still needed."

Okay, how about this: let's say your patch fails for whatever reason and you don't want to deal with it, but it was passing on version 3.21. At that point you can go into your Composer file and say: for Metatag, I no longer care about security updates, I want to stick to 3.21, because I don't want to figure out why the patch no longer applies. You can do that and offload the responsibility downstream; if you're in a rush for a deployment, that's an option too, for sure.

"When is Composer run in the Dockerfile?"

Oh, sorry, I didn't see that. Composer runs to pull in the massive pile of stuff you need to actually run the site; I do it in the Docker build, and if it fails, I get notified.

"So you are using Composer to fetch the modules, not Drush?"

I'm using Composer in that example because the base image tag ships Drush 9. But since you ask: Drush 9 no longer supports drush dl, while Drush 8 does. If you look at my starter kit, I haven't upgraded to Drush 9 yet; I'm using the 8 tag, which ships Drush 8 and therefore supports drush dl, and you can see here that I'm using drush dl. So you can use whichever tag you want: Drush 9 with Composer, or Drush 8 with drush dl.

"About the patches, why are you doing curl in a shell routine rather than declaring them in something like a composer file?"

I just like the command line. You can do whatever you want; if you prefer to keep your patches in a composer file, that works too. Honestly, I don't like Composer. I know, I know, I'm the only one in the world, but it kind of bugs me; I just don't feel it.
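To illustrate the Drush 8 variant, here is a sketch of a Dockerfile that downloads a pinned contrib release with `drush dl` at build time. The base image name and the module version are placeholders for illustration, not the speaker's actual starter kit.

```dockerfile
# Hypothetical base image tag shipping Drupal 8 and Drush 8.
FROM example/drupal-drush:8

# Drush 8 still supports `drush dl`, so a contrib module can be fetched
# at build time, pinned to a known release (Drush 9 dropped this command).
RUN drush dl metatag-8.x-1.7 -y --destination=/var/www/html/modules/contrib
```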
"You're not the only one! You're not the only one."

Okay, so it's not just me, good to know. To be honest, I'll use Composer if I have to, but I don't see why you wouldn't just use curl directly from the command line; I don't see any problem with that. You've probably been bitten by it at some point too.

"Can I see your directory structure? Where do you keep your Dockerfile?"

Yeah, sure. You can obviously download all of this, but I'll show you the directory structure; let me make it a bit larger, because I know that when I'm sitting in the back and things are small, I don't like it. Okay, so here it is. The CircleCI folder is all of the continuous integration stuff, and it's super simple: it basically just runs one command on a fresh VM with Docker installed. Then you have all the Drupal stuff, for example the custom modules. And notice the directory structure: drupal/custom-modules. That's not a Drupal directory structure, so how does it become one? Well, this is Docker: in the docker-compose file I say, take my local drupal/custom-modules directory and map it to the real place it should live inside the container. That's how I set it up, but you can lay out your directories however feels best to you and just map things in your docker-compose file. The most important thing here is the scripts folder, which contains all of the different scripts you might want to use: exporting the configuration from your local container, your end-to-end tests, your deployment scripts, your unit tests, and so on.

"Do you ever need to pull config from production, for example the webforms you were describing?"

Well, what I would do is pull the database.

"So you use the database, and then you run a configuration export locally against the loaded database?"
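The directory mapping described a moment ago could look like this in a docker-compose file; the service name, image, and paths here are illustrative, not the speaker's actual setup.

```yaml
services:
  drupal:
    image: example/drupal:8
    volumes:
      # The repo layout is whatever you like; the mappings put each
      # folder where Drupal expects it inside the container.
      - ./drupal/custom-modules:/var/www/html/modules/custom
      - ./scripts:/scripts
```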
Yeah, exactly, although in my case I never actually want to have those webforms as config; I treat them as database-only, because that's really what they are. So everything else is defined as YAML and Docker and gets pushed up, while content comes from the database: I'm just dumping that and loading it locally into Docker. And I just want to show you something real quick here: I have a get-database-from-stage script, which gets my database from staging, not production. I don't want to touch production; I don't want these intensive database exports running against production.

"At least for this client, staging is good enough?"

Exactly. What I like about this approach, where everything is scripted, is that you do whatever is right for you. The tool is not telling you what to do; it's more an idea of a process. The point is that, whatever the logic is, you're not doing it by hand: you automate it.

"Maybe I'm just a bit confused; hopefully you can help me. From development, with this script, you push to production, right?"

Well, I would normally push to staging, but yes, from development towards production.

"So why are you taking this approach instead of, I don't know if it's a best practice, just using Git and merging the changes, at least for the configuration? I mean Git instead of rsync, if you want to move things to production."

I actually use Git in a lot of projects as well. Acquia, for example, requires you to use Git: Acquia gives you a Git repo, so you don't have a choice. You export everything from your local container to the Acquia Git repo, run a git tag, and push it to Acquia, and you don't really have to think about it.
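A sketch of what such a get-database-from-stage script could look like. The `@stage` Drush alias is hypothetical, and `drush` itself is mocked with a shell function here so the sketch runs anywhere; in a real project you would delete the mock and define a real staging alias.

```shell
set -e
# Mock of drush so this sketch is self-contained; remove this function
# and configure a real @stage alias in an actual project.
drush() { echo "-- pretend SQL dump ($*)"; }

drush @stage sql-dump > stage.sql   # export from staging, never production
drush sql-cli < stage.sql           # load into the local Docker container
echo "stage database loaded"
```

The design point is the one made above: the script encodes the team's rule (staging, never production) so nobody has to remember it by hand.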
It's all done automatically: you never actually touch that Git repo yourself, but it does use Git. Other hosts will accept rsync.

All right, let me just check the time. I'll take questions, for sure. If anyone is anxious to leave and get coffee, I won't be offended, because the talk is officially over, but for those of you who want to stay, I'll take the remaining questions with pleasure. Over here.

"Would this scale up in production? With a really huge database, migrating the database around like this seems like it could be a problem."

Two answers to that. First of all, yes: I use it with a two-gigabyte database with no issue whatsoever. Second, I tend not to use the production database all that much; I tend to reproduce the functionality I need in a starter database. The idea is that when I deploy a new content type, XYZ, I also deploy, at the same time, examples of what XYZ nodes would look like: I put those in my local database and then export them to my local Git repo. So I always have this starter database containing data that is representative of my production database, which developers can play around with, and which I can run automated tests and accessibility tests against. So I don't actually clone the production database all that often, but it does work fine; in my experience it has scaled up very well.

"I was going to say that BLT does something sort of similar, where you have a source repo and what it calls an artifact repo. I've found those terms really useful when you're trying to refer to the one repo that looks nothing like a production site, the source repo, and the one that does actually look like a deployable Drupal site, the artifact repo."
Yeah, that's true: it's built as an artifact in BLT. My understanding, and I haven't used it, is that BLT is a product that's Acquia-specific, whereas I'm trying to build something we can use anywhere. But that's a really good point about terminology: "artifact" is exactly the term used when we talk about builds, those terms are very standard, and I should have used them in my talk. The source is really your recipe, and the artifact is the result. One of the reasons I perhaps haven't used that terminology is that, for people who aren't coming from a background where they've thought about this stuff, it can sound a bit jargony in some instances, but it's a good idea to get used to using those terms. Absolutely.

All right, I'll take a couple more questions. And again, I really don't mind if you head out; I won't be offended.

"Do you use something like Stage File Proxy for assets?"

Yep, I use Stage File Proxy, and it's really good: you can have, say, a gigabyte of files, and I don't want to download them, so Stage File Proxy works perfectly. Absolutely.

"So what's the part of all this that you haven't managed to automate?"

Human beings are hard to automate; there's my answer. Human beings who come to an agile sprint planning and then call you the next day with five new tickets, and you're like... those human beings. Those human beings. Thank you.