We're doing continuous integration for teams. It's probably obvious, hopefully, from the slide here. But we're gonna go ahead and get started. My name's Drew Gordon. I am the Director of Agency and Community Outreach at Pantheon. If you're not already familiar with Pantheon, we are the best Drupal-optimized hosting on the planet. And we combine that with some really great tools to make teams more effective. If you wanna hear more about that, you can come down — we've got a demo booth, T-shirts. And you can get ahold of me these ways: that's me on the Twitter, and that's my email. Hey, I'm Rob Bayliss. I'm the CTO at Last Call Media. If you haven't heard of Last Call Media, we're a small Drupal agency in Western Massachusetts. Personally, I spend a lot of time working with organizations on their process and their development workflow. Cool, and for those of you just coming in, it's okay. It's a friendly crowd. There are a couple of seats in the front, so make your way in, no worries. So, we're talking about continuous integration, and just to sort of level set: continuous integration is not something that was invented with the web. Continuous integration is a computer science best practice that has evolved over many years. Relatively recently, as a community of practitioners, we picked up continuous integration and said we should do this for websites. And in the context of websites, as Josh Koenig says, it's really the process of taking your live content and configuration and combining it with the latest dev code — the code that's in process — putting those together quickly and fearlessly, and being able to see the results and know what's gonna happen if you actually push that stuff live. So that's really all it is. We have lots and lots of jargon words, but when we say continuous integration, that's really what it's all about. And continuous integration assumes a couple of things.
One of those is that there's some time spent compiling or building. Again, this has been common in computer science for a long time. And those things take a lot of time. As XKCD reminds us, there's a delay between you going ahead and hitting something and the computer giving you back the results. And they capture it quite well: there's some downtime there — time in which you're able to legitimately claim you're working while sword fighting, because the computers are doing the work for you. You might choose to spend your time talking to your project manager or catching up on Slack or other things, but that's also legit. So, I just wanna talk a little bit about how we got here, because this is not the end of the evolution of continuous integration and all the tools and the things that we're inventing to make our jobs easier. Just looking back a ways: once upon a time, websites were made of very simple things. HTML, CSS, some JavaScript, and some assets. That's where we started in the 90s. We were building sites that are just these things. This was all of the layers. And those sites might look like this. For those of you who didn't see Todd Nienkerk's presentation earlier — apparently he also featured this site. This site still exists. It's called Space Jam. For those who remember the web of the 90s, it was a glorious time of doing really interesting things with images and whatnot. Anyways, this is an old website. Still works. HTML, CSS, JavaScript, some assets, et cetera. However, when you are managing a site that has a lot of those things, you start running into problems, right? And so Rasmus Lerdorf invented PHP to account for some of those things. It was basically a tool for helping process forms, as well as to do some templating, right? And PHP then evolved to help build all of these other things. And that sort of kicked off the development of content management systems as a thing.
So with PHP, the content management system is simply PHP talking to a database. It's got some source CSS, some source JavaScript, some source assets; it might resize those, might optimize the JavaScript, et cetera. And the content management system is then doing a build for us. It's a layer of abstraction that is managing all this stuff for us. And it's kind of doing it on demand, though, right? There might be a caching layer there. But this is the start of the journey that we're talking about today, really. It goes all the way back to the idea of PHP as a thing, because we add more tools to make our lives easier. So Drush Make, Sass, CoffeeScript, and other things like that get added to our tool belt. And we start adding yet another layer. So you have a Git repo where you have your Drush makefile, custom code, your Sass is happening, CoffeeScript is there, other things as well perhaps. But it starts to cause some confusion. And for those of us who maybe lived through this, philosophically you run into some questions you need to answer, like: who owns the CSS, right? So you're working with Sass because it's helpful — just better than raw CSS for many reasons. Are you storing the artifact that is the compiled CSS? And what if I use a different build tool than Rob? Or somebody else on the team's got something slightly different? You start getting into weird spots, like: how do we handle this as a team? What are our rules and whatnot? And I think philosophically we arrive at the spot where what we're trying to do with version control is capture the knowledge that is required to build the system. That's the crucial thing that we're trying to do. And then you let the robots take it from there — the systems take it from there and use that knowledge and go ahead and create the system after that. That, for example, is what Composer does.
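To make that concrete, here's a minimal sketch of what capturing that knowledge declaratively looks like in a composer.json — the module and version constraints below are purely illustrative, not from any project in this talk:

```json
{
    "require": {
        "drupal/core": "^8.3",
        "drupal/pathauto": "^1.0"
    }
}
```

Nothing compiled or downloaded needs to be committed; running `composer install` regenerates all of it from a file like this.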
So we add tools like Composer and Bower and npm to manage these dependencies. So you can simply declare: I want Drupal 8. I want this module. I want this version of this. And then the system takes it from there. Again, this extends the chain a little further. So now we've got local development talking to a Git repo. There's a build tool somewhere — the order might be reversed, for example, in your particular setting — but that then creates a CMS, and eventually that builds a webpage. And it can get a little overwhelming, though. We've already had a lot of things that we've had to learn to get to this point in web development, and we just added a lot more. And so we're gonna be showing a lot of code, and we've got a Git repo that has a lot of sample stuff, and you'll be able to check that out. For those of you who are just sort of early into continuous integration, or maybe taking baby steps, or maybe you've got a process that you're hoping to improve, it's okay. Because as Marie Curie reminds us, nothing in life is to be feared; it is only to be understood. So again, we'll be showing some things here. In order to understand them, sometimes it's helpful to just take a step back. Why are we doing all this again? That Space Jam site looks pretty good — could we just get back to the basics? So what does continuous integration really allow us? There are at least two ways to think about this, I think. For your clients — for somebody who owns a website — a good, reliable continuous integration practice makes frequent updates possible at much lower risk. And that means work can get done faster and your budget can go further. As someone who builds websites for others, a good process gives you something you can share across multiple projects, a chance to fix problems permanently. So for example, as an agency you decide: look, we just wanna make sure of something as simple as code styling — always following code style.
We're gonna build a process that always checks for that. It's a nice, easy sort of baby step into this whole thing. Because what you're doing is reflected in your clients' confidence, and in the speed and agility of feature development and other things like that, the word of mouth goes up, their trust goes up. That's really valuable for anyone who builds websites for a living, because ultimately we get future projects because our clients recommend us to others and they trust us with their future work. And so a good CI system is really foundational and a real long-term competitive advantage. It also, in some ways, allows us all to take on riskier projects. Because if you know you have a good safety net that's gonna be there — sometimes you get a project like, ugh, I'm inheriting a site that sort of has some code smell to it, or there's something that's not quite in our comfort area but I would like us to go this direction. Having a set of tools that helps you test your work and, you know, deploy with confidence can allow you to take on a project like that, be more confident, and just open up that new line of business, or meet that new client and have the confidence to be able to do that. Because again, in a complex ecosystem, it's not about being the strongest or the smartest or the most talented dev; what you need in order to survive and grow and thrive in the real world is to be responsive to change — to embrace it, get better, and improve. So we're gonna be showing a bunch of code here, and this is sort of the scaffold that we're painting on top of. This happens to be the normal Pantheon workflow. You have a dev-test-live process built in, and then the chance to do different kinds of branches that feed into that. And on top of this, you can do some really cool stuff.
Yeah, so I just wanna talk about the workflow that we use at Last Call and how that's slightly different from the normal Pantheon workflow. As Drew mentioned, one of the really nice features of Pantheon is having this multi-dev available to us. How many of us have used Pantheon multi-dev? How many of us love it? Who hasn't used it? Cool. Yeah, we do too. It's really nice to be able to create an instance of a site that's totally away from the production pipeline. You don't have to have any crossover, or commit anything to your master branch that you don't want to make it into production. So it's a really powerful idea, but when you combine it with CI tools, you really need to be able to integrate with an external repository. So for us, what that means — this is our feature workflow. If we have a new feature we're building, or even a bug fix, it's gonna go into a feature branch. It's gonna start with a developer making the change and then pushing it up to GitHub. We have a tool called CircleCI, which does our continuous integration for us. And ultimately, we're gonna be sending what's called an artifact off to Pantheon. That's gonna go up to a multi-dev instance where our stakeholders can review it. And again, one of the really powerful tools here is being able to take that database and files from live back to the multi-dev, in order to display the most recent stuff for review, and also back to CircleCI during the build. When we push to production, it's gonna look pretty similar; we're just gonna end up going to the master branch on Pantheon rather than to a feature branch. So really, what you end up with here is two repositories, and this is a key idea to this whole CI concept in Drupal. Your source repository — what we use is GitHub — is gonna contain only the custom stuff. So that would be the site-specific modules, themes, whatever, and then the knowledge of how to get all the other stuff.
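As a sketch of what "only the custom stuff" means in practice, a source repo's .gitignore under this model might look something like the following — the paths assume a common Composer-based Drupal layout, and are hypothetical rather than Last Call's actual configuration:

```
# Fetched by the build tool, never committed to the source repo:
web/core/
web/modules/contrib/
web/themes/contrib/
vendor/

# Compiled artifacts exist only in the Pantheon-side repo:
web/themes/custom/*/css/
```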
So Drupal core isn't even included in our source repositories. But we have a composer.json that specifies the version of Drupal core that we need, and CircleCI is able to pull that in for us. What gets sent up to Pantheon, on the other hand, is the full site — all of the code you're gonna need to run it. So your compiled assets, your core, your contrib stuff, plus any PHP libraries. And this really answers the question of who owns the CSS, right? It's CircleCI; our build tool is in charge of creating this for us. So I just wanna take a look at a fictional example. This is one of our real clients, Albright-Knox Art Gallery in Buffalo, New York, and they have a beautiful-looking Drupal 8 site that we took on not too long ago. We didn't actually do the build on this site, and so it's sort of unique in that we're taking on a project that we are not super familiar with. And of course, one way to mitigate the risk of taking on a new project like that is to add testing around it. So let's imagine that this client has a change that they'd like to make. They wanna add a border below their exhibitions block on their homepage. Pretty simple change, really. This is what they want it to look like. Kind of tough to see that border, but it's there. Yeah, and it's an improvement. Yeah, and good design is subtle, so whatever. I'm a developer, by the way, and not a designer. So anyway, the first step is gonna be creating a GitHub issue for it, and the issue outlines what the task is — I'm sure we're all familiar with them. So we put in it a picture of what we want it to look like, and sort of a spec on what needs to change. As a developer, we're gonna go into the codebase. We're gonna make a new CSS rule. We're gonna add a border to the bottom of that block title. When we save the file, we're gonna have our Sass recompile for us. And then locally, the next step, of course, is looking at the change.
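The change itself might be a rule along these lines. The selector here is a hypothetical sketch, not the project's real markup — and it's deliberately too broad, which is what comes back to bite us in a minute:

```scss
// Hypothetical sketch of the change: this selector hits every
// block title on the site, not just the exhibitions block.
.block-title {
  border-bottom: 1px solid #ccc;
}
```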
So if we reload the browser, we can see that there's that border there, and everything looks great, so let's ship it. Because we're using feature branches, we're not gonna commit to master. We check out a new feature branch that's labeled after the issue, and then we're gonna add our changes and push them up. This lets us maintain that isolation, and over on the GitHub side, what we're gonna end up with is a pull request. I suspect most of us have worked with pull requests at one point or another, but we really love them as a way to isolate code and also to allow non-technical stakeholders to execute merges, if they need to. So in this case, we're just creating that pull request. We're gonna leave a little description. And creating this pull request is going to run our continuous integration suite for us. So we create it. And in this case, because this is a magical demo, it instantly failed. It wouldn't do this in real life, but it would fail eventually. And so — That would be the sword fighting step right there. Yes, right, we missed a whole lot of sword fighting. But then when we take a look at this build, we can see what failed. And so as the developer, we're like, oh no, there's a design deviation. Circle says there's a design deviation — what does that mean? So we can go to this artifacts tab. We can open up the report for a tool called BackstopJS. This is a visual regression testing tool that basically snapshots the site. So on the left, what you're seeing is the reference, or what it's supposed to look like. In the middle, you're seeing what it actually looks like. And then on the right is the overlay. If we open this up and slide this slider, we can see the difference in real time. So we can see that we have borders that we didn't plan on having. So as a developer, I now have all the tools that I need to fix this. I know that my CSS selector was way too broad, and it's pretty easy to fix.
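Under the hood, BackstopJS is driven by a config file of viewports and scenarios. A trimmed sketch might look like this — the labels, URL, and selector are illustrative, not the actual Albright-Knox configuration:

```json
{
  "viewports": [
    { "label": "phone",   "width": 320,  "height": 480 },
    { "label": "tablet",  "width": 768,  "height": 1024 },
    { "label": "desktop", "width": 1280, "height": 1024 }
  ],
  "scenarios": [
    {
      "label": "Homepage footer block",
      "url": "http://localhost:8080/",
      "selectors": [".footer-block"]
    }
  ]
}
```

Backstop captures each selector at each viewport and diffs the result against committed reference images, which is exactly the report we just saw.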
So really, what we just watched was a five-, ten-minute turnaround on something that may have taken a day: for a QA person to come back and look at this ticket if we sent it up to a development environment somewhere, and then they would have to write up what happened, what they expected to see, what they actually saw. And we're turning that around in five minutes. So when we talk about agility and being able to adapt, that's really powerful — having that fast feedback loop. So really, all we're doing here is pushing the work to the machines. We're taking that super boring stuff away from the developer and away from the QA person and just saying: okay, we know that this footer block, which is what we were seeing there, is prone to breakage on this site. So let's snapshot it, let's add a test, and then it'll always work the way we expect it to. Before I really dive in on the build — I am gonna get somewhat technical — I wanna talk about a tool that we use internally. We use Gulp, and this is a tool that a lot of agencies use. The way we use it is maybe slightly different. We have a set of top-level tasks — install, check, build, test, and then watch — that are the same across all of our projects. And then down below those we have individual tasks. So install is actually composer install and bower install. Check is actually those four things. And what this means is that when we switch back and forth between projects, we can have the same top-level Gulp commands for everything we do. And then our CI process also has those same top-level commands. So this is a really nice tool to enable that kind of fast context switching between projects, and also to keep our CI builds more or less consistent between all the projects. It also wraps up a whole lot of complexity. So as a developer, I'm not really gonna need to know what the PHPUnit configuration is. I'm just gonna need to know where to put my tests and how to run them — just Gulp tasks.
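A gulpfile wired up this way might look roughly like the following. This is a hypothetical sketch using the Gulp 4 API, not Last Call's actual file; the sub-task bodies are stubbed out with shell calls for illustration:

```javascript
// Hypothetical sketch of the "same top-level tasks on every project" pattern.
const gulp = require('gulp');
const { execSync } = require('child_process');

// Sub-tasks wrap the real tools; each project can swap these out
// without changing the top-level entry points below.
const composerInstall = (done) => { execSync('composer install'); done(); };
const bowerInstall = (done) => { execSync('bower install'); done(); };
const eslint = (done) => { execSync('npx eslint js/'); done(); };
const phpcs = (done) => { execSync('vendor/bin/phpcs'); done(); };

// Top-level entry points shared by every project and by CI:
gulp.task('install', gulp.series(composerInstall, bowerInstall));
gulp.task('check', gulp.parallel(eslint, phpcs));
```

The payoff is that `gulp install`, `gulp check`, `gulp build`, and `gulp test` mean the same thing everywhere, so both developers and the CI config never need project-specific knowledge.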
So it's a nice system, and we're gonna see it a lot in the Circle configuration. I should just point out here that if you're having trouble reading this, or just wanna come back to it later, this is all available in a public repository, which we're gonna link to multiple times. All of this stuff is open source, and it's sort of the platform that we use to get started with — and you can, too, if you'd like. So we're looking at the circle.yml file, and if you've used a system like Jenkins or Travis CI, you're probably familiar with this kind of concept. You're configuring the build in a file, and what we're really seeing here is the dependency section. So this is where we start pulling in all of our dependencies. The two that we really care about here are yarn install and gulp install. Yarn install is actually gonna pull down Gulp and all of those dependencies, and then gulp install does everything else. Really simple process. The next step in the build is gonna be setting up a Drupal site, and to do that, we all know we need a database. So we set up the settings.php, and then we reach out to Pantheon and grab a snapshot of the live database. If you had sanitizing requirements or something like that, you could grab it from somewhere else too — pretty easy. And then we have a Drupal Console command — and thanks, Jesus, for all your work — this is a chain command. Yeah, let's give him a hand. And this is nice because it allows us to specify, in a YAML file, all of the Drupal Console commands that we want to run after the database is imported. So that's things like configuration import, update the database, that kind of thing. So it's really nice as far as being able to share code between projects. Then we're gonna proceed into tests, and really what that means is: run the build steps, then run our static code checks, then run our full-on tests. And we're gonna see all this in action in a moment.
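Pulling those phases together, the shape of a circle.yml like the one on screen is roughly this — CircleCI 1.0-style syntax, with hypothetical helper-script names standing in for the real deployment steps:

```yaml
# Rough sketch of the build phases described in the talk, not the actual file.
dependencies:
  override:
    - yarn install                # pulls down Gulp and its dependencies
    - gulp install                # wraps composer install, etc.
database:
  override:
    - ./scripts/fetch-live-db.sh  # hypothetical: grab a snapshot from Pantheon
    - drupal chain --file=deploy.yml  # config import, database updates, etc.
test:
  override:
    - gulp build
    - gulp check                  # composer validate, eslint, phpcs
    - gulp test                   # behat, backstopjs, phantomas
deployment:
  multidev:
    branch: /feature\/.*/
    commands:
      - ./scripts/deploy.sh multidev  # hypothetical helper
  production:
    branch: master
    commands:
      - ./scripts/deploy.sh master
```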
And then finally, assuming everything goes well during those other phases, we're gonna deploy. And we have two different workflows, and those really just correspond with whether we're going to multi-dev or to master. So the only difference here is that in this bottom multi-dev step, we actually create a Pantheon multi-dev environment on the fly. That's kind of powerful. As a developer, I can just create a new feature branch, send it up to GitHub, and assuming all my tests pass, I have a fresh environment for it instantly. And just a word on this: this is a fair amount of code, and maybe it's in a format you're not familiar with — like, wow, that was a lot to parse. Kind of the point of this, a little bit, is that you write this once. Now you can use it across multiple projects, and you're giving the machine the tools to build the thing. And you might have to look up the syntax to write these things, but start with a repo, or add your own things. You really only have to figure it out once, and then you can offload that from your brain into the scripts. Exactly, yeah. As a developer, you don't need to worry about the nitty-gritty of how this is working, 99% of the time. Okay, so we're gonna actually watch a little video here. It's video time. And what we're seeing is a successful build. So I went in and I fixed the specificity of the selector, so this test should pass now. And this might — I don't know, we'll see if this gets boring or not — but we're just gonna watch this happen in real time. So first thing, there are a couple of housekeeping items that CircleCI needs to do: just start up containers for the CI job to run in, and then actually run the git checkout. So it's gonna clone down the stuff from GitHub. And I should note that this happens through a webhook, so this happens almost simultaneously as you push to GitHub. Next thing, we're gonna set up Docker, and we use Docker for the Backstop tests.
Again, I have to give a shout-out to FFW, who I believe is maintaining the Docksal project. We use their container there. Next, we're gonna be setting up the versions, and really that's pretty straightforward. One nice thing is that, in alignment with Pantheon, we can actually push a PHP version change through both the CI system and Pantheon. So if we wanted to go from 5.6 to 7, we can change our PHP version in both circle.yml and pantheon.yml, push that up, and have it test and deploy. So now we've proceeded on to the dependencies installation. Our entire yarn install process took about three seconds — pretty quick. And now we're watching Composer happen, which is like watching concrete dry. We are sort of backed by the cache here; CircleCI has a nice feature where it'll load things from the cache if they're there, so it's not quite that bad. So now, how often do you in real life actually watch this? Never, never. Yeah, exactly. You really don't need to worry about all this stuff. All you need to worry about is whether it turns red or green on the pull request. So this is normally time I'd be sword fighting. Or making coffee. Whatever, yeah. Work, work, work. At this point you can also see we're installing some development dependencies like PHPUnit, and those will be stripped out before we get to the final deployment, but this is just the initial setup. Now we're into the database stuff, and we're grabbing that backup, which is pretty quick. Yep, nine seconds. This is not quick, however, so we're gonna transition to the next video. This video just picks up where the other one left off, about five minutes later, and that database is finished importing. This particular site has a really big database, for whatever reason. So now, again, we're running that chain command, and on this site it's gonna set up Stage File Proxy for us, so we don't need to grab the files as well.
It's gonna turn off cron, it's gonna do a couple other things, but really you could do anything in there that you would do in a Drush or Drupal Console command. And then we're proceeding into build. Build is pretty quick, and you can see the subtasks running there. Again, we're just executing build; whatever that means for this site, it's happening. Now we start a web server to run our tests off of — cache rebuild, that's fun. And then we're gonna proceed into the static code checks, which are fairly quick. For static code checks we're gonna have the Composer validation; ESLint, which is JavaScript linting — and you can see we're gonna have a couple of errors — and then PHPCS, which is a really rigorous tool that I kinda hate, but it keeps us honest. Now we're on to the test phase. One nice thing about Gulp is that it's gonna run all of its tasks in parallel. So we start up Behat, BackstopJS, and a front-end performance testing tool called Phantomas all at once, and those execute simultaneously. So even though those are three really slow tasks, they run in parallel. This is the Phantomas output, and it gives you a lot of data about how your front end is performing. And now we're watching Backstop run. One thing that's cool about Backstop — this is the visual regression testing tool — is that it's gonna snapshot your site at the breakpoints that you give it. So we've said: give us phone, tablet, and desktop breakpoints, and then we've given it a couple of different pages to capture elements off of. And again, once this is set up and configured, you really never need to see this again. And one thing we'll talk about in a little while — does anybody here feel like they have a CI process that looks almost the same as this? All right. And are there many who, aspirationally, think this is pretty cool, but yikes, how do you get there? Yeah, very brave soul.
I think there's more than one. So Rob, did you start with this whole process? Absolutely not. No, right. So your first foray into this is: set up a Circle account and maybe do some code checking, some static code analysis or something like that. And then on the next project, the footer breaks all the time because the CSS selectors are just arcane and our team didn't write them — we don't know them, so we're always breaking it. So we're gonna add that one thing that we heard about at that one presentation at DrupalCon. Okay, cool. And then on the next project we're gonna add another thing, and another thing. And eventually you get to a pretty full-featured thing, and it's not done yet, right? There'll be more that happens over time. But you don't have to jump all the way to this. It's just showing you a particular combination of these things and what's possible. Exactly, yeah. I mean, this is our solution, and maybe what makes sense for your organization is a whole different set of tools. But we just wanna give you a taste of why this is valuable and what it would be used for. So what we've been seeing going on in the background here is essentially: all the tests pass, everything looks good, and so we re-installed the Composer dependencies and sent it to Pantheon. And really, all that is is just a clone of the Pantheon repository — copy the code over, then commit it and push it up to Pantheon. So again, two totally separate repositories, only kept in sync by Circle here. And I don't think we need to watch the whole master deployment video; it's basically just the same thing. So now that that's been sent up to Pantheon, to that multi-dev instance, we have a new URL here — it's gonna be p23, after the branch name — and we can see that our border is in place. Great, awesome. Let's send it off to the stakeholders for review. Everything looks fantastic. And so just to recap what we did here: we're at this phase, and we've gotten stakeholder sign-off.
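That "clone, copy, commit, push" step can be sketched end to end. This is a self-contained toy version — a local bare repository stands in for the Pantheon remote, and all the paths, names, and the artifact file are hypothetical:

```shell
set -e
# A bare repo stands in for the Pantheon-side repository.
work=$(mktemp -d)
git init -q --bare "$work/pantheon.git"
git clone -q "$work/pantheon.git" "$work/checkout" 2>/dev/null

# Pretend CI produced a built artifact (compiled CSS, core, contrib...).
mkdir -p "$work/build"
echo ".block-title { border-bottom: 1px solid #ccc; }" > "$work/build/styles.css"

# Copy the artifact over the Pantheon checkout, commit, and push.
cd "$work/checkout"
git config user.email "ci@example.com"
git config user.name "CI"
cp "$work/build/"* .
git add -A
git commit -q -m "Build artifact"
git push -q origin HEAD:refs/heads/master

# The artifact now lives in the "Pantheon" repo as one commit:
git --git-dir="$work/pantheon.git" rev-list --count master
```

The real pipeline does the same dance with the actual Pantheon remote, which is why the two repositories never need to know about each other outside of CI.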
I'm gonna assume somewhere along the line we probably got code review on that pull request, because we always do that. And next is gonna come this production flow. And again, we're not gonna step through this stuff in super detail, but really it's just a mirror of what we did there, except we push to the master branch, and then it's a click-button deployment to go from dev to test to live. So as a developer, this is a really nice workflow, because my involvement in it is so minimal and I have really clear indications of when I'm needed on the task. If a build fails, I'm needed. If something is ready to be merged, I'm needed. But otherwise I can go hands-off and it becomes someone else's problem. So let's take a step back, look at the lessons we've learned here, and how we would recommend implementing this for other people. I think it's important to know, as Drew was saying, that this is an ever-evolving thing. It's never finished; it's never gonna be finished. Basically, you're making an investment in your pipeline. Your pipeline becomes a product that you, as an organization — whether you maintain one site or a hundred sites — invest in. So over time, you're gonna wanna add things, you're gonna wanna change things, and that's okay. You're not striving for perfection the first time. Don't let — I screwed this thing up earlier — don't let perfect be the enemy of the good. Yeah, that's the one. So, next thing: start small. As Drew was saying also, you gotta start with whatever makes sense for you. For us, it was Behat testing, several years ago. We wanted to document those specs and really get a test suite written up around the specifications that were coming out of our discovery process. So we got that in place. And then we had a client that had strict front-end performance budgets, so we looked into tools that would do that.
We ended up with a tool called Phantomas, and we got that wired in. And then it was, what was next? Static code checking. And then it was Backstop. So just one piece at a time. And you really wanna make sure you're focusing on the business value. Because at the end of the day, we do like cool tools as developers, but you also need to make sure that you're improving the value that you're providing to your clients or your stakeholders — whoever you work with. And then the last thing I'd say is: if you're an agency especially, make sure that what you're doing is reusable. You don't wanna be creating something for one project that you can't take across your entire infrastructure. It's really important to have consistency between projects, and starting with something like a scaffold allows you to do that — allows you to bring that consistency across everything. So to that end, we have two different offerings here. We have the scaffold that we've been showing off — that's all the Circle configuration, the Gulp tasks, as well as a Composer-managed build. And then we have the less opinionated version of that, that you can adapt to whatever you'd like, which is Pantheon's drops-8 repository. That's about it. That's what we wanna share. So, as we were hoping, we have plenty of time for questions. If anybody has questions, we ask that you come up here to the mic, so that people who review the recording can hear the questions as well. Anybody? Please step on up. Hey. Nice shirt. For the audience, he's wearing the Docksal shirt. You are using my image, so that's great. Thanks. Awesome. It's great that it works for you. The question I had about Backstop: where do you store the reference images, and how does that work? Do you keep them in the Git repo? Do you keep them somewhere else? Where do you pull them from? Sure. Before that — could everybody hear that question? No? I was gonna say, I don't think that mic is on.
It's on. Hey. Okay. All right, so do you mind asking that again? So my question is: with Backstop reference images, where do you store them? Do you keep them in the repo or somewhere else? How do you pull them to run the test? That's a really good question. What's worked for us so far is keeping them in the repository — actually committing them. That way they get version controlled, and during the PR review, if there was a visual change required, we can see exactly what the before and after was. Okay, thank you. Yeah. I also have a question about the visual regression testing. When you make a change like the one that you showed, you expect some visual changes. So how do you go about making sure that it doesn't cause an error and stop everything, and then require a special exception later on down the line? Yeah, so when I made that change, I was not expecting the change to the footer, which is what we were seeing. Right, but how do you tell it that you do want a change in this place? Yeah, so this gets back to the question of the reference screenshots. The reference screenshots are the ones that are committed to the repository — they were the screenshots on the left there. So, what it should look like. As a developer, I can run those Backstop tasks locally using the Gulp task, and I can run it with the rebase flag, which will regenerate those reference screenshots for me. And that's what I'm saying: as a developer who is reviewing, when I take a look at that pull request, what I'm actually gonna see is the new screenshots that were captured. I think so, but what he's saying is: there was some visual difference that was intended — it was in fact a design change. So that's gonna throw, at least the first time through, a false positive. How do you handle that, right? Because adding the border was a design deviation. So how do you account for that in process?
So, in this case I'm sort of cheating, because we didn't have a test for the actual current and upcoming exhibitions block, which is what we were expecting to see changed. If we had had a test for that, we would have needed to regenerate the reference screenshot before we pushed, to avoid that false positive. Okay, so are you expecting it to fail the first time around, or is that? Nope. Okay. No, yeah, there's no intended failure there. Other people might have to shorten it. So, I like a lot of what I see, as a manager and as a process fanatic; it's all great. One of the pushbacks that I get when we talk about this in our workflows, and first of all, we internally host our sites, and your example is the perfect pushback that I've gotten: I'm just changing a border, it's a specific selector, and that's a lot of overhead. All we have to do is recompile the Sass, upload the changed theme to the dev server, and I'm done. So at what point is that too much overhead? And why do I have to wait five minutes, I presume, if you're doing all those tests? I mean, are there breakpoints? Is there a time where you don't need to do a full regression test because you changed the border on an image? Sure, when you don't care about the result. I think that's the answer. I mean, no, but it really is. Well, you could make that argument. Yeah. But by and large, that's really not a problem. I get what you're saying. But what we've seen day to day is, when we change a single class selector, it's not gonna ripple out to other pages. Sure. By and large. Yeah. Well, although, all right: who here has ever written a CSS selector that had unintended consequences? And as a byproduct of that, how many of us have written overly specific CSS selectors, like blah, blah, blah, blah, all chained together? So, I would say, I think another answer might be: you start small.
So, don't implement a testing system if you're in maintenance mode on a site where what's likely to change is adding a border once every three months, or changing a border once every three months. Start with, you know, your next project build. Gather your specs in a Cucumber-like format and start using Behat. Retrofitting all of this into something that's in maintenance mode is possibly a bit much. Although, that being said, once you have this, Last Call has done that. They inherited the site that we're looking at, and because we know all the pieces, we've got the things, we can just drop it over here, figure out the specifics for this site, do that once, and now everybody's still in our same process. So, hopefully that brings a little bit more nuance to the real world. So, just one more indulgence: can you tier it, so that, let's say we're gonna push to our dev server, and we don't need the full rich panoply of tests? Sure. So that we can set up different workflows in that manner? Sure, yeah, okay. Yeah, I mean, you just have a conditional based on the branch that's actually under test. Yeah. Thank you. Yeah, if you can write down the logic, you can teach it to a computer. I'm gonna keep it like this. I like Lemmy from Motorhead, you know? So. Okay, if you do this, then your little button on your dev environment on Pantheon is no longer a thing, right? Yeah. So, what do you do to watch Pantheon's Drops 8 repository so you know when Drupal 8 got updated? Yeah. So, that's a very good question, and not one that I have a great answer to. Currently, Drops 8 is almost in perfect sync with upstream Drupal 8. It would be nice if, well, let me just back up for a second. We're pulling it in through Composer.
So, theoretically, if Drops 8 were to release a core-only repository, we could pull that in the same way as we're pulling in Drupal core, and that would be no different. There is this concept of the Drupal Composer, I forget exactly what it's called, but basically Composer downloads all of Drupal core and then pulls out the /core directory, which makes things slightly more complicated for us, and is the reason that we don't tightly mirror Drops 8 right now. Does that answer the question? Kind of? Okay. And for those of you who happen to be using Pantheon and looking to adopt this in some way, we have people at our booth who are happy to help, pretty smart people who can talk about Composer workflows, like "how do I do this with...", and get into those edge cases and explore them. So, you talked about having your local repository, which was just your custom code and config only, and then pushing it through to create the artifact, which goes to Pantheon, but you also talked about local development and testing. Where does that fit in the cycle? Yeah, so locally, we're gonna be running the Gulp install command, just like we did up on CI, and that's gonna pull down our Composer dependencies and that stuff the same way. And I think that's a critical part of the process: we're running this the exact same way in CI as we will for local setup. Okay, thank you. Yeah. I have a simple quick question. Sure. When you were doing, you know, the Sass compiling at the very beginning of the slides, I noticed that you only pushed the Sass file, not the CSS file. Right. So, generally, is it a good idea to include the compiled CSS in the Git repo, or is it better not to include it? Much better not to include it, in my opinion, and this is what this whole process is for in a lot of ways: to make sure that you're only committing the bare minimum required to build the site, because there are a lot of risks of, you know, merge conflicts and that kind of stuff.
I always run into that. Yeah. Yeah. And let's say I made a change, a Sass change, and my teammate wants to just work on top of my change. Yeah. How does that usually work? Do you put in, like, a post-commit hook, so that when someone pulls it, it compiles the Sass right away, or? For us, you wouldn't have any CSS. Locally, if I were to pull down your changes, I wouldn't have any actual CSS until I run the build steps. So that's run manually? Yeah. Yeah. So that's, again, where Gulp is at. Locally, it's Gulp; on the server level, it's Circle. And philosophically, what you're doing is capturing the knowledge of what's required to build the stuff, rather than the output of it. Because it's a really common thing: I'm using this tool to compile, you're using a different version or a slightly different tool, and all of a sudden everything's a merge conflict, and that just kind of sucks. So have the tool do that; record what's necessary, record your source, your requirements, and then have the machine take it from there. Merge conflicts was a good keyword, because we have a very similar approach in our workflow. We also use pull requests, but we only merge these pull requests after QA. And the reason we do that is because our clients tend to be very lazy approving or denying those change requests we post. Yeah. Do you have any, well, kind of magical way you handle pull requests and the resulting merge conflicts? Because I tend to believe that your clients are not that much faster than our clients. So, I would say that our clients, I don't think our clients are any different from your clients, but I think that we as an organization are very cognizant of the fact that the longer a pull request remains open, the more risk is involved. That's the point; that's why we merge really early in the process.
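To make the "commit only the source" idea concrete, the ignore rules for a setup like this might look something along these lines (a hypothetical sketch; the actual paths depend on the project layout):

```
# Build output: regenerated by Gulp locally and by Circle on CI
themes/custom/*/css/
# Dependencies: restored by npm install / composer install during the build
node_modules/
vendor/
```

With compiled CSS and vendor code ignored, a freshly pulled branch has no build artifacts until the build step runs, which is exactly the behavior described here.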
Yeah, so we tend to push our clients very hard to get any in-progress features merged in, or buttoned up in a way where they go to a release branch. I don't know if that's an option for you, but I don't have a magic bullet. We try to push our clients too, but, well, they don't react every time. And I would say that's a class of problems that is outside the scope of what computers can do. So, yeah, right. Tools will never solve, like, technology won't solve human problems, and communication and responsiveness is a human problem. And while you could maybe have a tool prompt someone, poke them, you know, text them every five minutes until they approve, that's probably also not a good idea. So this speaks to the fact that this doesn't make project management go away. This is a safety net for developers, to be able to confidently deploy change and know that what they've done is in process, that they don't have to manage it after that, and that it's passed all the checks; let that flow out of your body, move on to the next thing. I would also maybe point out that you could get some mileage out of frequently reintegrating master into your feature branches. You might try that. Well, speaking of Composer, the composer.lock file is... Ah, yeah. That's the one that'll get you. That's the one that'll get you. Well, okay. I hope you have a magical solution. That's the next tool. Yeah. We'll invent that tool for next year. Yeah. Thanks for the talk. It's really good information. I just have a question about Composer and everything. Why don't you just take a snapshot of the live site and merge the code into it, like, really quickly to see if it causes problems, rather than rebuilding from scratch all the time? I think it's a test of the process, and it's just consistency. Making sure that locally I'm using composer install, and on Circle we're using composer install. It's just a consistency thing, I think.
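The branch-based tiering mentioned a moment ago, where which tests run and where code deploys depend on the branch, can be sketched in CircleCI 1.0-era circle.yml syntax. The task names here are illustrative assumptions, not the actual scaffold's tasks:

```yaml
test:
  override:
    - gulp test              # e.g. static checks, Behat, Backstop
deployment:
  production:
    branch: master
    commands:
      - gulp deploy          # push the built artifact to Pantheon
  review:
    branch: /feature-.*/
    commands:
      - gulp deploy:multidev # build out a Pantheon multidev for this PR
```

The point is that the conditional lives in version-controlled configuration, so every branch gets exactly the pipeline you declared for it.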
Yeah, I mean, to me, the best test would be merging the code right into production to see what happens. I'm not doing that, I mean, not live to the real production site. Yeah, yeah. But taking a snapshot of production and merging the code and running tests. Can we just change it on the live site? Yeah, great. But I mean, if you snapshot production, you know, do your post-sync scripts, just turn off cron or whatever, merge your branch in, run your tests. It would be a lot quicker, right, than importing a database, downloading files, doing all the different stuff? Maybe. I guess you would still have to run, like, composer install or whatever. Not if the files are already compiled in there, right? So if you just commit everything? You don't have to commit everything. I mean, you have production; at some point it has to get built, but once it's built and all the files are there, you just need to start adding to it. But what happens when you have a Composer change? Like, when you add a new module, you would have to run composer install. Yeah, or you have to have a make file that keeps track of that, or you have to keep all the code in the repository. I mean, yeah, there's different ways to do that, I guess. Yeah, and I mean, I guess, from a trust perspective, as someone leading a bunch of developers, I want the closest thing to live with the code merged into it, to see what will happen. But that's what you're getting here. What's being built on CI is exactly what's gonna end up on production. We're essentially overwriting the whole repository on the Pantheon side with every push. Right. And so there's maybe an intellectual objection, that it seems a little bit much. It's like the Carl Sagan quote: in order to make an apple pie from scratch, you must first invent the universe. It feels a little bit like that.
You're creating environments, and then putting a Drupal inside of it, and then putting other things inside of that, and then eventually, finally, your JavaScript or CSS change makes it in. However, there's a sort of confidence thing, and also the machines are doing it, so it's okay. It's machine time, and that's increasingly not a big obstacle. And so there's some time involved. To that point: there are times where you have to make a really quick change for a client, really fast, so you don't want it to break the whole site, but you don't wanna wait 30 minutes, you know. Yeah, all right, so ten minutes is the speed to change. Yeah, but can you do it reliably? There becomes a trust thing too, because sometimes you're gonna need to install a module, and is your process for installing a module slightly different than the process for changing CSS? And then are you managing two processes, and is two more than one? You know? Yeah, I mean, there's a lot of things to think about. Yeah, well, I mean, I don't think any of this stuff is final; we're all still learning, we're all still growing, we're all still having conversations. This is a point in time, and, you know, yeah. It's awesome; I mean, we're halfway there, we're getting there, you know, this is great. Cool, thanks. Is it possible to ask two questions? Okay, first one is: why do you deploy a Git artifact instead of just pushing it directly to live? So, this is how Pantheon works; Pantheon basically requires you to. There's no other need for that, right? No, you could make a tarball and send it somewhere. This workflow actually does give you a nice rollback process, though, because you're able to just roll straight back to whatever the last commit was on Pantheon. Okay, second question is: I'm in a project that kind of breaks a little bit of the rules. We don't have feature branches, and we deploy directly to production.
So, forgetting, you know, best practices for a moment: is there something inherent to Drupal that requires us to have feature branches, and also add, you know, tests and depth into the whole thing, instead of going directly to live? Nope. Nope. All right. It's maybe worth looking into, though. Yep. So, this may be out of scope for the presentation, just let me know. But suppose I'm a little bit familiar with this LCM workflow, and suppose that in this process of working through the CI, I have to do security updates, and in doing that, I end up with a merge conflict in my composer.lock file. And because of circumstances, I absolutely just can't run a raw composer update. What would be a workaround that you can use? Because you don't wanna bypass the system and just push up the modules in some other way. But now I've got a jam-up and I gotta fix it, and the nuclear option just isn't an option. Okay, so the reason that you would have a conflict in your composer.lock file is that you made a change in two places. And so really the best way to resolve that would be to rerun both of the changes, without touching your composer.json, and then commit the result. If you open a pull request, the code gets deployed to a CI server, so you need a number of CI servers matching the number of pull requests open, right? And until the pull request is closed or merged, the code is still checked out there. Great. So you're talking about the multidev instances? Okay, right, so you provision a new CI server for each new pull request? Yep. Let me, I think you asked something different, but maybe I'll answer something else, and you choose which answer you like. So, there's a new multidev instance for every pull request. That's a number of multidevs, and that just happens with Pantheon's kind of cool feature. With Circle, I think you're asking about how many instances of Circle we're running. So, I'm working on this client, I have pull requests coming through.
Five minutes later, it's the same thing. How many threads of Circle are running? And I think for that, the answer is, Last Call has the pro plan, or some such plan, which gives you eight processes or something like that. Concurrent builds is what they call it, yeah. Yeah, so I mean, it depends on your plan, but really the answer is as many as you pay for. Okay, all right, thank you. Yeah, and the cost of not having enough is just that it waits a little longer. Right, exactly, cool. That seems to be the end of the questions. So, thank you everyone for staying so long. We're still around; it's early in the conference, and we're happy to talk more about this all week. Also, just as a reminder, contribution sprints are happening on Friday, if you're not already aware. Everyone is welcome, everyone can contribute, and we've got people there to help mentor other folks. And last but not least, it's really helpful for us as presenters, as human beings, as people who've tried to do good work, to get your feedback, and for DrupalCon to hear that as well. So if you wouldn't mind taking a moment to rate this session, as well as everything else you go to, that's good for all of us; it helps us all get better. Thank you. Thank you.