 Thanks so much for coming, and this is the last session on the last day of sessions for DrupalCon, so thanks for choosing this one instead of going and having a beer early or taking a nap. You're free to have a beer during the session, but try not to take a nap. I'll apologize real quickly. I've got a little bit of a cold. That's one of the advantages of being the last session on DrupalCon. You have all of the week to catch the cold and then give your presentation. Let me just introduce myself. My name is Matthew Grasmick. I'm Grasmash on Drupal.org and Twitter and various places on the internet. I work at Acquia. I've been there for about four years, four to five years. I'm in professional services, and I've been part of the Drupal community a lot longer, almost a decade now, and I'm the maintainer of the Build and Launch Tools. So feel free to ask lots of questions at the end. And before I really talk about what these Build and Launch Tools are, I want to talk about the problem that they're trying to solve. And so we're going to start with a little bit of a walk down memory lane and talk about what web development used to be like 10 years ago or something like that. So in the beginning, there was the web developer, and the web developer had a lamp stack. And then there was a server somewhere, and we put files on the server using FTP. Who here remembers these days? There's a lot of people. Good. I was around for these days, too. So this was pretty good. It was fast, and it was simple, and we didn't have a lot of the problems we have today. But it had a lot of problems, too. We have reasons for not doing this anymore. We'd overwrite files. 
We'd do crazy things like have index.php, index.php.1, index.php.4, and index.php.10.final.bak, and it got really messy. We'd delete each other's changes, it wasn't redundant, it wasn't resilient, and if that server went down, God forbid, it's not like everyone has a copy of the whole site on their computers. So we had good reasons for moving away from this. So we took this model and we changed some stuff over time, not all at once, of course. But we replaced the server here with a cloud, and the cloud could be a lot of things, but it's usually more than one computer, and we used Git to deploy things instead of FTP. So we've got Git locally, and version control, the ability to revert, we've got redundancy. Awesome. So this was better, but it was also a little more complicated. I remember the first time I used Git, I just kept overwriting people's changes with force pushes and not getting it, so maybe we've all been there; there's a learning curve for all this stuff. Meanwhile, while our whole workflow changed, the way we got things into the cloud, the way we managed files, our backend tools changed a lot too. In the beginning we didn't have Drush; now we have Drush, and that's pretty cool. A newcomer on the scene is Drupal Console, the sunglasses, that's the Drupal Console logo. This is not a representative sample, but we've got a lot of cool backend tools, and that's better, and it's also a little more complicated. You've got to learn all this stuff, but it helps you out a lot. This is the theme: it gets better, but it gets more complicated. Meanwhile, front-end tools have changed a lot too. This is like a whole thing on the internet right now, everyone's complaining about it, especially JavaScript frameworks.
So for example, Sass. This is a front-end tool, it's great, you can use it to do things that CSS can't do. But you have to compile the Sass into CSS, and to do that you probably have to install Node, and maybe you have to install npm so you can download the Sass plugin, or maybe you use Compass, and you need Bundler, and that needs Ruby, and stuff kind of gets out of control really quickly. Maybe you use Gulp; this is what you use to orchestrate these front-end tasks, because something's got to tell Sass to compile at a certain time. Grunt is another example. So this is a little cross section of things you might have in your stack. Also better, cool new things, also a little bit more complicated. Meanwhile, testing tools came on the scene during this time, or maybe before this time, but particularly in the PHP and Drupal communities we started getting things like PHPUnit. Great: unit tests, we prevent regressions, that's awesome. PHP_CodeSniffer sniffs your code to make sure that you're following Drupal coding standards; that's good too. Behat is a little more recent: you can write functional tests in business-domain language, you can gather requirements with them, you can show them to your stakeholders, and that's pretty cool too. But if you're using Behat, you might have to install Selenium, and Selenium requires Java, or maybe you're going to use PhantomJS instead of those two things. So this is better, but it's getting a little scary at this point.
It's a little overwhelming. You get some heart palpitations, a little sweat breaks out on your brow when you have to install all these things. Meanwhile, we've got all this stuff and we've got to install it all locally, so our local environments are a little different than they used to be. We start using VMs, and we have to provision those VMs with things, so I just added a Vagrant icon here, and Vagrant has to know how to put the things in the machine, so I put an Ansible icon on here, or maybe you're using Chef or Puppet or something like that. Better, more complicated. Meanwhile, this is the last one, the PHP community is evolving. I'm just putting one icon on here, Composer, because this is like a big one; everyone's having trouble with Composer lately, especially with Drupal 8. So here's another thing we're going to throw into the mix, not to mention the other stuff in the PHP community that's coming along for the ride, like YAML, which is popular now, and all the Symfony components that we're pulling in, et cetera, et cetera. These are all great tools. We had good reasons for adding all of them; there are compelling reasons to use them right now. But every time we add one, things get a little more complicated. And we're just talking locally right now. This is one person's machine that can have all this stuff, but if you have a team, you've got a lot of machines, right? Everyone's got to install this stuff, and everyone's machine is a little different, and you have to worry about different versions of things on different machines. Onboarding can be a nightmare. And that's just people's local machines. What if you have a CI process? If there's an intermediary server in here, you have to push from your machine to GitHub, and then GitHub runs some CI process, and that CI machine has to have all this stuff too. So this has clearly gotten out of hand. At some point it became really complex and it became painful, and this is the problem we're trying to solve. So let's talk about some real-world consequences that might be familiar to you that come from this problem. One of the consequences is you just might not use this stuff. A lot of people are like, I'm going to nope out of here. This just looks crazy: I've got to learn all of this, I've got to make this stuff work together, I've got to maintain it. Never mind, I don't want to build a process, I don't want to use automated tests, I don't want to put together a CI process. I was in that boat for a while and totally empathize with that attitude, but we're going to try to change your mind if you're in that boat. The other option is the DIY option: you build your own stack of this stuff. You painstakingly create a build process, you configure a CI process, and then you maintain it all the time. An update comes out to one thing in your stack, everything breaks, and you have to spend a day fixing things. Nobody likes that either. So there has to be a way to do this better, right? To approach this consistently and efficiently. I'm going to pause and give you a little background on myself and how I arrived at this conclusion. I said I work in professional services at Acquia. The primary function of professional services is to help our enterprise customers implement Drupal, actually build the Drupal site. So I was in the business of building big site after big site after big site and doing this stuff every time from scratch, and every project was a little different. At the end of the day you're supporting like 10 projects, and they're all a little different, and they all have this stuff. And I said, we've got to build a tool so that we're using just one thing on all of our projects, and we should really open source it.
So I looked at this and I said, we're going to put that stuff in a box, and we're going to put that stuff in the same box. This looks a lot better, right? And we're going to call it Build and Launch Tools, and the acronym for that is BLT. So we took the boxes and we put a little logo on it. It's a sandwich, with a little toothpick and an olive, which is not really a BLT, it's more like a club sandwich, but you have to be a little forgiving about this. So what this box does is it packages all that stuff together, puts it in the box, and integrates it: we have to make sure all the right versions work, and there's some configuration so this stuff works together. And it automates it, so you don't have to run like 200 commands for each one of these things; it kind of wraps them up and creates an orchestration layer. So: package, integrate, and automate a stack of development tools for Drupal. Anyone see the movie Seven? "What's in the box?" Inside joke, sorry. Let's talk about what's inside the box. So it's an open source box, and inside of it there are three things at a high level. There's a standardized template. This is for Drupal 8 only, so it's a template for Drupal 8 sites; we'll get into each one of these. There's a set of tools: we had to make a decision and say which tools we're using, which testing tools, which front-end tools, that sort of thing. So we chose those, and then we wrote the commands to automate those things working together. Let's go through each of these one at a time. So, the template. Pretty simple: you know what a template is, probably. It's a predefined structure. We decided here's where the files go, here's where the directories go, here's some boilerplate configuration and code that comes out of the box. There's a screenshot of the GitHub file structure, pretty exciting stuff. So this includes an opinion on where things go, clearly. We say this is where settings files go; the exported config, this is
where it goes, modules, themes, if you have a custom Drush command, if you have local patches, you have to make decisions about these things, because we're going to write commands that are going to expect these things to be in certain places, so the template's the first step, this is like not earth-shattering, right, everyone's thinking like I use templates in Microsoft Word 20 years ago, actually this is kind of a big deal, this has a lot of important implications, so this is helpful, because it helps you do things like onboard faster, you know imagine you're working on a bunch of different projects and you come onto the project and you know where to find things, that's really helpful, or imagine you're working on a project and you call someone else up and they're working on a different project, but you have the same file structure and you're solving the same problems, and that other person wrote a script that you can just pick up and put in your project, because you're both using the same template, it's actually very helpful, same reason that it makes handoffs simpler, right, staff changes all the time you switch projects, it's nice to come into a place and feel like you're at home. 
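To make the template concrete, a BLT-generated project tree looks roughly like this. This is a sketch from memory of the 8.x template, so the exact directory names may differ slightly in your version:

```text
myproject/
├── blt/                  # BLT configuration for this project
├── config/default/       # exported Drupal configuration (YAML)
├── docroot/
│   ├── modules/custom/   # your custom modules
│   ├── themes/custom/    # your custom themes
│   └── sites/default/    # settings.php and friends
├── drush/                # Drush aliases and custom commands
├── patches/              # local patches, applied at build time
├── tests/
│   ├── behat/            # Behat features and contexts
│   └── phpunit/          # PHPUnit tests
├── composer.json
└── Vagrantfile           # Drupal VM, if you use the VM command
```

The point is less the specific names than that every BLT project puts the same things in the same places.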
Tools: we all use the same tools in the same versions. As I said, we had to make decisions about those, and this is really helpful too. So there are testing tools, PHP packages, and a few modules that we ship by default that we think everyone should have. You can take the modules out; you can't take everything out, but this is customizable, so the opinion is sort of mitigated sometimes. And this is all preconfigured to work together, which is also a pretty big deal, because it's not as easy as it sounds. But what makes this powerful and more exciting is the automation part. Yay. This is also the really fun part, the part I spent all day doing, and all night. So we make complex things simple. All of these things, you can do each one of them in one command. I'm going to go through a lot of these and show you the command to run. But imagine you run one command and you have a full Drupal code base afterwards with all of the tools that you need and the preconfiguration. You can do that. There's one command for creating and booting a Drupal VM; we use Drupal VM to supply the VMs, and there's one command for that. There's one command to validate all your code. What do I mean by validation? I mean analysis of your code: does it meet Drupal coding standards, does it have a syntax error, does your Twig look right, that sort of thing. Testing: you can test your application as well, so you can run all your Behat tests, all your PHPUnit tests, all your other tests, in one command. Cool. Other ones: this is like everyone's favorite, and I feel like I don't even use it, sync environments. You're on your local machine and you just want it to look exactly like dev: run one command, and it copies the database, copies the files, clears the caches, runs your update hooks, builds your front-end dependencies, whatever. That's all packaged up. And the last one, this is a whole presentation in itself, is deploying to the cloud. I'm going to get into that later; that's a whole special thing. So, seeing is believing, so I'm going to show you. We're going to use the command line, but it's not super command-line fu, it's pretty simple command line. We're going to start at the beginning: creating a project. We use Composer for this. Everyone's going to be using Composer eventually, so you know, it's not easy, but you've got to get used to it, sorry; we can talk about that later. There's a command that Composer has called create-project. This command lets you pass in an argument like acquia/blt-project. It goes and looks at that project out on the internet; it uses Packagist to figure out where that project is, it's usually on GitHub, and it makes a new folder on your local machine and uses that upstream project as a template for the new folder on your local machine. So that's what this is doing, and this is like an example project that you kind of clone from; it gives you a good starting point. Then BLT does all this fancy stuff in hooks after Composer installs it to generate configuration: it'll look at your machine, see what the directory name is, and put in the right configuration. So there's a little bit of magic involved that's BLT-specific; it's not just Composer happening here. And then you have a whole project. At this point you could just push this up to GitHub, and somebody else could clone it down, do all the other stuff we're about to do, and arrive at the same place. So this is all the files you need for a Drupal project, ready to go. But of course there's other stuff you do when you run Drupal, right? You need a database, and we've got to prepare a few other things. The first thing we're going to do is run another Composer command, which installs the blt alias, so you can actually run BLT commands: you type blt on your command line and it does something, it knows where
to find the binary. So you run that once and never run it again; it's a one-time thing. But then you can run something cool like this: you run blt vm, and this is pretty smart. It'll look and say, is there already a VM? And if the answer is no, it'll say, I'm going to make one, and it copies default configuration into a particular directory, runs Vagrant, boots the VM, and gets you up and running. There are, of course, some requirements on your machine that I haven't talked about: you'd need Vagrant installed for this, you need PHP on your local machine, that sort of thing. All these instructions are laid out in our documentation, but it's actually very straightforward; most of this stuff is going to be on your Linux machine or your Mac already, and there are ways to get this to work on Windows too, but that is also another discussion. So you have a virtual machine at the end of this. That was pretty easy. Let's do all the other local stuff. You have one command that is kind of just "do everything else," and that's blt setup. You know, when you stop and think, hey, I'm just going to install Drupal on my local machine, it seems really simple, but it's actually a lot of stuff you have to do. Just to install Drupal, it's like: copy the settings file over, enter the MySQL credentials, make sure the permissions are correct, make sure the vhost settings are right so I can get to the URL correctly, and then I have local settings particular to disabling things I'm not using in prod, like memcache. It's actually this huge thing, and blt setup is going to do it all for you: build your dependencies, install Drupal for you, take care of all that pesky file permission stuff and configuration stuff, and import your configuration, because we do that in Drupal 8 now, right? We have it exported somewhere in YAML files; we'll take care of this for you as well. So now you're done. You have a fully functioning Drupal site, and you didn't have to do a lot of work. It was easy. You're welcome. So now you can actually log in: go into your docroot and run drush uli. If anyone doesn't know, this is a Drush command that just logs you in as user one. It literally opens your browser for you after you run it, puts in one of those "I forgot my password" links that logs you in automatically, and you're user one, good to go. And it ends up looking like this. So that's great, and you only ran four BLT commands, if we don't include the login command, and you've gone from nothing, you didn't even have a code base, you didn't have a VM, to being logged into Drupal. So that's pretty cool. Now we're going to talk about automated testing. Don't freak out; this is where we might enter uncharted territory for some people. Automated testing seems scary, but it doesn't have to be, and we've made it much less scary. Little overview: we provide tools and example tests, so we give at least one example test for each tool. We've got Behat and PHPUnit as the two testing technologies; there's an example Behat test and an example PHPUnit test. It's great because you can just copy and paste one and change the words, and you've got a custom test, and the example ones pass out of the box, they run. And we've got validation; I talked about this before, PHP_CodeSniffer and linting. So we'll start with the easy stuff first. There's a validate command, so you can run this, and it will run PHP_CodeSniffer and lint your stuff; it'll even check that your composer.json file is valid. And it knows where to look, because we're using a template: it knows where your custom modules directory is, it knows where your theme directory is, it knows the extensions to run. You don't have to configure any of this, and it already has the Drupal coding standards; you don't have to download those separately as a module. So that's pretty cool. And you can run all the tests: the BLT test command runs all your Behat tests and your PHPUnit tests, and it's going to do the whole stack of things you'd normally have to do to get to the point of running your tests. It'll start up Selenium for you, make sure that Selenium started on the right URL, wait for it to finish starting before it runs this stuff, and then kill Selenium afterwards. You don't even really need to know what Selenium is, some of the time. You know, I will put a disclaimer here: BLT makes things way, way easier than they would be if you had to do it yourself. That being said, there's an Acquian named Mark Sonnabaum who has a quote that I will quote, which is: there is a price for automation, and the price for automation is that you need a more skilled engineer to debug when things do go wrong. Because one day things will go wrong, and when they do, you do have to debug it, and you do eventually have to become familiar with these things. That being said, it's way better than building this from scratch, but I'm not going to paint too rosy a picture here. So let's stop and appreciate this a little bit. For anyone who has not seen a Behat test run before, I just want you to see how cool it is. This is a video on YouTube. I'm running the BLT test command for Behat, so this runs only the Behat tests; it's not going to run the PHPUnit tests. And what it's running is the tests that ship with Lightning. Lightning is a distribution, the default distribution for a BLT site, if we didn't specify otherwise. So Lightning comes with all these tests. That's what we're running, and we can see some stuff scrolling by, and it's probably small for people in the back, but this is pretty English.
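For anyone who hasn't seen one, a Behat scenario of the kind Lightning ships really is just structured English. A minimal sketch (the exact step wording depends on which step definitions your project provides, so treat this as illustrative):

```gherkin
Feature: Basic page creation
  In order to publish content
  As an administrator
  I need to be able to create a basic page

  Scenario: An administrator can reach the node add form
    Given I am logged in as a user with the "administrator" role
    When I visit "/node/add/page"
    Then I should see the text "Create Basic page"
```

Behat matches each of those lines against a step definition written in PHP, drives a real or headless browser through the steps, and fails the scenario if any expectation isn't met.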
Even pretty English isn't pretty English, but it says things like: given I am logged in with the administrator role and I visit /node/add, I should see the text "Basic page." That's the actual test. It's written in English, and that's what gets checked. And I've panned over here to Chrome, and you can see that when Behat runs, it's actually simulating a user. It is opening a browser. It is visiting pages. It's clicking things. It's looking to make sure that after it clicked things, the things it expects are on the screen. If they're not, that's considered a failure; if they are on the screen, that's considered a pass. It's logging in, it's logging out, it's making up usernames and passwords on the fly and working with the Drupal back end to make sure those are real users and it can log in. The first time I saw this, I was like, whoa, this is really cool stuff. So this is where you're going with it, and you can probably imagine how this might be applicable to you. Everyone can think of a test that should run like this for their site. Like: when my users go to the login page and they type in their username and password and click log in, they should be logged in. That's your test. You run this test and you make sure you didn't break things. Let's cut that short. Oh, you know, I can't use my keyboard while the YouTube video is running. I can barely see that. Yes, okay. All right, so that's testing. What time is it? I've got another 20 minutes; I'll try to leave some time for questions at the end. Let's talk about workflow a little bit. This is an ideal workflow. I'm going to walk through it. This is the workflow that I like, that I would like you to like. So on the left, we've got a developer, and they're developing and building and testing locally, and that's great. When they're done with that, they push up somewhere because they want someone else to see their work, right? Let's assume it's GitHub.
It doesn't have to be, but GitHub's cool. When it's on GitHub, this is where we do peer review, something you should do: someone else on your team takes a look and says, yeah, that's a good way to do that, or no, that's actually not a good way to do that. At the same time, GitHub can automatically talk to a CI service, like Travis CI or Acquia Pipelines; there are two icons there, but really any CI service will do. When you submit that pull request, GitHub can reach out and say, hey, I got a new pull request with some new code, do something with this. Travis, for instance, can pick up the code in the pull request and run the BLT commands: it can run the BLT validate command, it can run the BLT test command, and report back to GitHub and say everything looks good, or everything does not look good. That's the workflow we're going for. If everything did look good, you can deploy, and if it didn't, we go back and fix things. And BLT's role in this is that BLT is part of your code base. So it's on your local machine, available to do the building and testing there. And since it's part of the code base, and your code base would be on your CI server too, BLT is also on your CI server, and the BLT commands are available to be run there. What's great about this workflow is that you get to do a lot of QA and automated testing before you click your merge button, which is really important. You can test your changes before you merge them. You can review your changes before you merge them. And if you're using something like Acquia Cloud CD or some other service that gives you an environment on demand, where you can preview changes when they're submitted via pull requests, you can even look at your changes before you merge them.
And what that means is, when you hit merge, you're pretty darn sure you didn't break things, or not badly, and it gives you the confidence to click merge. It gives you the confidence to deploy, and it gives you the confidence to start following some agile adages like release early, release often. Right? If I can just hit merge and I know things are pretty good, I can release a lot faster. I can increase my velocity. I can deploy more often. I don't have that big, human, error-prone QA process hanging over me. Anyone ever had a spreadsheet that's like a list of things to check before the deployment goes out? Raise your hand if you've done that. Isn't that terrible? That's a lot of people. We want to get rid of that. That spreadsheet is going to be a command. So that's the idea. So let's talk about how to add continuous integration. BLT has out-of-the-box integration with two CI services. It can work with any CI service: if you have an internal Jenkins server, you can just make that Jenkins box run these commands. But the out-of-the-box preconfigured stuff is just for Acquia Cloud CD and Travis CI. For those two services, we provide CI config that you don't need to figure out; it's the instructions telling something like Travis which BLT commands should be run when a pull request is opened. So in the case of Acquia Cloud CD, which has a CI feature called Pipelines, if you ran this BLT command, it would make a file for you: acquia-pipelines.yml. That's the instruction file that tells Acquia Pipelines what to do. So when you submit that pull request, Pipelines would know: oh, I should install Drupal now, and I should run composer install, and I should run BLT validate and BLT tests. You don't have to set any of that up. Similarly, there is a command, ci:travis:init, and if you ran it, you'd have a .travis.yml file pre-configured.
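To give a rough idea of what the generated file contains, a .travis.yml for a BLT project looks something like the following. Treat it as a sketch: the real generated file is longer, and the exact command names vary between BLT versions.

```yaml
language: php
php: "7.1"

services:
  - mysql

install:
  # Pull in BLT and everything else the project depends on.
  - composer install

script:
  # Static analysis: coding standards, linting, composer.json validation.
  - blt validate
  # Install Drupal and run the Behat and PHPUnit suites.
  - blt tests
```

The point is that the CI server just runs the same BLT commands you run locally.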
And again, if you're using something else, like CircleCI or anything, you just have to write your own YAML file, and it would work fine with BLT. And you could submit a pull request to me, and I would merge it, and then we'd support that. So you should do that. All right, let's say we initialized one of these things, like Travis. We got that .travis.yml file, which is the instructions that tell Travis what to do in a build. We committed it and we pushed it. At that point, GitHub triggers Travis to build and test your site, and I'm just going to show you in the GitHub UI what that's like. So imagine you push that up, you hit this new pull request button, and you created that pull request. You can see here, this was an "add test coverage for ACSF" pull request. I've got some pink arrows here, and they point at yellow circles. The yellow circles mean Travis is running tests right now; they're not done yet. When they are done, that yellow circle will turn into a green check mark or a red X, meaning the build passed or it didn't. You can click on those icons, or you can click on that details link, and it takes you to Travis, where you can look at the log and see what the heck it's up to. So if it failed, you can see: what's the error? What were the commands that ran? And you can even configure GitHub so that it's not possible to click the merge button unless things passed, unless you're an administrator. That's helpful too; it's kind of like a gatekeeper: we don't merge code that's broken. And you shouldn't. So your tests run before you merge, so you don't break things. That's good. After you merge, when you click that merge button, your code deploys automatically. Both Travis and Pipelines are smart enough to know the difference between a pre-merge build and a post-merge build. So after you hit merge, Travis would, for instance, say: okay, this got merged.
Now I'm going to build the code base, and I'm going to deploy it to an environment where it can be hosted, where someone can click on it, log in, and interact with the code. So that's the last part of this: if it got merged, your CI solution would actually take care of pushing it to the cloud. It's deploying to the cloud. So I said this was special, so we're going to talk a little bit about this. Maybe I'll try to go quickly; looks like we've got 15 minutes here. I'm going to introduce a concept some of you may be familiar with. This is a newer concept in the Drupal community. It is not a concept I came up with; it's actually an old concept in the software development community. If you came from a Java background, this might be familiar to you. We're needing to adopt this more and more with Drupal 8, since we build dependencies and end up having them after running a command; we have this build process inherent in our development process. So let's talk about what an artifact is. It's a vocabulary lesson. You need to grok this concept: the things you need to develop your site are not the same things you need on your production website. You need different things. One way of thinking about this: on the left, here's a little sampling of the things I showed before, Vagrant, Gulp, Behat, Sass. This is stuff you need during development. You need to test, you need to build; it's really helpful stuff. But on production, you don't use this stuff. It's just extra stuff. You're not going to run your tests on production. You're not going to compile your Sass on production; it's already compiled. You don't have a virtual machine, like Vagrant, running on production. You might actually have Drush, so maybe that one shouldn't be there. But what you want is just Drupal. And actually, you want a special, sanitized copy of Drupal. There's some stuff you might want to do.
Like, you want to take some local settings PHP files out, and you want to take out the updates text file that tells a potential hacker exactly what version of Drupal you're running if they happen to visit it. You know, there are some operations you might want to perform before you put it on production. So that process is our build process: the process that takes it from being the development thing, which you might call the source repository or the source code, and compiles it into the thing that goes on production, which we call the artifact. It's the product of the build process. If you're from the front-end world, another way to think about this: if you're familiar with Sass, Sass is great, you write in Sass, but you don't put Sass on your production server, because Chrome doesn't read Sass. You need to compile your Sass into an artifact first; the artifact is CSS. I'm going to beat a dead horse here, but here's another way to think of it. On the left is the file system for your development environment. On the right is the file system for your production environment. One has a lot more junk in it. The development environment has things like, I can't even read it from here, your Vagrantfile, a tests directory, and lots of dotfiles, stuff that you don't need on production. That should all be cleaned up and made into an artifact, which then goes onto your production server. In terms of workflow, just to divide this 50-50: the left-hand side is the development environment, slash source code, slash working repository, there are lots of terms for this. The right-hand side is the artifact. So the thing you work with on your local machine, the thing that GitHub has stored, the thing that Travis would test: that's all the source. It's not until you do the merge and are ready to go to an environment that it gets built into an artifact and deployed. Why are we doing this, you might ask yourselves? Gluttons for punishment? No, there are good reasons.
These are not all of the good reasons, just the high-level ones. This is more maintainable in many ways. We don't actually commit contributed modules and vendor directories in this model; we download them at build time. That prevents people from hacking core in an undocumented way and from hacking up your contributed modules. It forces them to use best practices, like creating a patch and adding patch application to your build process. That's nice. If you have a build process, you have a scripted, repeatable way of creating your code base. You know that everyone who has a running code base on a machine got it from a build process that did things like reverting configuration, reverting your features, running your update hooks, and clearing your caches. Everyone got to that artifact using the same process. That makes it more repeatable. You don't go through that debugging conversation of, uh, did you clear your caches? No, it's done. Then performance: this removes lots of unnecessary stuff. That's great. Security: we talked about sanitization. So there are lots of benefits here. I'll show you this, but showing you this isn't that helpful, because most of the time it's automatic. You wouldn't actually run a deploy command on your own; Travis or Acquia Cloud CD would do it for you. But if you're really interested, yes, you can do it manually, on your local machine. If you want to see what the artifact is, if you want to get a sense of the process, you can run blt deploy on your local machine and the artifact will get generated. Also, if you don't have a CI process, you can still use BLT; you just have to run this manually. So it's good for that, too. This is getting really into the weeds, and I was debating whether to talk about this, but I want to go over it briefly. You might be curious: what is the deploy command actually doing to generate this artifact? It copies a lot of files around.
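Stepping back to the patch workflow just mentioned: in practice this is usually wired up with the cweagans/composer-patches plugin, so the patch is re-applied every time the build runs. A sketch of the relevant composer.json section; the module name, description, and patch URL are made up for illustration:

```json
{
    "require": {
        "cweagans/composer-patches": "^1.7",
        "drupal/example_module": "^2.0"
    },
    "extra": {
        "patches": {
            "drupal/example_module": {
                "Description of the fix (illustrative)": "https://www.drupal.org/files/issues/example-fix.patch"
            }
        }
    }
}
```

With this in place, hacking a contrib module directly buys you nothing: the change would be overwritten at the next build, so the patch file becomes the documented, repeatable way to carry a modification.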
It will compile your front-end assets and build your dependencies. Interestingly, Composer has dev versus non-dev dependencies; this uses your require array, not your require-dev array, so you make sure you have only the production-required dependencies. And it does it in a way that is optimized for production, so autoloading happens faster. It sanitizes your directories. And then it commits everything. You've got to remember, I said before that we don't commit our vendor directory. What I meant was that we don't commit it to our source repository. But you do have to, at some point before things go live, commit everything so it gets there. That's your artifact: you do commit everything to your artifact. And of course, the branch we create has a special name. I usually just put a -build suffix on it. I just got a dirty look for that. And we push it up somewhere. So, recap. BLT is a box with a logo on it, and it's got three things: a template, a few tools, and commands that help you automate those tools. Those commands make complex things simple. There's one command for each of these things: create projects, boot a VM, validate your code, test your application, deploy stuff. There are more commands than this, but these are the important ones. The goal is to have a workflow like this, where you're developing locally, you've got a review process, you've got some sort of CI service built in that can run your BLT commands, and then you end up deploying to the cloud. So that is it. Thanks, everyone, for sticking around. We've got 10 minutes for questions, which seems perfect. If you have a question, please walk up to the microphone and ask, so the people on the internet can hear what you asked, and I'll respond to it. So if you use SVN, you're screwed, would you say? If you use SVN, you're screwed? Oh, SVN? Subversion. Oh, yeah. That's not going to work out. Okay, thank you. No, I mean, that's not strictly true.
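The tail end of that deploy process, production-only dependencies plus committing everything to a -build branch, can be sketched with plain git. This is a toy reproduction, not BLT's actual implementation; the composer line is shown as a comment because it needs a real project to run against.

```shell
#!/bin/sh
# Toy sketch of the artifact-commit step described above.
set -eu

rm -rf artifact-demo
git init -q artifact-demo

# In a real build, production-only optimized dependencies come first:
#   composer install --no-dev --optimize-autoloader

echo '<?php // compiled artifact' > artifact-demo/index.php
mkdir -p artifact-demo/vendor
touch artifact-demo/vendor/autoload.php  # stand-in for built dependencies

# Unlike the source repo, the artifact commits everything, vendor included.
git -C artifact-demo add -A
git -C artifact-demo -c user.name=ci -c user.email=ci@example.com \
    commit -qm "Build artifact"

# The artifact branch takes the source branch name plus a -build suffix.
git -C artifact-demo branch master-build
```

The point of the separate branch is that the artifact history never pollutes the source history: the source repository stays free of vendor code, while the -build branch is what actually gets pushed to the hosting remote.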
We could probably build that in with minimal pain, but I think you'd have to build it and contribute it, to be honest. And I'd be happy to help you out. Anyone who sees a feature that's lacking and would like it: it's a very active issue queue. It's all open, it's all on GitHub. We merge pull requests, we review them. We're very open to you telling us what should be on the roadmap and what's important. So please be active. That would be great. Excuse me? Just quickly, have you found the need so far in the deployment process to have post-artifact-creation but pre-deployment testing? Kind of. We actually implement the concept of hooks in BLT. If you have something special you'd like to do at a certain point in the build process, we give you events that you can respond to. One of those is a post-deploy-build hook. So if you had something custom that you wanted to do right at the end, after the artifact got generated, you could do that pretty easily. Just a follow-up, or clarification: I mean sort of a pristine, 99.9% prod-like environment for testing, without all the other things. So you might run Selenium tests, or purely front-end tests and integration tests, but nothing that would involve PHPUnit or that sort of thing. Is that supported? Could I add an environment into that workflow, pre-deployment but post-artifact-generation? You can certainly use BLT for that. But as in the example of the CI servers, you know, I'm saying you could use Travis; you just have to either use a service or use your own stack that provides the environment you're looking for. And once you have that environment, however you'd like to create it, you can run BLT commands there. Okay, thank you. Sure. Hi, Martin Dorson. Thank you, first of all. That was fantastic. So I am 99% on board with everything you said, but one question slash concern.
So I'm a big fan of Composer as a development tool, but especially in the enterprise context, I'm curious whether you're concerned about, you specifically mentioned not committing core and contrib. My concern is enterprise teams treating core and contrib as a black box and not having any understanding of what's going on there. Even if they're not actively writing that code, if it's committed, they're going to see it, and be somewhat forced through osmosis to at least see that code go through their pull requests. Are you not concerned about treating that as a black box? I haven't heard that concern before. I think it's a valid concern. In my experience, I have not seen this turn people off from modifying contrib modules or core modules, although my experience is with people who are already used to using Drupal. I would say that you do have this code on your computer. You're running composer install on your computer, and the contrib modules and the core modules are getting executed, so the code is available to be modified. It's just that the modification, in order to make it to production, must be captured as a patch that is then applied during the build process. It's hard for me to imagine the situation in which you need to hack core or change a contrib module and you decide, no, I'm not going to do that because it's not committed. My concern is mainly making sure that the team has a level of understanding of what's changing in core or a contrib module, even if they're not actively involved in that development process. I mean, you could go all the way down and say, well, you don't know what patches are going into Apache or into PHP or something like that, but I consider Drupal core and contrib part of the core competency of what we're working on and developing. I'll tell you this: this is a matter of opinion, of how you want to run a project, and that's a totally valid opinion.
If you want to commit them, you can; we're not going to stop you. We give you a default .gitignore, and that default .gitignore ignores the vendor directory. You can just change it. You can commit vendor. So, we're actually one of the adopters of this process and working with you guys on an initial implementation. One thing that you didn't mention, which is a huge issue within enterprise implementations of this sort of project, is firewalls, proxies, network considerations, SSH ports being closed, and I want to know how you're going to be addressing that. Yeah, that's a great question. Those concerns are absolutely valid, and we've run into that stuff. But they probably are not going to be solved, and it's probably not correct to solve them, in BLT, because those problems usually arise from using Composer directly, right? Composer has to reach out to a lot of places, and if those places are blocked by your firewall, that's a problem. Drupal VM, if you wanted to use that: if you build a virtual machine, you're using a lot of package managers, and you have to download all the stuff that goes into the virtual machine, so that's similarly a problem. If you have a CI server, you do need SSH access between machines, but again, this is outside of BLT. However, there are lots of ways to solve this. Plenty of enterprise customers have a requirement where they have to lock down port 22, no SSH access. A lot of CI providers have enterprise products where you can put the CI service on your own infrastructure, so it doesn't have to make calls outside of your network; you can have a private instance of Travis, for example. Or if you're an Acquia customer, it just runs on Acquia infrastructure anyway, so you don't have to worry about it leaving your network. If you're worried about Composer, there are a lot of Composer solutions. You can have a private proxy where you cache Composer dependencies and whitelist what's allowed to get proxied there.
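Circling back to the default .gitignore mentioned a moment ago: it looks roughly like this, abridged and from memory, so treat the exact paths as illustrative and check the template your BLT version actually ships. Deleting the relevant lines is all it takes if your team decides to commit vendor or contrib after all.

```
# Dependencies downloaded at build time (remove these lines to commit them):
vendor/
docroot/core/
docroot/modules/contrib/
docroot/themes/contrib/

# Local-only settings:
docroot/sites/*/settings.local.php
```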
You can perform your static code analysis on the things in the proxy, if your IT team says, hey, we don't put things that haven't been scanned into our project. There are lots of other fancy things you can do with Composer caching. So that's a good concern, and it's applicable to Drupal 8 in general: the way we build Drupal 8 is by downloading a bunch of other stuff, from Symfony and other places. How are we going to deal with that? There are ways to deal with it, but mostly they won't be dealt with inside of BLT. And about swapping out the CI servers: we're going to adopt Bamboo for our servers. How easy is it to change the configuration, like from Travis to Bamboo or Jenkins? I think it's pretty easy. It depends on your level of familiarity with this stuff, but you've got a CI process. If you don't use the one that BLT provides by default, you just have to decide what you want to happen. Do you want just build and test? Do you want build, test, validate? Do you have some extra stuff? I would say our YAML files were less than 50 lines instructing Travis what to do. Maybe 10 of those lines were BLT commands. The other lines were making sure the environment was ready to run Drupal: PHP is installed, Composer is installed, the memory limits are right, sending emails is disabled, Xdebug is turned off. There's a bunch of stuff like that. So that part will be on you to figure out; that's probably the hard part. Calling the BLT commands is going to be pretty easy. Thank you. Sure. Assuming this is run on a Mac, I understand that I'm going to be cloning a repo at that location, pulling it down, and running an install command, or I'm assuming there are some instructions on how to do that on the GitHub page. Yeah, that's a good question, because that is a common misconception. You should not clone this project unless you want to contribute to BLT.
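For reference, the sort of under-50-line CI file just described might look like this, in Travis syntax since that's the example used in the talk. The BLT command names and the environment-prep lines are illustrative, not copied from a real project; check `blt list` for your version's actual commands.

```yaml
language: php
php:
  - 7.1

install:
  - composer install

before_script:
  # Environment prep of the kind described above (illustrative):
  - phpenv config-rm xdebug.ini            # turn off Xdebug
  - echo 'memory_limit = 512M' > mem.ini
  - phpenv config-add mem.ini              # raise the memory limit

script:
  - blt validate   # command names illustrative
  - blt tests

deploy:
  provider: script
  script: blt deploy
  on:
    branch: master
```

Porting this to Bamboo or Jenkins mostly means re-expressing the prep steps in that tool's syntax; the BLT invocations themselves carry over unchanged.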
You can go to the project page and read the install instructions, which will instruct you to do what I showed you earlier: run that composer create-project command, which uses a different project as the template, and that template relies on this one. So it is a little bit different. I'm glad you asked that. So I can't just do brew install BLT? You cannot do that. It's a whole conversation. It's because BLT is part of your code base; it's not a global library on your computer. It's a dependency in the composer.json file for one project. So you could have 10 projects, and they'd all rely on different versions of BLT, and that would be okay. If you brew-installed it, you'd have one version of BLT on your computer, and it wouldn't be project-specific. Okay. Does it have a way to track the Behat tests via a Git repo? It's part of the main repository for the project you create, so it would just update the directory that you put the Behat files in. You'd commit your Behat tests directly to your project repository. Great. And where does it put the code for the Drupal installation? Say it's run on a Mac; where would I start finding that code? Well, whatever directory you happen to be in when you run that composer create-project command is where the code's going to go. Great. Thank you. Sounds good. I'm going to do it in the next two days. Good luck. I'll be around for the sprints tomorrow, and this afternoon, if you want to just see me, sit down next to me, and I'll walk you through it and help you overcome any problems. Great. Do you have a separate sprint for this? I don't, I'm just going to be hanging around. Is there a BoF, or where are you going to be? I'll be in the sprint rooms, informally sprinting on this stuff. Thank you very much. Yeah. Reminder: there are sprints tomorrow. I'm supposed to say that. Thanks a lot. Thank you. Hey, I'm an enterprise customer with Acquia, but I have to use Windows at work. I wish I could use a Mac.
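To pin down the install path from that exchange (you don't clone BLT; you create a project from the template, which pulls BLT in as a per-project Composer dependency), the commands look roughly like this. The template package name acquia/blt-project is per the talk; treat the exact invocation as illustrative and follow the project's own install docs.

```
# Create a new project from the BLT template (not a clone of BLT itself),
# in whatever directory you are currently in:
composer create-project acquia/blt-project my-project

# BLT is now a dependency of this one project, so every project
# can pin its own BLT version:
cd my-project
composer show acquia/blt
```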
Is there any hope for me to be able to use BLT? Yeah, that's a good question. I'll just say this: we've tried very, very hard to get lots of things running on Windows, and we've had very smart people doing it. And we've succeeded, but it's been really painful, and it's always going to be painful, even if you're pretty smart, if you're trying to run directly on Windows. There are ways to make it less painful, and there's certainly hope, but let me describe the problem. The problem is not that BLT doesn't work on Windows. The problem is that BLT is a stack of stuff, and not all of those things work on Windows, because it's like 100 things made by 100 different people, and they're not developing on Windows, and their tests aren't running on Windows. So BLT will work, but then you'll try to use npm to install a Sass package, and that's going to have a problem, and then you're going to try to automate it, and it's going to break, and you're like, ah, BLT. But, you know, it's kind of turtles all the way down at some point with the issues you hit on Windows. To get around that, virtual machines help. If you can set up Drupal VM, so you just have a Linux machine inside your Windows machine, that's a little better. Windows 10 has a built-in Linux subsystem, which is basically Microsoft shipping an Ubuntu environment with Windows. It's kind of the same thing, but Microsoft is giving it to you instead of Drupal VM. That works too. It's a little faster, and a little more setup work. We have extensive documentation on how to do it on BLT's website. The third thing you could do, which I would do if I were you, though you're probably not allowed to, is just reformat your computer and put Linux on it. That's, I mean. I'll get right on that. I'm just saying. Thanks.
So, actually, on that note: we followed Jeff Geerling's blog on the Windows setup, and with just a few tweaks, we now have Windows completely up and running with Vagrant and Ansible. Congratulations. Yeah, so you can do it. That's great. You can do it, and it's a lot better than it used to be, but I'm still scared of it. Anyone else have any questions? They don't have to be prepared; just ask whatever you're thinking. No? Okay. Oh, one more. Thanks a lot. Just one question: I noticed Vagrant was there. Is there a Docker option? Can you use Docker? You can actually use any LAMP stack you want; I should have said that. BLT is LAMP-stack agnostic. We have a command that creates a Drupal VM by default. I use MAMP, and that works fine. You can use whatever you'd like. Jeff Geerling, who maintains Drupal VM, is porting it to Docker, so BLT will support Docker as soon as that happens. Okay, thank you. Sure. Thanks a lot, everyone. Thanks.