So, I'm Barry Jaspin, and I'm talking about the easy Drupal hosting life cycle: how to have your site running in development, staging, and production environments, and various ways of going about doing that. There are some similar talks at this conference, or talks that go into more detail on various aspects. For example, I just attended Marcus DeGlas's talk in the previous session, where he went into a lot of detail about Capistrano and some similar tools. So there are a lot of different pieces of information; this is going to be one take on it. Before I begin, and this is probably review for most of you, but just to get some terminology clear, I'm going to talk about the anatomy of a Drupal site. Every Drupal site consists of four things. One of them is, of course, the PHP code, which runs the site. Then there's a SQL database, which is the primary Drupal database for the site. There's a files directory, which holds the user-uploaded content: images, avatars, PDFs, static files that you attach. And then there's the settings file (my abbreviation for settings on the slide is a bit unfortunate, but that's what fit in the image), and that is what ties the database and the files to the code, or tells the code which database and files to use. In a single-site setup, the settings file is a pretty simple affair. Of course, Drupal also supports multi-site arrangements, where the settings file becomes a little more interesting. In a multi-site arrangement you still have the code, but then each separate domain needs its own database, its own files directory, and its own settings file to tell the code, for any particular request, which database and which files to use. So on the previous slide, there were four pieces that you have to manage. In this scenario, where you've got two domain names, there are now seven items you have to manage. And that makes it a little more complicated.
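To make the settings piece concrete, here is a minimal sketch of what a per-domain settings file contains in a multi-site install. The credentials and paths are invented for illustration (Drupal 6 style shown; Drupal 7 uses a $databases array instead of $db_url):

```php
<?php
// sites/foo.com/settings.php -- hypothetical multi-site example.
// Drupal selects this file based on the requested domain, so each
// domain gets its own database and its own files directory.
$db_url = 'mysqli://foo_user:foo_pass@localhost/foo_db';

// Per-domain user-uploaded files directory.
$conf['file_directory_path'] = 'sites/foo.com/files';
```

With one settings file per domain, the same checkout of the code can serve foo.com and bar.com against completely separate databases and files.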
If you want to have multiple environments for your site, there are just more pieces to track. Here we've got seven pieces, but in a multi-environment setup you actually need a lot more than that. If you have a staging and a production environment, now you've got 14 different pieces to deal with. And in fact, since the best practice that I'm going to talk about is to have at least four environments (a local development environment, an on-the-server development environment, a staging environment, and a production environment), it doesn't all fit on this slide, but you're talking about 28 separate things to keep track of and synchronize from point A to point B. It's just a lot of stuff to deal with. So let's walk through it. Pretend you only have two environments, because that's what fits on my slide, and let's review some of the things you have to do if you want to work in a multiple-environment system. You start off (I'm going to define some of the terms I use on the next slide, but most of you are probably familiar with version control) with your main development branch, maybe it's trunk or master depending on what you're using, and you create a branch to push into your staging environment for testing. OK, you branch your code, and now you deploy that code on your staging environment. After you've deployed the code, you can start to run it. But first, you need to make sure that your settings files for foo.com and bar.com are pointing to the right database and files directory. Now you're ready to go. You probably want to work with a copy of your production database when you test your new code, so you copy your foo.com database across, and of course you copy your bar.com database across. And now you do your test. Let's suppose, miracle of miracles, your test goes really well. You're ready to deploy live.
OK, before you can do that, you want to make a backup of your production foo.com database and a backup of your production bar.com database. Now you've got to make sure your settings file is correct for production foo.com and for bar.com. Finally, you make a symbolic tag in your version control system so that you know the exact code that you deployed this day. Great. Now you copy that code over to the production environment, and it's live. Fabulous. OK, how many people in the room have ever pushed code to a live environment and only then discovered there was a mistake? Yeah. Then you've got to revert the whole thing. That is a pain in the neck. But that's what you have to do if you want to have lots of environments. So before I start talking about managing all these different pieces, I'm going to talk about two basic tools. Again, most of you are probably familiar with these. The first one is developing locally; I'll refer to this later. All I'm talking about there is having some kind of local development environment on your machine, your Windows or Linux or Mac machine, where you can run PHP and MySQL and Apache, so that every developer, whether you have one or many, can work in a local environment. Otherwise, a developer forgets a semicolon, adds the semicolon, and then has to rsync or make a source code commit and push the code up to the server for every little change. It's a pain in the neck; you want to work locally. There are a bunch of tools for this, and I'm not going to go into any detail. XAMPP exists for Linux and Windows. There's MAMP, which works on Mac specifically. Acquia's Dev Desktop works on Windows and Mac. They're all fairly similar. But when I say develop locally, this is what I'm talking about. The other standard thing I'm going to talk about is version control.
Prior to using version control, some people still manage their code on their servers by just editing locally and maybe FTPing it up, or they even edit the code right on the server. What you'll see in a few minutes is that for a multi-environment setup, where you're using separate development, staging, and production environments, you really can't do that. It gets too complicated. You forget which file you copied where. It becomes a disaster. So you really have to be using version control; you should be using it anyway. Lots of people walk around DrupalCon talking about different ways to manage code. There is a very simple model that works, and it's pretty much the standard model: the main branch, release branch, and tag model. That's the one I tend to focus on. The main branch, if it's SVN, is called trunk; if it's Git, it's called master. That's where you do your live development. When a developer has a patch, that's where it goes. All the different developers do their initial integration there. You create a release branch when you want something to demo to your client, or a release candidate you're going to push out. And when you're finally done and that release branch is ready to go out, you make, as I said before, a symbolic tag. You draw a circle around it and say: this is the exact version that I deployed on this day. It never changes. You have a reference for all time to the code. The tag is immutable. Then, backing up one step: if you ever need to make a hotfix release from that line, let's say you cut release branch 1.1, you make a tag 1.1.1, and you release it. And then you discover, oh my goodness, there's a bug. You make the fix in the 1.1 release branch, and now you cut tag 1.1.2. Meanwhile, in your master branch, maybe your developers have progressed and added all kinds of cool new stuff, but it's not ready to release yet. So master has progressed to who knows what point, and that's why you need the release branch.
That's the code from which your production tag is derived, and you can make a small change in your release branch and push it out to production. And then, of course, you also make that small change in your trunk. I'm not going into a whole lot of detail; I know most people know this already. I'm just defining some terms for later. The important thing to understand is that this is the simple, standard model. There are definitely other models out there. Some people feel that trunk or master should be what you're running in production at all times. That works fine, but it's not the easiest way. If you want to use one of those other models, you probably already know that, and you don't need me to tell you how to do it. Very briefly, I'm going to talk about the holy war: Git versus SVN. Here's my position: they both work. They're fine. Here's how you do the basic things we're going to talk about in this session with SVN; here's how you do exactly those things with Git. Bottom line, both of them work fine. The reason I put this slide in, besides defining my terms, is that there's a lot of discussion about how you should be using Git because you can do a rebase and a feature branch and a bisect and so on, and it can get very complicated. So my advice is: if someone is telling you that you want to do something really, really cool-sounding with Git and automate the whatever, and you don't know what they're talking about, smile, say yes, thank you, and then just walk away. When you need that level of complexity, you'll know it. If you don't already know you need it, you don't. Keep it simple. The standard model, main branch, release branch, tag, works great. Okay, enough of the preliminaries. I'm going to talk about three ways to manage a site across multiple environments. There are lots of other ways; in fact, in Marcus's talk, he had a whole slide that just listed the names of various tools you can use.
There are things from the Ruby world and things from the Java world that you can apply. I'm not going to talk today about Aegir, or about something called Drush Deploy, which is in development. I'm just talking about three approaches. The first is doing it by hand, right on the command line, with mysqldump and SSH, because I think it's important to understand the underlying operations that are being automated for you by all of these tools. Once you understand what's going on, it's nice not to have to do it anymore, but it's useful to have the basic model. Then I'm going to talk a little bit about how Drush simplifies those basic command-line operations, the ways in which it makes things better for you. And then I'm going to talk about DevCloud, which is obviously a product of Acquia. So, managing code. We've already talked about how you always want to start by developing locally. I write my code right here on my MacBook or whatever machine I'm using, and I commit it or I test it, whatever I want to do, running it with a local copy of my database. Then I'm ready to push it to my development environment, and the purpose of the development environment is really the initial integration testing among multiple developers. If I'm the only person working on a site, having a local development environment on my notebook and a development environment on the server is a little bit redundant. But if there are two, three, four, or more developers working on it, then I'm doing some local development and my buddy's doing some local development, and we may each work great on our own machines, because I don't have his code and he doesn't have my code. The development environment on the server is the initial point of integration, where you find out, oh my goodness, we both defined a function with the same name and the site white-screens. That kind of thing.
So you develop locally, and now you need to commit to whatever repository you're using, git push or svn commit or whatever. And then you've got to push that code up to the dev server. Again, the reason that's important is that otherwise there may be basic integration bugs you never find. So how do you push the code up to the dev server? Well, one common way: some people deploy straight from the version control system. There are reasons not to, and there are advantages to it as well. You log into your server; /dev in this model is your docroot. Really it would probably be /var/www/something, but that didn't fit on the slide. You cd in, and you use git pull or svn update, or if you don't want to do that, you run rsync and copy the code up. That works as well. Now you do your integration testing, and it works great. Okay, so now we're ready to make a staging release to do a demo, or for a release candidate, say, so we need to push it to the staging environment. Again, you want to create a new branch. You use your command-line tool: git branch or svn copy or however you want to do that. You push that branch up to the version control system, and then you've got to log into your server and cd into the directory, in this case /test, which is the docroot for your test virtual host, and pull the latest code in. Again, here I'm showing that you can do it with your version control system, or you can do it with rsync. Okay, assume your test went really well and now you want to push your code to production. You do the same thing again. You make a tag, using the git tag command or the svn copy command, push that up to your repository, then log into your server, cd into the production environment, and pull down the latest code with svn update or git checkout or rsync or whatever you want to do.
And it's kind of a pain in the neck, and that's why there are so many tools for doing it more easily. So how do you do this with the Drush tools? Again, you start by developing locally. You always start by developing locally. Then you commit your code; again, you've got to do that integration testing in your dev environment. Okay, so now we're going to use the Drush tools to push to development. How do we do that? Well, before, we saw SSH, cd into a directory, and run git pull. Drush makes this a lot easier. You just type drush rsync @local @dev. I don't know if everyone in the room is familiar with what's going on here, but in Drush parlance, those things that begin with an at sign are called aliases. There's just a config file that you create once somewhere, so you don't have to type everything over and over. You say: when I say @local, what I mean is, on my local machine, the directory /Users/bjaspin/sites/mysite.com. And when I say @dev, I mean on server12.whatever, here's my SSH key, and it's in the directory /var/www/dev, or wherever I put the code. Having done that once (you set it up, and it's pretty simple), you then have shorthand references. And what this command does is rsync your code from the local environment to the dev environment. Now, there is a problem here. Remember, the whole reason you have the dev environment is that if you've got multiple developers working on a site, you need to do basic integration testing. Well, if I do some development, commit the code, run rsync, and it runs in the dev environment, it works great. My buddy writes her code, commits it, pushes it up to the dev environment with rsync, runs it, and it works great. The problem is that if I forget to do a git pull or an svn update, I don't have her changes. If she forgets, she doesn't have my changes. Both of us have pushed our code to the development environment.
But we haven't really done integration testing yet, because neither one of us had the other's code. So that's a downside to using rsync that I referenced earlier, and this is the way Drush makes it easiest to copy your code around. You just have to be aware of it. If you're using this approach, everyone has to remember to always do a git pull or an svn update before the rsync, so you know you're pushing the most current, integrated code. Okay, now your dev is done, your integration is done, and you want to push to staging to make a release candidate. Locally, you make your branch using whatever tool you want, and then you drush rsync to push your code up again. Much simpler than that SSH command we had to type before. Here I've assumed I've defined an alias called @test, and Drush is smart enough that even if dev and test are on different servers, it'll set up the SSH connection and pipe things through; it handles all of that for you. All right, last stage: I want to push to live. It looks very similar. I make my tag, and I can use drush rsync again, from my test environment to my production environment, to push my code up. Great, much easier than remembering those long commands. This works well; a lot of people do this. The third option I want to show you is how this works with DevCloud. Again, you develop locally. You always develop locally. You commit your code to your version control system; that looks very similar. On DevCloud, as soon as you commit to the master or trunk branch, the code is automatically deployed in your dev environment. That's just a step you don't have to do; we do it for you. When you want to move your code from development to staging or production, there's a drag-and-drop UI you can use. What this slide shows is that in the UI you've got three environments, dev, stage, and prod, and for each environment there's a "change to" box that shows you two things.
One, it shows you what you're currently running, and on the next slide it lets you change it; I'll show that. This shows that dev is running the master branch, so it always has the tip of that branch deployed. When you drag that to either the staging environment or the production environment, it creates a symbolic tag, just like we did on the command line in the previous slides, for the tip of that branch, and then deploys that code in whichever environment you drop it on. So here you can see that production is running something from July 14th, and the last time we dropped something on staging was August 12th. This system by default names its tags with the date. We found that works pretty well, but other options are possible. So this makes it very easy to push your code across different environments, though you still have the option of manually cutting your own branches and tags if you want. You don't have to use the UI to do that. Something I didn't address with the other systems that I'll show you here is the revert step: oops, oh my God, I've just pushed out a bug. On DevCloud, the other thing the "change to" drop-down is for is that it shows you all of your branches and all of your tags at any time. You just select one, press confirm, and it pushes that code out to that environment. You could do this with the command line too, either the manual way or with Drush. With either of those, what you would have to do is use your version control system on the command line to check out the branch or the tag that you want to push live, and then type the drush command or the rsync command or whatever again. So it's just a little bit more manual, and frequently when you're trying to revert code, you want to do it as fast as possible. That's why I showed it that way. So that's code. Questions before I move on?
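Before moving on, one piece of reference material: the alias definitions behind @local, @dev, @test, and friends live in a Drush configuration file, typically something like ~/.drush/aliases.drushrc.php. Every path, hostname, and user below is invented for illustration:

```php
<?php
// Hypothetical Drush site aliases (~/.drush/aliases.drushrc.php).
$aliases['local'] = array(
  'root' => '/Users/bjaspin/sites/mysite.com',   // Drupal docroot
  'uri'  => 'mysite.local',
);
$aliases['dev'] = array(
  'root' => '/var/www/dev',
  'uri'  => 'dev.mysite.com',
  'remote-host' => 'server12.example.com',       // Drush SSHes here
  'remote-user' => 'deploy',
);
// With these defined:
//   drush rsync @local @dev                  # copy the code
//   drush rsync @local:%files @dev:%files    # copy just the files dir
```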
All right. Since we did talk about pushing out release candidates in that segment, and of course release candidate is abbreviated RC, I thought we'd take a little break and celebrate with a little bit of RC. You know, just to keep things entertaining, I decided to bring this with me. Wee! Okay, that's fine, great. Bring this home. Whoops, no, come back, come back. No! Okay, great. Sorry, I couldn't resist. That's what we do in our office all day long when things get stressful: we start flying helicopters and shooting them with Nerf guns. Okay, managing files. Remember, a Drupal site has four basic things: code, files, database, and the settings file. I'm actually not going to talk much about the settings file in this talk. We talked about code. Files work pretty much the same way, both doing it the basic way and with Drush. With files, you pretty much always use rsync. They're not versioned, and there's no problem with my buddy and me both changing a user-uploaded file. So you have these really awful-looking SSH commands; no human would ever really want to type these. But this is what it looks like. To copy from local to your development environment, you use the first bullet. I'm assuming ./ means I'm running this on my local computer. So I'm here, syncing my files directory up to my dev environment, and the next two sync up to the test environment and the production environment. Pretty straightforward, but kind of a pain in the neck to type. Drush definitely makes it a lot better. Here, again, you use the drush rsync command, which is the same one I showed for pushing code up. But in addition to aliases like @local and @dev, Drush defines (I don't know what its term for it is) this percent thing, which is a symbolic reference to a path.
So here you can say: I want the files directory from my local environment, or the files directory from my test environment. It's a symbolic reference, so it lets you copy pretty easily from one environment to another. Much, much simpler to type. And obviously you can script any of these things. One comment on the manual method on the left: it's actually a lot trickier to make those commands work if your test environment and your production environment are on different servers. You can rsync from local to remote or from remote to local, but you can't rsync from one remote to another remote; rsync just doesn't support that. You've always got to go through the local machine. So if you try to do it in one step, what you end up doing is SSHing to one server and, from that one, rsyncing to the other, so you end up forwarding your credentials; it gets a little subtle. Drush handles that for you one way or another; I'm pretty sure the drush rsync command is smart enough for that. But it's fairly straightforward, and the rsync is way nicer to type under Drush. The way we do it on DevCloud, just like with the code, is drag and drop. Let me back up a little bit. What this is letting you do is, in this example, copy sites/default/files. With these aliases that I've defined (I haven't really talked about it, but you can define different aliases for different multisites; in my initial anatomy slides I was showing how you can have foo.com and bar.com), you can make an alias for @local.foo that refers specifically to the foo.com database and the foo.com files. Then when you say @local.foo:%files, if you set the aliases up right, Drush will know that that's actually not sites/default/files, it's sites/foo.com/files.
So the Drush approach makes it really easy to sync your files for one multisite at a time. That makes it pretty straightforward. Doing it the left-hand, manual way works as well, but you've got to type the paths in all by hand. The reason I'm making that distinction is that one current limitation of the DevCloud user interface is that, while we do have drag and drop for moving files among your environments, it syncs all of your multisites at the same time; that's why it says all sites:files. So when I drag from my staging or my dev or wherever to wherever else, it takes all of the files directories under sites/ simultaneously, which is frequently what you want. But if it's not, that's fine, because you can just fall back on the Drush way, and with DevCloud we define all the site aliases for you, so you can run Drush for individual sites if that works better for you. Managing databases is not too different from managing files. The manual and Drush ways again look fairly similar. For the manual way, of course, the way all of these systems copy individual databases is with mysql and mysqldump. So, and this is a little interesting, look at the second line first: you SSH into the server, you run mysqldump, give it the source database name, and you pipe that through mysql into the destination database, and it loads it in. That works fine, except that when you do this operation, you expect: okay, now my destination is an exact copy of my source. Terrific. But suppose, for example, you're doing some development and you've added a new field, a CCK field in D6 or a field in D7, and you've pushed that to your staging environment, and you copied your database with a command like this, and then you discovered you made some kind of mistake. Well, that's good; you're still in dev and testing. So you go fix your bug.
Now you copy your database again, and you create the field again. What you discover is that the field's table already exists in the staging environment you're copying to. Why? Because when you created the field the first time, a new table was created in the destination database. When you go to copy again, the mysqldump output contains commands to drop and then recreate all the tables that are in the source, but a table that exists only in the destination database never gets dropped. You end up with an exact copy of all the data in your source database, plus whatever else happened to be in the destination. That's why, in the first line of this example, you have to drop the whole destination database and recreate it, or explicitly drop all of its tables, before you run a command like this. So this is clearly nothing any human would ever want to type on the command line. It gets even more fun if your databases are on different servers. I show more detail here than in the files case: you SSH into one server, do the dump, pipe it through gzip maybe, then SSH into the other, gunzip it, and load it in. It's not really that different; it's just more typing. Thankfully, Drush makes this a lot simpler with one handy-dandy little command. sql-sync is kind of like rsync: behind the scenes, all Drush is doing is SSHing into one server, running mysqldump, and loading the result into the other, but it abstracts it all for you. And the Drush command, I believe, handles the multi-server case as well, and it knows to drop and recreate the database. So it's obviously much, much easier than typing that horrible SSH statement.
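Spelled out, the cross-server copy that sql-sync automates is a pipeline like the one below. Since it needs real MySQL servers, this sketch just assembles and prints the commands as a dry run; the hostnames and database names are invented:

```shell
#!/bin/sh
# Dry-run sketch of the manual cross-server database copy.
# Nothing here contacts a real server; we only build the command lines.
SRC=db1.example.com;  SRC_DB=foo_prod
DST=db2.example.com;  DST_DB=foo_stage

# Step 1: drop and recreate the destination database, so tables that
# exist only in the destination can't survive the copy.
echo "ssh $DST \"mysql -e 'DROP DATABASE $DST_DB; CREATE DATABASE $DST_DB'\""

# Step 2: dump the source, gzip it across the wire, load it into the
# destination (the dump drops/recreates only tables in the source).
echo "ssh $SRC \"mysqldump $SRC_DB | gzip\" | ssh $DST \"gunzip | mysql $DST_DB\""
```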
The way we do it on DevCloud, not surprisingly, is a drag-and-drop operation. You choose where you want to copy from, you drag and drop it to whichever environment you want, and guess what: behind the scenes, it SSHes in, runs mysqldump, drops the tables, and does all the same things. And incidentally, again, you don't have to use the UI; you can use SSH and Drush on DevCloud just like you can without it. Okay, moving right along. So we've talked about how you copy all these 28 different pieces of stuff around all of your different environments, but we haven't talked about setting up the environments in the first place. If you want to have a development environment, a staging environment, and a production environment, generally those are three separate Apache virtual hosts that you've got to create. So let's assume your hosting provider gives you Apache; you've still got to set up those different vhosts. You need to choose domain names to access them by, install your SSL cert, all the things you need to do to set up a virtual host, and you've got to do it three times. You have to make sure all the right PHP extensions are installed. And MySQL: whoops, let me back up. Remember, for one site in three environments you don't have one MySQL database, you have three. Separate username, separate password, separate database name, because obviously you don't want your production site and your development site using the same database. So you've got to create three separate databases and then track all of those credentials. Obviously, you also need a code repository somewhere that you set up; maybe you host your code on GitHub or some other hosted SVN service, and that's fine.
If you do that and you want to take the next step of having automatic deployment, as we do on DevCloud, where committing to the master branch automatically deploys it, then you have to write the integration. GitHub has a hook it calls when a commit occurs; different systems work differently. So GitHub will call your server when you make a commit, but now you've got to write the code that listens for that call and then goes and does the automated deploy, if you want that. Obviously, you have to back up and restore: you have to have a backup method and be able to restore things. You have to monitor the servers, make sure they're up, not crashed, and performing well. This next one is really important, and it's why the "manual" in the title is in quotes. After you've created all this stuff, you have to know that you can recreate all of it in exactly the same way, which means it really has to be automated. Marcus actually talked about this in his session too; he talked about using Puppet, which is the tool we actually use in DevCloud. You need to know that when your server blows up, which it will, you can get it back and it looks exactly the same. Because if you manually maintain your server environment, you're going to change something someday. Oh my goodness, something's broken; let's just install this extra little package and it'll solve the problem, or tweak a config file, or whatever. And then in six months your server blows up. Did you remember to document that change? Probably not. And if you did, do you really want to have to go make all those tweaks again by hand? You really don't. So you need to automate it somehow. Again, I'm not going to go into the details, but it can be done, and there's a lot more to it. I've given previous talks, and David Strauss has given several excellent past talks, linked from various DrupalCon pages, about how you set up servers to serve Drupal efficiently.
It's not easy, though there is a lot of information out there for people who want to do it themselves. So that's the do-it-yourself approach. Obviously, in DevCloud we build the whole server for you, and one of the things we let you do, since "cloud scale" was in the talk description, is resize them. We have this little part of our UI that shows, okay, your production environment is running on server 6, which is handling Varnish and your database server and your web server. In this example I've already pressed the reboot button, so the little lower form has opened up, and there's a drop-down list where you can say, all right, I'd like to change my server from its current size to a bigger size or a smaller size. If you're hosting at a different cloud provider, Rackspace or wherever, a lot of them have this capability as well; you just choose what server size you want. But again, this only works if the building of your server is completely automated. If you switch, for example, from a 32-bit instance to a 64-bit instance, you can't just reattach the disk. It doesn't work, because you've got a fundamentally different computing architecture. You need a new root partition, so no matter what, you end up having to rebuild your machine, and if you want to be able to rescale easily, it's got to be automated. One little detail: this pop-up shows prices, and those are monthly prices. DevCloud supports hourly; you can resize, and we just bill for the hours that you use. So that's how we handle servers. That's most of the technical content I have. I just want to say that a whole list of people helped me build this product at Acquia, and the people I'm most interested in talking to now are you. If implementing all this stuff sounds like fun, please come talk to me; we are definitely hiring. And: questions. Right, so you have asked the Drupal content staging question. It has its own name because it comes up so frequently.
The question was: if you want to do an update while user-generated content is coming in, and then revert your update, what do you do? It's a hard problem, and the Drupal community has come up with a variety of solutions. None of them are actually specific to anything I've talked about; they work with any of the models for managing the environments. There's a system called Features, capital-F Features, that lets you add and remove functionality like that. There's also various content migration tooling; sometimes the changes you want to push aren't really new code changes, they're new content, a new About page, a new whatever, a static page that's not user-generated. But this isn't really the session for those. There's nothing about managing and deploying the environments that's specific to that; whichever tools you use for your environments, they all work with all of them.

So I thought I saw... oh, I guess over here. The question is: I showed using drush sql-sync to copy databases, but what about running update.php or clearing the cache or the other tasks? So, obviously I gave an abbreviated list of the things you need to do. No, drush sql-sync doesn't do any of that for you. The nice thing about having the Drush aliases is that after you copy the database, if you want to clear the cache, you run drush @test cc all. Now your cache is clear. DevCloud can automate those tasks for you as well. But those are other things that you need to do. What gets extra interesting is when you're running on multiple servers and you need to push your code, and the cache needs to get cleared, but you don't want to clear the cache until you know the code has been updated everywhere. If you run drush rsync and copy the code and it's one server, all right, great: you know when that one server is done.
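The copy-then-clear sequence described above looks roughly like this; a sketch, assuming Drupal 7-era Drush and hypothetical `@prod` and `@test` aliases defined in your own aliases file:

```shell
# Copy the production database over the test environment's database.
# @prod and @test are hypothetical Drush site aliases.
drush sql-sync @prod @test -y

# sql-sync does not run updates or clear caches for you, so follow up:
drush @test updb -y    # run any pending update.php updates
drush @test cc all     # clear all caches (Drupal 7 / Drush-era syntax)
```

The aliases are what make this pleasant: each one bundles the SSH host, site path, and database credentials, so the same two-line recipe works between any pair of environments.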
If it's three or four servers, well, you run drush rsync one at a time, but that means your multiple servers are running different versions of the code for the duration of copying it to all of them, and then you clear your cache. It can get a little tricky. Those are the things we automate in DevCloud and in Managed Cloud, which is the multi-server version of the product. So yeah, I was just giving the high-level overview, but there are those other steps you have to do.

I couldn't fully hear you. Do it, yeah, okay. Someone else can ask, and then I'll get to him when he comes closer. Did I see another hand? Maybe not. Okay, can you shout louder? Wow, I just can't hear. This is why they gave me the portable mic. Okay: the best practice for deploying a hotfix on DevCloud. One moment. Actually, I'll show it right here. In this little "change to" box that you can drop down, it lists all of your branches and all of your tags. So if you have a hotfix and you've got a release branch, you can select the release branch in your staging environment. Now you're running the release branch there. You can commit your hotfix to that release branch, and it will get deployed immediately. Maybe being near two microphones is a mistake. And then you drag from staging to production, and it will cut a new tag and deploy it. We do that all the time. In fact, just for a little self-referential mind-bending, this whole user interface that I've been showing you is part of the Acquia Network. The Acquia Network is a system where we sell subscriptions for managing various aspects of Drupal sites. The Acquia Network itself is a Drupal site. And not surprisingly, it's running on Acquia Managed Cloud, which is the system. So the Acquia Network has a subscription for itself as a Drupal site, and it has a cloud UI for itself.
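The ordering problem just described (code everywhere first, cache clear only once all servers are updated) can be sketched as a small script. This is an illustration, not DevCloud's implementation; hostnames, paths, and the `@prod` alias are hypothetical:

```shell
#!/bin/sh
# Ordered multi-server deploy sketch: push the code to every web
# server first, then clear the cache exactly once at the end.
# All names below are placeholders.
SERVERS="web1.example.com web2.example.com web3.example.com"
CODE=/var/www/mysite/docroot

# Step 1: copy the new code to every server. The servers briefly run
# different code versions during this loop; abort on any failure.
for host in $SERVERS; do
    rsync -az --delete ./docroot/ "deploy@$host:$CODE/" || exit 1
done

# Step 2: every server now has the new code, so clear the cache once.
# The cache lives in the shared database, so one clear covers them all.
drush @prod cc all
```

The key design point is that the cache clear happens after the loop, never inside it, so stale cached output is never rebuilt by a server still running old code.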
So, when we want to deploy updates to the Acquia Network, we commit them to the current branch, then we go to the Acquia Network subscription's workflow page and drag the staging environment to the production environment, and basically the Network updates itself using this exact thing. We use this all the time. In fact, sometimes we push out three, four, five fixes a day. You drag from staging to production, you hit the button, and you go to lunch. It's terrific. I've been a developer for twenty-some-odd years. I'm a total command-line guy. I would never have imagined before that I wanted a user interface to push changes out to my website. Before I built this, I would use rsync, actually, to just sync things up, and I had a makefile that automated it for me. But what we have found is that you drag your branch to production and it makes the tag for you. It's so nice that the network engineering team uses it, and we use it for all our internal sites.

All right, who is the guy who came down so I could hear? Oh, that's an excellent idea. So: some modules create their cache files in the files directory, and you also have people uploading files. How do you separate those files from each other? Oh, you're asking about site owners putting files into the files directory. Yes. Don't do that. Really, I say that based on painful experience. The files directory in Drupal is for user-uploaded files. If you start putting theme files or your own content there, things start getting really ugly. Okay, so I'm going to back off that a little. For example, and we do this all the time, we'll put up a page on acquia.com which has a case study, so we want to attach a PDF file to it.
So, in a sense, that's not user-generated content, and, getting to this guy's question, if we were doing a content staging thing where we put up our case study with the attached PDF in the staging environment and then migrated that to production, then in that case I would have a file, because it was attached via the upload module, that would be in the files directory and would need to get copied over. That would have to be the job of the content staging solution, whether that's the content migrate module or whatever. You can use rsync. rsync, by default, when you copy from a source to a destination, doesn't delete things in the destination that aren't present in the source. It's additive. So if you copy your production files directory down to your staging environment, add a file, and copy it back, what you end up with is that you've just added a file to the production environment. In fact, even if you don't copy your production files down first, you add a file to staging and copy it up, and rsync will just add the files. This works because Drupal almost never modifies anything in the files directory; it's pretty much write-once. But some people try to put certain files, like for their theme, into their code repository inside the files directory, and that's what I meant by "don't do that." The files directory has to be managed purely by Drupal, or bad things happen, because then, you're right, there's no way to separate what's a Drupal-managed file versus a code-managed file in the files directory. If you really need to do that, then just have another directory where you put those files, one that's not the Drupal-managed files directory. Call it static or something. You can put that in your code repo and commit your static PDFs or whatever it is you want. Does that answer the question? Sort of? Not really? Okay.
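The additive behavior of rsync that the answer relies on is easy to see locally. A minimal demonstration using two temporary directories (no `--delete` flag, which is the default):

```shell
# Demonstrate rsync's additive default: without --delete, files that
# already exist in the destination survive the copy.
src=$(mktemp -d)
dst=$(mktemp -d)

echo "uploaded by a user" > "$dst/existing.pdf"   # already in "production"
echo "new case study"     > "$src/new.pdf"        # added in "staging"

# Trailing slashes mean "copy the contents of src into dst"
rsync -a "$src/" "$dst/"

ls "$dst"   # both existing.pdf and new.pdf are present
```

This is why copying a staging files directory up to production is safe for Drupal's write-once files: nothing the live site's users uploaded in the meantime gets removed. Passing `--delete` would break exactly that guarantee.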
My question is: can the different DevCloud servers sync all the data? I mean, in Europe, the USA, or Asia, if I have three DevCloud servers, can these three servers sync all the data? Between your accent and the echo, I could not understand a word you said. Try not using the microphone and talk a little louder, and I might be able to hear you. My question is, this is a DevCloud server, right? Yes. If I have three DevCloud servers in different locations, like Asia, Europe, and the USA, I want the data on these three servers to synchronize. Can someone translate for me? I'm sorry, I can't understand. Okay: you have three different servers on three different continents. You want to sync the dates? Yes, on these three servers in different locations. I guess I'm not sure what you mean by "sync the dates." But sync the... oh, synchronize. Synchronize the date. Like the files or database. You want to sync the clocks on the servers. Yeah, yeah, yeah. There's a system called NTP that is pretty much the universal tool for that. It stands for Network Time Protocol. There's a package for every flavor of Linux. It runs ntpd, that's November Tango Papa Delta. You give it a time server, and there are various time servers running on the network, and then all of your servers sync to that. That's how we do it, and that's pretty much how everyone I know does it: NTP. In what kind of way? Do I use the public network or a dedicated line to synchronize the date? Can someone... I apologize, I can't understand what you're saying. I mean, what kind of way can you use to sync the different servers' dates? Why don't you come talk to me afterwards? The echo is killing me. We'll talk afterwards without the echo; I'm sure I'll be able to understand you then. Someone else? I'm not seeing any other hands. I guess we're done a little early. What, is there more? Sorry, over there. I'm sorry, it's just not working.
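The NTP setup described is just a package install plus a config file pointing at upstream time servers. A minimal sketch for a Debian/Ubuntu-style box; the pool hostnames are the public NTP pool, and you would substitute your own time source if you have one:

```shell
# Install the NTP daemon (Debian/Ubuntu; Red Hat flavors use yum)
sudo apt-get install ntp

# Point it at upstream servers in /etc/ntp.conf, for example:
#   server 0.pool.ntp.org iburst
#   server 1.pool.ntp.org iburst

# Restart the daemon to pick up the config
sudo service ntp restart

# Verify: list the peers ntpd knows about; a leading * marks the
# time source it has currently selected
ntpq -p
```

Once each server runs ntpd against the same (or any sane) upstream source, their clocks converge, which is all "syncing the dates" across continents requires.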
I'll get you next. Hold on, the microphone's over there. What do you do about table locking in the SQL databases? This is unbelievable, I am having the hardest time hearing. Speak slower, let's see if that helps. What are you doing about table locking? Table locking, when you import... right, the question is: if you run a mysqldump, then as it reads through each of your tables it locks them, and so for the duration of the mysqldump your site basically freezes. This is not a very happy thing. There are two different answers to that. If you're running on a single server, which most of the examples I was giving assumed, your only real option is to use InnoDB, which is a storage engine for MySQL that supports row-level locking and more intelligent read-only locking. Then, when you do the mysqldump, you use --single-transaction, and it works better: it doesn't lock the whole table. It takes a read lock. Honestly, I've wondered about that for a while, why it doesn't freeze the database. It opens a transaction and so it works the same way transactions do: other transactions are able to make commits, but I assume they pend until the first transaction finishes. The summary is: use InnoDB and make your dump with --single-transaction. It's still not perfect; you can still hit a situation where your dump is blocking certain tables. There's not a really great answer if you're using a single database server. If you're in an environment where you're running multiple database servers, then you can make your backups off of a slave, which is one of the other alternatives. That's what we do in Managed Cloud.
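The summary above is a one-liner in practice; a sketch, assuming all your tables are InnoDB, with placeholder credentials and database name:

```shell
# Dump an InnoDB database inside a single transaction so the dump sees
# a consistent snapshot without holding table locks for the whole run.
# --quick streams rows instead of buffering entire tables in memory.
# The user and database name are placeholders.
mysqldump --single-transaction --quick -u drupal -p mysite > mysite-backup.sql
```

Note that `--single-transaction` only gives this guarantee for transactional engines like InnoDB; any remaining MyISAM tables in the dump still fall back to locking behavior.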
There is a third option: a tool called XtraBackup, which I think comes from Percona, a MySQL consulting shop, that makes a non-locking backup off of the live database. It begins a transaction, does non-blocking reads, makes a copy, and then reads the binlog, the journal, for any changes that have occurred since it started, and applies those to the backup. So when you're done, instead of having a mysqldump file, a text file with all your SQL, you have an actual copy of the MySQL data directory, which you can then point a MySQL server at and make a dump from, because no one else is reading from it. It's an extra level of hassle. We used XtraBackup, or whatever it's called, for a while, and we found it didn't work that great, which was surprising, because the guy who recommended it to us is excellent and has always known everything he's ever talked about before, but that tool didn't work very well for us. Maybe we were using an older version and it's better now. So if you are on a single server and you can't withstand the InnoDB mysqldump, then XtraBackup is the next best choice.

Sure, what he's talking about: Drush has a ton of features. Drush is an awesome tool; everyone should use it. We use it all the time. He's talking about drush sql-dump. I showed drush sql-sync, which logs into one server, does a dump, logs into the other, and loads it up. It's composed of smaller components, and one of them is drush sql-dump, which just gives you the mysqldump, and it has options, which I assume sql-sync inherits, that let you select which tables to copy or ignore certain tables. For example, if you don't want to copy your search index from production to staging because it just takes too long, you can skip it.
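The XtraBackup flow described (non-blocking copy, then replay of changes recorded during the copy) maps onto two commands in modern Percona XtraBackup releases; older versions used the `innobackupex` wrapper instead. Paths and credentials here are placeholders:

```shell
# Take the backup: copies the InnoDB data files from the live server
# while recording changes that happen during the copy
xtrabackup --backup --user=backup --password=secret \
    --target-dir=/backups/full

# "Prepare" the backup: replay the recorded changes so the copied
# data directory is consistent and usable by a MySQL server
xtrabackup --prepare --target-dir=/backups/full
```

The result is a data directory, not a SQL text file, which is exactly the trade-off the talk mentions: you can start a throwaway MySQL server on it to produce a plain dump, at the cost of an extra step.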
You can also say: don't give me any data, just give me the table structures, and copy those across. But I'm not sure what, other than a performance enhancement... right, right. So you can ignore certain tables. The mysqldump command has these options; it's --ignore-table, and you give it table names, and Drush surfaces that same functionality: you can give it a list of tables and it passes it on to mysqldump. So it will either ignore them completely, or, what he is recommending: before you do your full copy, you do a copy of just the table structures, just the CREATE TABLE statements. Those are very fast. You do that for all your tables, so that you end up with, say, an empty cache table and an empty search index. Then you do a second copy where you ignore those tables completely, so they don't get deleted in the destination, but you don't copy their data either. So when you're done, you have all of your tables, but only the ones you care about actually have data, just to save time during the copy.

Okay, one more. Is there any partial sync option? We have a database which is quite large, a lot of nodes, and on our dev environments we don't really need all the data, just the freshest part of it. So is it possible to get, like, only the last 50K of nodes using DevCloud, or how would you go about it? All I could make out was "is it possible to get the last set of"... I don't know, could someone... I apologize for the echo. Okay, okay, so the question, and it was much easier here without the microphone, is: is it possible to copy only, say, the last 50K of nodes, or some subset of the tables? It's not an option DevCloud has, and I don't know if it's an option mysqldump has. It's an interesting question; I haven't thought about it. It's an interesting request, and I'll look at that. If there is an easy way to do it, then sure, we can put it in, because it does take a while to copy databases.
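The two-pass trick described above is built into Drush's table-list options, which dump structure only for tables you name; a sketch with hypothetical aliases and Drupal 7-era table names:

```shell
# Copy production to staging, but bring over only the *structure* of
# the cache, search, session, and log tables; their data is skipped,
# so the copy is much faster. Aliases and table names are illustrative.
drush sql-sync @prod @stage -y \
    --structure-tables-list=cache,cache_block,cache_menu,cache_page,search_index,sessions,watchdog
```

The equivalent raw mysqldump approach is the explicit two-pass version:

```shell
# Pass 1: all CREATE TABLE statements, no data (fast)
mysqldump --no-data mysite > structure.sql

# Pass 2: the data, minus the big tables we left empty
mysqldump mysite \
    --ignore-table=mysite.cache \
    --ignore-table=mysite.search_index > data.sql
```

Either way, the destination ends up with every table present but only the ones you care about populated.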
Yes, so the question is about scrubbing your database: after you copy production to staging, you want to run a test, but you don't want email to go out to all of your site visitors. Doing it the manual way, you have to write your own scrub; you know which tables you have that contain data you want to remove. So assume you've written an SQL script or something that does that. With the command-line or Drush approaches, you can just run that SQL script. What we are going to be adding in DevCloud, but have not yet, is this: in your code repository, next to where your docroot is, next to where all your code is, we're going to have a directory, and I'm not sure what we're going to call it yet, but it's like the deploy directory. Within that there will be a subdirectory named after each of your environments, so dev, stage, and prod, and within that will be a list of scripts. When you deploy to an environment, it will then run each of those scripts in order until one fails, that is, until one exits with non-zero status. So in your staging environment you could put your scrub script there; in your production environment, maybe a script that sends you an email saying you just deployed to production, or whatever you want to do. That's been on our backlog for a little while. We're planning to do it before the end of the year; we just haven't gotten to it yet. We have a lot of things on the list, but it is coming. We need it internally. As I said, we have a staging and production environment for networks.acquia.com, and we actually had, before we even built Cloud, a system for scrubbing the databases and whatever. And our network engineering team is like, hey, when are you going to get that done? Because we'd really like to use it. So: coming soon.
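What such a scrub script might look like is sketched below. The `@stage` alias, table names, and columns are Drupal 7-era illustrations; exactly what you blank out depends on your site:

```shell
#!/bin/sh
# Hypothetical staging scrub script: rewrite user email addresses so
# test runs can't mail real site visitors, and empty cached data
# copied from production. Exiting non-zero on any failure lets a
# deploy-hook runner stop the script chain, as described in the talk.
set -e

# Point every user's address at a per-user dummy mailbox
drush @stage sql-query \
  "UPDATE users SET mail = CONCAT('user', uid, '@example.com') WHERE uid > 0"

# Clear cached data that came over with the production copy
drush @stage sql-query "TRUNCATE cache"
drush @stage sql-query "TRUNCATE cache_page"
```

Because it is just a script in the repository, it versions alongside the code, and a prod-only sibling script could do something entirely different, like sending the "you just deployed" notification.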
The question is: you've got a very busy live site, you have some new code, and you want to test the new code against your current database; and I assume the reason you're saying "busy live site" is that you don't want to slow down your live site by making a backup in order to do this test. Well, let me... oh, I seem to have stopped projecting, so I can't give a demo. With any of the solutions I demonstrated, you can run the sql-sync, or you can use the DevCloud drag-and-drop, to just copy it. Okay, so this is another variation of the content staging problem. What you're asking is: you've made configuration changes in your staging environment, and now you want to integrate your live production database with the changes you've made on your staging site, like a new CCK field or a new view. Is that your question? Right, right, right. The real problem there is that you've got configuration changes in your beta site that maybe you put there by clicking in the UI, building up a new view or whatever. And again, that's the Drupal content staging problem: we really need a solution for extracting configuration state from user-generated content. That's one of Dries's main initiatives for Drupal 8. In the absence of a good solution for that, if you're not using the Features module or the various other approaches to solving that problem, then what you were suggesting is: you have your staging site, you start by copying your database down, then you make your changes to it. If you're making your changes by hand, you're sort of in a world of hurt, but you then push your new code and test it out. What you were then suggesting is: okay, now I just swap which environment is which. That's actually an interesting idea. DevCloud doesn't support that; I didn't talk about it in my talk.
It's changing the roles of your environments. There's no particular reason we don't support that; it's just never come up. In our environment, what you would do is copy your database to staging, push your code to staging, and test out your changes. Hopefully any database changes are code-driven, either by Features or update functions or whatever. Then you push your code to live, and it makes those changes. If the changes you're making are manual clicks in the UI, then the approach I just described means you've got to go manually click through the UI again in production, which is a problem, and why we need a content staging solution, or a configuration staging solution. Okay, I think I am out of time. So thank you very much.
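The staging-first workflow just described can be scripted end to end when the database changes really are code-driven; a sketch with hypothetical `@prod` and `@stage` aliases:

```shell
# Staging-first deploy, assuming schema/config changes are carried in
# code (update functions or Features exports), not made by hand in the UI.
drush sql-sync @prod @stage -y   # fresh copy of the live database
# ...push your code branch to the staging environment...
drush @stage updb -y             # run the code-driven updates on staging
drush @stage cc all
# test on staging; when you're happy:
# ...push the same code to production...
drush @prod updb -y              # the same updates replay identically
drush @prod cc all
```

The reason hand-made UI changes are "a world of hurt" is visible here: nothing in this script can replay a click, so only changes expressed as code get applied to production automatically.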