Alrighty, so we're going to go ahead and get started as the final few people trickle in from the back. Thank you all for coming to attend our session, "You Are a Golden God: Automate Your Workflow for Fun and Profit." So, quick show of hands in the audience, just so we can get a sense of who we're talking to: how many of you have already automated at least some part of your workflow? All right, this is my crew. How many of you would say you've got more automation you want to do? Okay, cool. And if I could have the next slide — this is the feeling of automation. It's amazing, right? We call it "You Are a Golden God" because it's just fucking awesome when you get robots to do your work for you. They eliminate the pain and the suffering and the drudgery of being a web developer and permit you to be the fullest flower of the genius that you are. True story: I had never seen the movie Almost Famous when Josh put the animated GIF in our talk, and I think it does a fantastic job of capturing both the hubris and the delusion that help us get started on our automation efforts. You've probably seen the XKCD if you're at all familiar with automation, and I would really encourage you to heed the alt text and not screw yourself. There are so many ways that automation can go wrong. And who among us has not had this exact thing happen? As an engineer, as a developer, I will cop right now to having spent a factor of two, maybe even a factor of ten, more time on the automation system than it would have taken me to just do whatever the thing was I was trying to automate in the first place. Have you ever looked back and seen that? Okay, cool. So we're all in this space, we can talk about this together, it's okay. And that is really what we're going to talk about here.
We're here to talk about the high highs and the low lows, the real peaks and valleys that come with trying to augment your human capabilities with those of machines. The Sorcerer's Apprentice is a great metaphor for this: you can chop the broom in half and get it to do twice the work for you, but if it gets out of control, or it's not well intentioned, you run into kind of a gray goo problem where the machines are running wild. And so we're really here to try to identify and talk about and help everyone conceive of this notion of what I like to call appropriate automation. And it starts with agile methodology. It does, yeah. If you read the write-up, a lot of what we're going to talk about today is rooted in our understanding of agile. Now, there are plenty of methodologies: XP, Scrum, Kanban, Scrumban — it's a thing. So there are those, and there are all of the practices — retrospectives and user stories and velocity tracking — that go along with agile. And we have what I like to think of as the development-driven set, the ABCDDs of agile, if you will: ATDD, BDD, CI, CD, TDD — all of these things that people believe very strongly in. And I think in the Drupal community, and in other places where I've done consulting and worked with teams, people experience agile as a very loaded word with a lot of baggage. They experience it a little more like my buddy Chris: "But doesn't agile actually just mean twice as much done in half the time?" Right? People tend to think it's just, oh, we're moving fast, more features faster — and that's not what it's about. Yeah, it's kind of a trigger word, like communism was in the 1950s. And like all good isms, it had a manifesto. It started with the idea that we are uncovering better ways of developing software by doing it and helping others do it.
And through this work, we've come to value individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan. That is, while we value the items on the right, we value the items on the left more. So the context of this, I think, is really important. The Agile Manifesto was published in 2001 — I was thinking about the keynote this morning; that's a year after Drupal. IE 6 had not yet been released. You could still wear your shoes through airport security. And if you Googled for Gmail, you got something that looked like this. A different world. We were really young in web development, and I think it's easy to forget the context that that manifesto was written in. So the way that we experience it — at least the way that I experience people talking about agile in lots of shops — is very different from the manifesto. You're supposed to be about individuals and interactions over processes and tools, but instead it's like, oh my God, look at all the shiny tools: we've got Vagrant, we've got Docker, we've got Behat, we've got Grunt, we've got Chef, we've got Jenkins, we've got Ansible, we've got Travis — we've got what else? How did we get there? And "working software over comprehensive documentation" gets reinterpreted to say that when you're agile, documentation is against the rules. We don't have to write it; we're agile. I've heard that. There's a little blog post about that. The kind of documentation that the manifesto was pushing back against was documentation that looked like this: massive amounts of documentation that needed to be written before you could ever start on a project, documentation that kept you from getting the kind of feedback you needed about whether what you were doing was even a good idea.
And in their own words, part of this context is that they were pushing back against being blamed by management for failures that they, the developers, perceived as managerial failures. As they described it: every day, marketing or management or external customers, internal customers, and yes, even developers don't want to make the hard trade-offs. They don't want to make the decisions, so they impose irrational demands through the imposition of corporate power structures. And this isn't merely a software development problem; it runs through Dilbertesque organizations. And I think it runs, to an extent, through open source: the more we see people depending on open source for their bottom line, the more we start to see some of those irrational impositions of structure as well. So yeah — documentation. You're agile. You can document. It's okay. And the reaction against the structure often — oh, this is still you. Sorry. "Customer collaboration over contract negotiation" is the next step in this. After you've done your big document and you've said this is what we're going to do, you negotiate that contract and you're locked into it. And so they're saying, we want to collaborate with our customers — and yet I don't see agile shops necessarily implementing this. Certainly there are exceptions, but I've seen so many where it looks like this: whether the bid is from internal resources or it's for a client, we had to give this fixed estimate, this bid, or we wouldn't get the job, or we wouldn't be able to do the things we needed to do. Followed by: scope creep. We don't want to collaborate with you. Not at all. Scope creep — stop. We're going to do what we wrote down — at the same time that they're talking about being agile. And oftentimes this reaction against the imposition of structures goes so far as saying we don't actually need to plan. It's like, man, fuck it, planning is boring, let's just write the code.
And as our great war leader and president Dwight Eisenhower informed us, plans, in fact, are useless, but planning is essential. It's actually kind of a wise way of looking at it: you make a plan, and then reality intrudes, and your plan inevitably must be, to some extent, chucked or scrapped or rethought. But if you didn't take the time to get together with your team and think about the work you were going to do together — the exercise of planning — you're just going to agile yourself into some massive cluster. And yet, I have seen the light, right? I have seen teams where every commit they make to a repository is run against a battery of quick, rapid, valuable tests that inform developers if they have unintentionally caused a regression in some other part of the code. I have seen pre-deployment tests that functionally burn down and smoke-test every single function of the site. They run unit tests. They run linters. They run acceptance tests. They do cross-browser testing. They're all run by robots, and that's amazing, right? That is the kind of thing that helps you get out of this world of fearing to deploy because you're flying blind. Automation — and especially automation coupled with appropriate testing — gives you a sense of telemetry on your projects. You get to know, you sometimes have an assurance of, what's going to happen before it goes into production. Who here has ever had something only happen in production? Right? That is the worst feeling ever. Nobody ever wants to experience that again, and this is the way we as an industry advance ourselves toward escaping that kind of trap — escaping that fear. Because fear, in the words of Frank Herbert in Dune, is the mind-killer. It will destroy your brain, and so you have to get away from it as much as possible. I think even beyond fear, automation can help us get out of the realm of our job where it feels like we're just digging a ditch.
Sometimes web development is a bit like digging a ditch. You're just doing the work, putting it in, cranking it out, getting the things done, and if we didn't have computers to augment ourselves and make us more effective at what we do, I don't think many of us would actually be in this job. I started out my career as a kid being an HTML jockey, and if I didn't have grep and find-and-replace and other things that would automate my manipulation of all those documents, I would have washed out immediately, because the idea that you would go in and literally change every character with a text editor — that is ditch digging, and nobody wants to be a ditch digger. And the worst of all possible worlds is when you have inappropriate, or not very well done, automation. It's supposed to keep you from flying blind, it's supposed to give you a sense of assurance and prevent human error from crapping out in production, but the automation feels like digging a ditch too: you kind of have to fight with your automation system in order to get anything done. And what happens then is people start to circumvent the process. They go around the ditch, or they find excuses not to use it, and you end up in the worst possible world: at some point, somebody put in the effort to set up an automated testing regime, but then it was too hard to use, so people went around it, and then there was a bug in deployment, and now everyone's angry.
And part of this leads to a common perception of developers as daredevils — people who don't want to follow rules, who want to do what they want to do and not really play along with the system. I've gotten this both from shops and within the community: we want to make sure we're careful, because we don't want to put up barriers to contribution, or we don't want to stick QA between the developers and getting their code places. Which I think is a little bit crazy, and actually not like my experience working with developers; it's a mythos that exists around them. I've been doing some narrative research, collecting stories about people's experiences collaborating on and deploying software, and when asked, after telling a story about a deployment, "What would you change if you could go back and change something?" — overwhelmingly, putting safeguards in place and planning better were the things that a mostly developer audience said they would change if they could do it over. So while there's that daredevil mythos, I feel like deployments actually feel a little more like this than like the mountain bike jumping: you get ready to get that stuff to production, and you could use a guardrail here. It would be okay. Nobody's going to complain if they can keep their truck on the road and keep going. So I'm going to talk about a couple of specific examples that I think are pretty informative, from my direct experience. I've worked with the development team at Tableau. Tableau is a big software company; they make some amazing big data analytics tools that you can run all kinds of big data through, and they have hundreds of engineers on staff working on their analytics suites.
They also have a really sophisticated website that talks about all the hundreds of thousands of different ways that analytics and big data could help you, and if you Google Tableau, or big data plus almost anything, they've got a dedicated landing page for that, which takes you through a dedicated flow that will connect you to the right people and hopefully, eventually, sell you software. They have a team of five people — real developers — who work inside their marketing department and just work on that website, and they were in that terrible flying-blind place. Eric Peterson — we'll put a link up to this on the DrupalCon LA page at the end — had a great deck about what he called their continuous integration renaissance. They went from a place where they were only really seriously testing at the end of the sprint, which meant that at the end of the sprint all the integration problems would crop up and everyone would get demoralized because they couldn't deploy. They didn't have an environment for testing that actually represented production, so they would get tests that passed in their testing environment and then did not work the same way in production. This eventually led the whole team to feel like morale was at an all-time low, because so many sprints seemed to go well and then crashed and burned because they couldn't get the work integrated — or worse, they deployed stuff into production and had a fire drill to roll it back or do a hot fix. Over the course of about six months, they put in the discipline to say: we're going to start building robust tests, we're going to get environments to test in that represent production really well, and we're going to start doing this automatically on every commit.
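To make that "fast checks on every commit, full battery before go-live" split concrete, here is a minimal sketch of how such a test battery might be organized. This is not Tableau's actual setup; the steps are placeholders (echoes standing in for real lint, unit, and acceptance runs), and the function names are invented for illustration.

```shell
#!/bin/sh
# Hypothetical two-tier test battery: a fast subset runs on every commit,
# and the full battery runs before a release. The echo lines are stand-ins
# for real tools (e.g. a linter, phpunit, behat, a cross-browser grid).
set -e

fast_battery() {
  echo "running lint..."        # quick static checks on every commit
  echo "running unit tests..."  # fast, isolated tests on every commit
}

full_battery() {
  fast_battery                            # everything the fast tier does, plus:
  echo "running acceptance tests..."      # end-to-end checks before go-live
  echo "running cross-browser tests..."   # slow, thorough pre-release checks
}

# A per-commit CI job would invoke only the fast subset:
fast_battery
```

The design point is simply that the two tiers share the same steps, so the per-commit job can stay quick without diverging from what runs before a deploy.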
They don't run all the tests on every commit, but they run a battery of the good tests on every commit, and what that means is developers find out right away if there's an unintended consequence. Before they go live, they do a really, really full battery of tests, and they've managed to almost completely eliminate wonky things happening only in production. They had to give up some stuff for that. One of the things they had to give up was the ability for someone with Drupal administrative-interface experience to quick-fix something in the live environment. That was something that originally attracted them to Drupal, because it's very empowering to be like, oh yeah, that page just needs this extra sidebar — click, click, I just did it, fulfilled that request — and that's really cool. Except at this scale, where you have a really complex site, there's a lot going on, and the click-click that solves the problem on this page kind of creates a problem on another page. So they had to sacrifice Drupal's immediate-gratification, quick-fix capability, but because they were able to put all of this automation in line, they were able to get deploys going quickly enough that, in the end, it was really worth the trade-off, and they don't want to go back. So that is a place you can realistically get to. If you want another example, one I like is from my homeboys at Gizra. They're a really high-quality shop, they do a lot of interesting stuff with Drupal, and one of the most interesting things they do is follow what they call, quote, "the Gizra way." There are a couple of good blog posts on this, and a video where Amitai kind of rants about it for 45 minutes, which you can watch. What it boils down to is that they were fed up with what I like to call the Jenga school of web development. I think we've all been there at one point or another: you're working on the site, and you're getting it all together, and it's
like, perfect — and now we're just never going to touch it again. And then it's like, oh, what do I do? I've got to rebuild that. The process starts, as was mentioned in the keynote — where do you start? You start out live coding on Drupal, right? And that is the epitome of Jenga-style development, because you have no repeatable process to get to where you are. You're creating something that's inherently unstable through a series of nondeterministic steps. So what they do is build everything up in an automated fashion from the beginning. It starts with a make file, goes to an installation profile; whatever content they need to migrate, there's a script for that. So when they sit down to start work on any given Sunday, they can blow away their entire environment, run one command, and in five minutes have the site built up to the point where it's currently built in that environment. What that does to their workflow is that at the front end of a project it's slower and more expensive, but then as they start to show things to the client, and as the client has an epiphany seeing a running website and wants to make changes, they're in a much better position to adjust, because they don't have to suddenly reverse-engineer whatever got them to where they were. They can take their recipe back a few steps and head off in a new direction. It's a very powerful way to think about building: the actual work of development, not just testing and deployment, can be augmented through automation in very effective ways. And lastly, of course, there are cautionary tales. There was a project I worked on earlier in my career, a big publishing project, with four shops involved plus a big internal team — so 15, almost 20, people working on this project — and that was creating workflow problems, and it was like, we're going to get out of this
with testing. We're going to have this huge push around testing. I think it's partly because the tools in the ecosystem just weren't mature at the time — we didn't have the right methods for doing truly effective testing, which we'll talk about in a bit — but in retrospect, more people-hours were spent building this kind of overbuilt, yet at the same time kind of rickety, test system, and then the care and feeding of the tests themselves became a bigger engineering effort than the actual website. There's a way in which the automation boondoggle can take on a life of its own, which is obviously something you want to avoid — and that's why I have to emphasize the word appropriate. So there are so many things that can go wrong with automating. Josh has been talking about big-system automation, but at the very core of Drupal we have a simple automation thing we take for granted, called cron. In the process of collecting those stories, I was reminded of the role that cron plays in Drupal development. There was this fantastic story about a shop that had built a site for a client that had been in production for six months, and they were getting ready to bring it back and introduce some new features. So they deployed the code to the test server, they brought the production database back, they were testing their work, and they went home on Friday. This is a site with a lot of user interaction: blogs and forums and organic groups. When they came in to work on Monday, they got the e-mail that the server had sent out 23 million e-mails over the weekend, all of which pointed to the test server, because that's where cron had been running. A very simple, very easy thing to miss. That's one of the places where safeguards matter, and I think this brings up something really important about automation that we're going to talk more about as we build on this, which is:
automation is great, but automation also just does what you tell it to, and it doesn't account for meaning and value. So I question the value of those 23 million e-mails. A website that was in production for six months, with 50,000 users — 23 million e-mails and no one noticed probably means they did not want those e-mails. And so we have this built-in automation, with the thing we click together Jenga-style, and we say, yeah, sure, we're going to tell everybody about everything, and we are not actually focusing on the value of what we're automating and what it means to the people we're interacting with. Of all the things we can automate, those notifications — maybe not the best idea. So in getting started, let's think about what matters and what drives value, as well as lightening the load of repetitive tasks. Both of those matter, but I really like to focus on the value piece. There are things you can do: you can have documentation automatically compiled from comments in your code; there are automated notifications, representative environments, unit tests — all of these things exist, and you can move from manual testing to recorded browser tests to abstracted browser tests. All of this choice. So where are you going to start, and what are you going to do? I think Drupal is in an interesting place here, because the received wisdom for frameworks, and for people doing agile development, has you start with unit testing — with code that you can test, so that developers know it's doing what they think it should be doing. But Drupal doesn't do that. The promise of Drupal — at least the promise for me when I started, and I think for a lot of people who are new to Drupal — is that you can take things and piece them together without a lot of developer intervention. The value doesn't necessarily come from, or live in, the business layer of something you've coded and tested; it lives in these interacting modules. And so you move up from the unit
testing to integration testing, and ideally you're going to have end-to-end tests. I'm going to talk about behavior-driven development and Behat — if you haven't heard of it before, it's a way of describing in plain language what a website is supposed to do, and that's all about discovering meaning and value, something you're supposed to be doing with your business development team. But what happens — and what I did when I first saw it — is that we were so desperate to automate some of that end-to-end testing, because it was the only way we could know that the site was going to be right, that I ended up with tests that look more like this one. I wanted to describe the site in language, with these pre-built steps, and be able to capture that clicked-together value that I didn't know how to test any other way. So I have this basic feature I'm describing: I want to contribute content, so as an author I need to log in, and the first scenario is a successful login, and given a user named Virginia in the author role... and you just don't even want to read that. The point of this language is to foster conversation, and it doesn't do that. It's not serving its main, most valuable purpose. But I still did it, and I still found use in it. I think it serves, in a way, as an exoskeleton for a Drupal site, helping hold it up while you're figuring out better ways of doing things. But eventually you end up molting: that test suite, that exoskeleton, has to change in order for you to grow with what you learn and what you can do better. And this brings up the question of what truly valuable testing for websites is. There are unit tests in core and unit tests in contributed modules, which are highly effective if you're looking to refactor core or make a patch to a contributed module, but they will tell you almost nothing about whether the website you are assembling from the composition of all those pieces is working the way you intended or not. You have to move up to
some kind of outside-in, behavior-driven testing for this. And the challenge sometimes, as we are early in our learning about this as a community, is that those types of tests — the ones that, as you said, you don't even want to read — well, imagine a spec or a test suite for a site: it could include hundreds of tests written out like that. When I look at those things, it reminds me of earlier times in my career when someone would say, here's the Photoshop document, now just go make the web page. You've got to make it look exactly like that; it's not okay if it's a little bit off; you've got to make it match the Photoshop doc. Everybody's probably had that experience before. And it's not to say that having the Photoshop doc wasn't a necessary part of the process — without some idea of where you're going, you're going to have a hard time building it — but as a specification for a website, didactic, deterministic, sort of dictatorial specifications like that, or a series of tests that list out every single step in infinite detail, are fundamentally... maybe they're useful as a kind of exoskeleton for inspection, but they're unhelpful to the process. They don't really help us, in the long haul, improve our lives and make things better. We want our automation, our testing, to be like an actual skeleton, a backbone for what we do, so that as we invest in building systems for automation, no matter what kind of life the website takes on as it's built around them, they retain their validity. You'll probably have to throw some things out and recycle some stuff, but that first-generation behavioral testing, which is very exoskeleton-ish in feeling, is something that maybe we all have to go through — and we should try to think about how to move beyond it. Yeah, and it happens the most when you're trying to introduce some kind of end-to-end testing to an existing site, because you're not doing discovery at that point; if anything, you're backfilling documentation, and so it's easy to do. But a scenario like that is probably going to read more like "Given I'm logged in," and you're going to focus on what the authors are contributing and what the value of the website is, and logging in just becomes a small step in the process, something that isn't locked to the user experience directly. So I've been really interested lately by some work coming out of the Jobs to Be Done people, especially Alan Klement, rethinking this idea of the user story. In Behat, that format — and for agile generally, it's one of a couple of ways of phrasing it — is: in order to do something of value, as a particular role or persona, I need to be able to accomplish something. And for me, that always felt a little bit like ill-fitting clothes. It made a lot of sense — and this is one of the points that they make — when we didn't have easy access to our users, when we didn't have the opportunity to be networked and talk to them about the experiences they were having. It was really important for developers — and this came up in Dries's keynote, actually — to remember that there were actual people using the product you were building as a software engineer. But those phrases don't really capture everything you need to know if you're creating value. These folks would say that the persona is irrelevant, and that "what they want to do" carries a lot of assumptions — we don't know that it's necessarily the best action. The expected outcome? Yeah, that piece they keep. But they redefine the rest with the idea that we need to pay attention to users' motivations. For example, I was having a conversation with someone yesterday who works on an oncology website, and the people using their services are not just one person finding out information: they're physicians, they're caregivers, and they're people who have been diagnosed. And even someone who's been diagnosed with cancer is in a
very different place when they first receive that diagnosis, when they start their treatment, and when they're in the middle of their treatment — and the persona captures none of this. So they suggest that you focus more on situation: what are the people feeling, and what is their motivation? What this does, if that's what your feature says — instead of "in order to create content, I want to log in" — is focus us on what people are trying to communicate and where that value comes in. Then, when we're making the hard trade-offs about what we do, we have a guideline that's meaningful to make those hard decisions by; we understand what the impact of the application we're building is on the people using it. So there's a lot more emotion and motivation about what's supposed to be happening for a given feature. Now, those wireframes that Josh was talking about — you might get something like this, and we will put in the links to the blog post, because this also is some of Alan's work. You have this plain old wireframe, and you might have received it with some documentation that said what it was supposed to do — and you might not have. There might just be all this implicit functionality that wasn't in the bid, that you're now expected to implement because somebody filled up some space on the screen to make it look balanced. So you have these things, and you don't have that context. The idea is to provide context, and the Jobs to Be Done people would ask you to look at your interface more like this. In his example, everything here is annotated with what these elements are doing for their users. Why do we have "your sales rep" at the top? Well, you want to remind the customer that Joey is their sales rep, because they potentially have anxiety about not physically being with the salesperson, about them not being right there. All of those elements have reasons, so that when you can't decide what to do at the theming level, or you can't decide how to implement something, you can think about the state of the
user and what it is they're supposed to get out of it, and that will help you drive decisions and decide whether the cost of doing something more custom is worthwhile, or maybe not as important. So let's talk about what we actually do when we automate. If we want to get down to brass tacks and have the rubber meet the road, what are the things we absolutely need to do that are going to provide real value for our teams and our processes as part of implementing or enhancing our existing automation suite? Yeah — and I want to suggest, and there's definitely research out there to support this, and perhaps a lot of anecdotal experience in the room, that it's very difficult to succeed at the kind of automation Josh was describing, and even some of the more modest efforts, if you aren't working as an organization. If a small group of people decides that it's a really great idea to stop everyone from committing until the tests pass, you're messing with their workflow, and that has ripple effects all the way through the organization. In addition — like what we saw with the Gizra way slide — if you're spending time on automation up front, you're seeing those higher costs, and you need the support of business units and customers in order for them to understand why that cost is acceptable and what it's going to get you. So deciding to be enthusiastic and say "yes, I can do this" by yourself is a risky endeavor, and I think it's easy to forget — I know I've forgotten at times — to lay that groundwork and make sure that what's happening is what we want to be doing. Yeah, we were on the same page there. So, assuming you can get your buy-in, the next thing you do is make sure that your version control is on straight. Everybody should be using version control, and if you're not, you are bad and you should feel bad. The reason for this is not just that you have the ability to roll back and keep track of things, and that that's how we all do
stuff as professionals. The deeper reason is that version control creates a system of checkpoints around your work, and that is the fundamental backbone for any meaningful automation system. Anything cool you're going to do with automation is going to be linked to, triggered by, keying off of, and making use of version control. So if you've been putting it off, and I don't think many people in this room have been, but if you're watching this online and you haven't really gotten into version control yet, this is another reason to do it. It's not that painful, and you won't ever want to go back. And then the next time someone gives a talk like this, you can smile smugly to yourself that you're doing it correctly. Well, to be fair, it might be that painful if you came in with the idea that you were just going to click things together and you're suddenly confronted with git and the command line, but I maintain the pain is worth it. So, the next thing you'll need in order to automate is something to orchestrate your automation. I see people get started early on by trying to have the website automate itself as part of the development process, and this is always a bad idea. You don't want to go into automation inception, where a system is trying to affect itself in some way while it's running; that inevitably leads to all kinds of weird edge cases and is inherently fragile. There are a lot of wonderful tools out there for orchestrating automation. The undisputed heavyweight champion on the open source side is a tool called Jenkins, a wonderful Java project that you can install and set up. And if you're just looking to get your feet wet with this type of stuff, there are also a number of services that will do it for free, in perpetuity, for open source projects: you can use Travis CI, you can use CircleCI, you can use Wercker, and there are probably four or five more, because it's a hot space and a
lot of things are popping up all the time. But the point is, you need to make a choice about who conducts the orchestra: you've got the version control, but somebody has to wave the wand around and conduct the orchestration of the automation. And I wouldn't torture yourself over this choice; it can become a bit of an imponderable. Just pick something that seems good, time-box your research to an hour or so, and then move forward and be ready to change in the future, because a lot of what you build will be reusable. And lastly, it's up to you to do the automation. The automation doesn't automate itself yet; it's actually pretty dumb. None of the things it will do are inherently intelligent. You will have to think about what you want, which steps in your workflow you want automated, which things you want to happen on the basis of a script; and if you're going to be doing useful testing for websites, which our example does, you need to think about what's worth testing and why. We'll talk about that a little more, but basically: get your version control ready, pick your orchestrator, and then dive in, because you've got some development work to do. You should clear some time to do this and to learn a bit about it; it'll be time well spent, I promise. So, we have a non-live demo where I will step through an example of how you might actually do workflow automation, and we're going to start with a pull request. I use GitHub as my example because GitHub is freely available to everyone and it makes using git pretty easy; they have a nice, simple workflow. Who here is familiar with git-flow? Right. Who here actually uses git-flow? A surprising number of people. I find git-flow to be baroquely overcomplicated for website development, and that's because it was developed by people who release binary applications and need to support multiple versions
there's a lot of stuff in there for situations you don't have: you don't have multiple versions of your website running at once that you need to support. The basic GitHub workflow of create a branch, work on the branch, review the branch, and merge the branch is actually what I think 99 percent of websites really need. You can go with that simpler, less-likely-to-screw-yourself-up kind of workflow, and it gets the job done. And in here we've got Travis CI implemented, so that as I open my pull request, it says: oh, you think you should merge this back to master? Let's see about that. It goes ahead and spins that up automatically, and it orchestrates the running of all the tests. In this case, the website I'm testing isn't running on Travis; Travis is just orchestrating the setup of the representative environment, one that we know is very close to the production environment. That's not something Travis will do for you, but it can orchestrate everything and then trigger a battery of tests against it. And on the next slide, yeah, in this case I'm using the Behat tool suite to do a very simple test: it's just checking that the homepage still loads and has the right content on it. This is the type of thing where you could add five or six of these and it would take you a matter of an hour or so. The repo where I made all this is open, and you can fork it and hack on it and get your own version going. The point is that you don't need to start with automation as a moonshot, Apollo-style project. You can start with something that tests the four or five most important things on your website, maybe does a little bit of linting, maybe some other static analysis, and just let it run in the background every time you make a pull request. Then I guarantee you that within the first week it's going to come back with
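a failing build. To make this concrete, here is a rough sketch of what a Travis CI configuration for this kind of setup could look like. It's illustrative only: the build script names and paths are hypothetical, not the actual contents of the demo repo.

```yaml
# Hypothetical .travis.yml sketch. The script names and paths below are
# made up for illustration; adapt them to your own repo layout.
language: php
php:
  - "5.5"
install:
  # Pull in Behat, Mink, and the Behat Drupal Extension via Composer.
  - composer install
script:
  # Stand up a representative copy of the site, then run the test suite.
  - ./scripts/build-environment.sh
  - ./bin/behat --config behat.yml
```

The orchestrator doesn't need to be clever; it just has to reliably build the environment and fire the tests on every pull request. Then, sooner than you expect, it pays off: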
whoops, something failed, and you just saved yourself more time than you probably spent setting it up, because you won't have to go back and reload your mental context of where you were in the project at the time. You'll know right then and there that something was wrong and you have to fix it. So, this is the structure of a Behat test, written in the Gherkin syntax, which reads something like: the homepage title; given that I'm on the homepage, then I should see the homepage title. That's an example of how the syntax works. And there's some value in this, I think, because it's inherently very hackable. You're not really capturing much business value in this type of language, as Melissa was talking about, but if you have some tests like this in your repo, it's super easy for someone to copy one, paste it, and write another scenario that's also important, and it can be a way to introduce this notion of testing. You're going to tend toward that exoskeleton style, but it's a way to get started with a low barrier to entry. You don't need to do it that way, though. As a developer, behavioral testing, all that English-language layer, is an additional library on top of the core browser abstraction library, which is called Mink, and Mink lets you write tests directly in code, similar to how you might write a unit test, except it's designed to be a behavioral test. So for instance, in this case it's saying: hey, let's make sure we can create nodes, and that's cool. And if I had a website with 10 different node types, rather than having 10 instances of that test, I could have an array, iterate through the 10 things, and test my 10 node types really easily. As a developer, it gives you the ability to rapidly create those point-inspection-style behavioral tests that provide value and help you avoid regressions when you know you're
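making changes elsewhere in the system. For reference, the kind of Gherkin scenario described a moment ago looks something like this; the feature wording and the page title being checked are illustrative, not taken from the actual demo repo:

```gherkin
# Illustrative Behat feature in Gherkin syntax. The "I am on the homepage"
# and "I should see" steps are the kind of pre-built definitions the
# Behat/Mink stack provides; the title text here is hypothetical.
Feature: Homepage
  In order to reach the rest of the site
  As a visitor
  I need the homepage to load correctly

  Scenario: Homepage displays its title
    Given I am on the homepage
    Then I should see "My Site"
```

Copying and pasting a scenario like that to cover another critical page takes a couple of minutes, and it's exactly the kind of check that saves you when you're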
working over here with some config module, and oh, we did a hook_menu you weren't expecting, and something else broke that you weren't directly looking at. The other thing you can do with Mink that's really valuable, yeah, this is good, is create your own behavioral definitions. So, Gherkin, and I use the pickled gherkin because any picture of a pickle on its own is just inappropriate to put on a slide: you can create your own Gherkin step definitions, which means you can describe behavior that is of value to your website, and people can write and use those, your own verbs, your own scenarios, and those can have very complex definitions in Mink as to what they mean. So a step like "I'm a logged-in user" could actually involve a whole bunch of steps in Mink, but you don't burden the front-facing, written side of things with all the laborious, didactic definition, like that example Melissa showed before. In fact, the people who originally developed this style of testing, and the syntax, come out of the Ruby community, from a project called Cucumber. Four years ago, and they've been at this for a while, they decided, unilaterally and somewhat controversially, to remove all of the default step definitions from their library. What they said was: we're seeing people write these kind of pointless tests that don't really mean anything, like "as a person who wants to write a page, I need to write a page, so that there's a page." That's the least meaningful thing possible; it's completely senseless. So they said: we're going to take away all of the built-in step definitions, the "I'm on this page," "I should see," the things people have been using to create these somewhat useless tests. They're still available as another library
that they called training wheels, but their point was: if that's what you want to test, don't write it in English, because the English doesn't matter; just write a quick, lower-level code test. Think about the things you actually would want to write in English, and it's probably a small number of tests that correspond with conversations you've had with your stakeholders, your clients, about what the freaking purpose of the website is in the first place. Then think about how you can write code that represents those things, and you're building a much more flexible, enduring test system that you'll be able to keep using for a long time to come. Yeah, and if anyone's using the Behat Drupal Extension: the Drupal driver, the ability to interact with your Drupal site using the API as part of this whole framework, was separated from the core extension so that it exists on its own, and it's actually possible for you to use the browser abstraction layer, Mink, without using Gherkin at all. The rule of thumb for when that's appropriate is when you're a developer writing checks for yourself, which is absolutely valid. There are all sorts of things, especially in the particular position Drupal is in, where as the developer you do want to see that some things are working so that you have confidence, but that's not a conversation you're going to have with a stakeholder. Gherkin is really there for having stakeholder conversations and creating meaning, and if you mix those two together you can detract from that part of the process that behavior-driven development gives you. Developers may choose to use the pre-built steps and keep writing them because it's faster, but they may also prefer to just write the code, because it's a thing they need, and they don't need the distraction of trying to figure out how to make something seem meaningful from a
client perspective when that's not why they're writing the test. Yeah, you can just make work for yourself if you do that. So the point is: you want to use Mink as a browser abstraction layer, because it lets you do a bunch of other fun things later, but you don't need to use the English-language layer. You can write PHPUnit-style tests that use Mink to test the website from the outside in, which is, again, probably the only meaningful way to test a custom website build. So, that's right: this is awesome, because now I know that my pull request to add the new login logic didn't break the homepage, and I can feel much more empowered about doing this type of development. As a developer, I feel like I'm no longer flying blind; I'm getting my swagger back a little bit, because if you have a good set of tests you can really go out there and get after it. You can make really aggressive changes, you can try to refactor whole systems. People say cowboy coding is wrong, but you can kind of be a cowboy: go out there, run them down, rustle them up, and squash those bugs. Which actually raises a good question: why is it always cowboys and gods and such? Yeah, as we were putting this together, I was thinking about the imagery that goes with developers, and I do think part of it is just that self-perpetuating idea that developers are men, so we have a lot of male imagery around it. But I also started to think about those Dilbert-esque organizations, about communication dysfunction, and I wonder if that imagery doesn't also come from a desire to be in control of things, to be able to make things certain that other people keep changing; and with cowboy imagery, maybe a little bit of giving up on that: I don't need to be with the team, I'm gonna take my horse, I'm gonna go, I'm gonna camp, I'm gonna not necessarily have
to deal with that. And I think that metaphor is really important in how we think about the work that we do. I love the idea of automation being mechanical; I think the mechanical movement of code and configuration and database and server settings is great. But I do think we do ourselves a disservice when we think about meaning, and the creation of meaning, in that same way. So I'm perfectly content to love a mechanical automation metaphor while, at the same time, the websites I work on are often much more like gardens. They require planning: you have to decide where you plant your things, and if you live someplace with winter, when the snow comes, that deadline is real. So I think there's a mismatch when people get those metaphors confused. The thing that helps me most about the garden metaphor is that it reminds me I can't make a plant grow, not even at gunpoint. So when I'm working on a project, even though it's often my responsibility to see that things come in on time, what I need to do is create an environment that supports the growth, the production, of the things that are supposed to happen. I can't force a developer to write code on time; I can't force the client to give me the requirements when we thought we needed them. So what I need to do, with automation supporting me as that whole mechanical backbone, is work on creating that more complex environment. At the same time, this stuff is just so freaking cool. I get really excited about it, and I think you all should be really excited about it, and I hope you are, because that enthusiasm, that energy, the hubris, the delusion of thinking that we can actually make our lives better and make our work better, is inherently necessary to advance our art and our craft. And here's a final example of why you want to use Mink rather than curl or the built-in SimpleTest: what's the most ditch-digging
thing you could ever do for testing? Cross-browser regression testing, right? That is the most soul-crushing occupation, and I feel like it's a blight on the morality of our craft that we have forced human beings to do it in the past. Really think about it: what is in our minds that we then spend on loading up different browsers, with the two screens, writing the email, doing the screenshot? That is beneath us as human beings. We should not do that work; that is not what we are designed to do. We are people. And you don't need to, because you've got a machine to do it for you. You hook Mink up to a Selenium grid or BrowserStack, and the same tests you wrote that run on your localhost on every commit can run across 70 different browsers. They can test it in mobile, they can test it in Opera, they can test it in obscure versions of IE6, which never worked, and you can decide whether you care or not. That's where you want to get to, and hopefully that's the idea of what appropriate automation can do for your project. Thank you all for listening; hopefully there's been some food for thought. This is Melissa: she works at Tag1 and project-manages and facilitates and does a bunch of amazing things. And this is Josh Koenig: he's a co-founder of Pantheon and does all kinds of amazing things as well. You're going to be speaking again today, is that right? Yeah, about headless stuff. But we just wanted to thank you for coming; hopefully this was at least intriguing and gets the bubbles percolating in your head, and we're happy to answer any questions. If you have any specifics you're interested in, or technical stuff, I can try to answer. Thank you all for attending. Thank you. Is it on? So, I've got a question about the evolution of testing, with Behat 3 for instance, because Behat 3 really broke a lot of the existing Behat stuff,
and I mean, I've even seen that the Drupal community hasn't really quite caught up yet. We had workarounds for some of the weird Drupal things, like autocomplete fields, that worked in Behat 2, and I really haven't seen the community come back and fix them yet. I think that's a real stumbling block for widespread adoption: the way Drupal does some things seems to be more complicated than the average site, and autocomplete is a perfect example. It looks really simple from the outside, but testing it is really hard. Can you talk a little bit about those kinds of things and getting that widespread adoption started? So, I know that we did some work at BADCamp on getting the Drupal Extension to Behat to be more approachable, but I don't know personally what's happening now. If you can find Jonathan, if he's here, he would be a terrific person to talk to about how some of those changes have impacted things; I'll let Jonathan answer that. The other thing I'll say on this, having spent an unfortunate number of hours going back and forth between various sets of documentation and trying to figure out which was for which version of Behat: I think Behat actually has this problem because they chose to do a pretty drastic refactor. There are probably still more people out there using Behat 2.5 than Behat 3, so we're in a larger ecosystem challenge. But I will say that the Drupal Extension is a good example, and we try to do more with other working proofs of concept of how to do these things. That is the challenge for the community: to come together on a set of easily replicable, standard ways of solving our common problems rather than reinventing the wheel. Unfortunately, I think it's early days for us, so the first wheel is probably still being finished, and the challenge is whether we'll recognize it as such when it's done and not start a second one
right away. Oh yeah, absolutely. And one thing I would add, and this isn't specifically about autocompletes and testing those, is that part of the rationale behind the Behat move to 3.0 was that there were several anti-patterns that had been introduced, things like chaining the English-language step definitions, that they wanted to remove, although you can still do that, and you can kind of training-wheels pull it back in. A lot of the pain of that change is actually forward-looking, but yeah, the lag there is definitely worth acknowledging. Do you want to come up to the microphone so that everyone will be able to hear? So that's, in my slide, that's the part where the kid is pointing at you. Typically the way that will work is: if you want to do a really full regression test, and you've got the tests to do it, you're going to need to get your data, if possible all of it, even though if you have a lot of data this might take some time, from production into your representative testing environment. Then, depending on how you're doing your Drupal development, you'll need to run your update steps. Some people build all of those steps into update.php: you have a custom module, and every release comes with an update hook that goes through the steps. Or maybe you're really heavily into Features and you just do a drush fu and run that, and then trigger the tests afterwards. It sort of depends on exactly what you're building. And I will say that if you're just getting started, and it's still early days, there's no shame in having a small amount of manual work to allow an automated test to take place. If you don't have all of that scripted, but you can take two minutes to set it up and then let the tests run afterwards, that's still better than doing all the tests by hand. Does that sort of answer your question? Yeah, thank you. Sure. Quick question about, you were talking
about git-flow. We just finished a project where we used pull requests for our development process, and it worked great; we were just doing daily or semi-daily builds off of master. Now that we're in production, and I work in an organization that prides itself on enterprise thinking, quote unquote, we have to do a heap of maintenance and we need to plan our releases, even though, you know, it's just a website. So we switched to git-flow, because we felt we needed the freedom to keep merging progress on the development branch while we've already frozen what we're going to release two or three days hence, so it can go through QA and acceptance screening and all that kind of stuff. I'm just curious if you see that as a valid reason for making that switch, or if there was some way I'm just not seeing. No, no, so, yeah, I was being a bit provocative when I said that about git-flow, and what you're doing sounds like it makes a ton of sense to me. I'll also say that what you're doing is about 40 percent of what git-flow tells you you need to do. git-flow will explain that you should have four concurrent layers of work going on at the same time, and I just don't actually see website projects needing all of that. It sounds like you're using the appropriate amount of branching and so forth. The reason I make that provocative comment about git-flow is that I've seen projects go off the rails because they thought they needed to do all of that stuff every time, and then their workflow becomes very convoluted and the workflow project turns into a boondoggle that takes up more time than the actual website. So no, I think you're doing exactly the right thing. Yeah, exactly. So, how do you start? How do you start from a mortal and become a Golden God? I mean, you might have some... I'd actually like to have Melissa answer that first, because I do
think that the social and cultural component of getting this right, unless it's just you yourself, the developer, you've got to get that right first; otherwise you're inevitably going to run into problems. Yeah, so getting started with testing can begin at that unit-test level, where you're running it as a developer and it isn't even part of your infrastructure yet. Drupal is not super unit-testable until, hopefully, Drupal 8, which makes that level challenging, so getting set up locally as a developer, before you integrate into any kind of system, with the checks that you can do, is actually a really great way to get that very first piece started. When we first introduced it at a company I was working at, a few years ago now, the developers were actually really excited. They had no QA; that was one of the places where QA had gotten in the way of developers being good at their stuff, so they were supposed to take care of it themselves, because that always works, and it didn't really work, and there was a lot of tension around people demoing products to clients that weren't finished. With the introduction of a tool that the developers felt comfortable with, one that would actually work against the things they were doing, even if it was that sort of brittle exoskeleton test, it let them take responsibility for that work, and they were really excited. That spread to an organizational level, where they decided to go further: they were using Drush make files, they had other kinds of automation, and so they brought those together. That's one example. But have the conversations about what the regressions you're experiencing cost your organization, what the current difficulties cost, what problem you're actually trying to solve and how you'll be able to tell if you've solved it. That's the human side of saying: okay, we do have this problem, we want to make it better, let's try
this, and being willing to treat it as experimental within the organization in order to get started. And how you're going to really implement it depends very much on what problem you're trying to solve with the automation. Is it ditch digging? Is it regressions? Where is the pain point in the work you're generally doing? If you can approach it from that angle, you can get buy-in, and that will tell you which tools to focus on and how to go from there. I think that from a technical standpoint, you come up with a list of all the things you could test and stack-rank them based on how much work they are and how much value they provide. One of the easiest things you can do, even before you get into behavioral testing: if you're not automatically linting your code, that's one of the quickest wins. A lot of people have some of that built into their IDEs, but there's Drupal-specific linting you can do with the Coder module and other things. Just getting immediate feedback from a system you didn't have to set up and run by hand, something that, after I made a commit, told me, oh, you actually misspelled this thing, and I wouldn't have caught that until something else ran; things like that get the developers on board with the process, because it's saving them time and providing them value. That's how you build the momentum that will eventually get you to a place where you feel very confident about your deployments. But it will take time, and the point is that it's a riskier thing if you try to do it as a moonshot: we're going to do a six-month project to do testing. It should be more like: we're going to do a one-week project to do some testing, and that's going to be good, and then we'll do
another one-week project to do some more testing, and over six months of doing this, and feeling good about it every step of the way, we'll work our way up the ladder to being very confident. It's a process, man. And if you're starting in the middle, you're going to come full circle, and if you can do that full circle, especially with the Behat type of testing where you're focusing on the stakeholder conversations, if you start by getting some assurances in place and then come full circle, you're going to see your tests in a different light at every phase of that cycle. Something I didn't say that I kind of wish I had: it's really important to realize that with this automation, the most important thing about the failures you run into is getting the right feedback to the right people, the people with the power to fix the problem. If your test is failing because there are new business requirements and your code is fine, and you keep getting notified about that, or you're testing values that are supposed to change in the interface and you're a developer getting that feedback, you're not really going to be interested in it. So another part of successfully automating is making sure that the right person knows, so that things can move more swiftly, rather than sending noise to people who are trying to do their jobs. Maybe we have time for one more question, and then, I hear the music next door, it means we have to release you all. Go ahead. So, my question is about third-party and other services. How do you do testing when a lot of our code talks to third parties? Is there a stable environment on that side?
Yeah, this is a big challenge, and it's a place where the representative environment for testing is really key, because if you don't get a good representative environment, your testing is always going to be only kind-of, sort-of accurate. When you have third-party services and they don't offer something like a sandbox that you can actually test against, you have to get creative. So, if you're talking about an API implementation, something like that, one of the techniques that works best for this, and the best services out there will provide some of this for you, is to essentially mock the remote API. That lets you test much faster, because it happens locally rather than over the network, and you don't actually have to change anything. We've done a bunch of this internally, and it's a little bit of a process, but it ends up working out. Now, what that does mean is that your tests assume the remote service is working and hasn't changed; that's something you have to find another way to manage. But you can still do all those pieces; you just have to create the mock, that's the technical term, so that your code talks to something that isn't really that API but responds the way the API would with that data, and that lets you test the full life cycle of the code you're looking to implement. I think that's all the time we've got. We can hang out and chat more later, but we'll release the room. Thank you again for coming, and hopefully you appropriately automate.