[The speaker, Nathan Lisgow, opens with an introduction in Welsh; the automatic transcription of that passage is garbled beyond recovery, though it mentions Twitter, 2006, 2008, and Drupal Cymru.] Well, Drupal hadn't arrived yet in my life. That came along in the year 2008, and it was a former colleague of mine, Matt Fielding, who suggested that we might use Drupal to meet the requirements of a project that had quite a unique feature set that we'd never delivered before. The decision to use Drupal, or any framework or CMS, is often prompted by a need to deliver a bunch of features that we've not had to deliver before. It's not that we're not willing to venture into the unknown; we've all done that before. I've been doing web development for 14 years, so you can be sure I've ventured into new ground, at least new for me. It's that we acknowledge that the fact that someone has covered that ground before might actually benefit us, and it might actually produce a higher quality of product. The code and features that we use in Drupal core, and much of contrib, are far more robust than the code or features that we could hope to create alone. The fact that this code is tested in the community, and essentially in production across many sites, is part of the reason why we can get excited about the Drupal project, and it's what distinguishes it from some other projects.
Some of us may feel that we can't deliver much without Drupal, but it can be done. We use Drupal because it has a large and flexible feature set, and the way in which the code is distributed and maintained means that our site's features are being tested, fixed, and improved to a greater extent than our own use, whether in development, testing, or production, could ever yield. Testing is an important aspect of what makes Drupal a great platform to work with. This is a strong selling point for our clients. They understand this, and it's one reason why many of them are attracted to Drupal. For the next while, we have the opportunity to look at Behat and how it can help us to deliver quality. In this room we have various levels of experience represented, I'd imagine, with a variety of areas of expertise, but we all have something in common: it's Drupal that brings us to this conference today. I'm not going to focus so much upon Drupal, though, because this is a tool that can apply whatever platform you use; what we share is that we're all involved in the development of web applications. Not only that, we have many diverse disciplines represented here, which should be the case in the DevOps track. I don't know about you, but the first time I came across the word DevOps it was almost like the first time I came across the word email: I just had no idea what it meant. Please explain it to me. You look to point the finger and think, is he the DevOps guy? Is he the DevOps guy? Then as you learn more about it, you find that rather than being the role of an individual, it's a principle that needs to be shared amongst all members of the team.
We gather some requirements, we develop some designs, maybe we build some interactive prototypes, and then perhaps we begin development. These are of course loose definitions of far more involved processes. It's not uncommon for us to check over our work and make sure that it's working at least how we expect, and when we believe we've reached the point where we've met our obligation to the client, we then show the client; if the client likes what they see, we'll push it to live or go into the next phase of development. Each piece of this process might be handled by a distinct member of the team. We're all good at what we do in this room, right? So when things go wrong, and invariably they do, who do we blame for the failure? Maybe the client's expectations: they see the site and they say it doesn't quite work as they anticipated. So who's at fault? Did the client not explain their requirements well enough? Maybe they didn't even know what they wanted, and the loose specifications document that was produced as a result was seen as a sort of vote of confidence in your team. They believed in your team so much that they knew that whatever you delivered would be exactly what they needed. That might have happened for you, this rare phenomenon, but the problem with proceeding on a loose specifications document is that it's a weak defence, when the client's expectations have not been met, to say: well, you didn't tell us, you didn't explain it. As a developer I would be dissatisfied if I was not involved in delivering applications that met a real business need; it's not just about delivering something, it's about delivering quality. In a team effort, no individual success can compensate for project failure, whether of the whole project or of a stage of development.
Just as in a football team there are different roles: the defence might have played a really good game, the goalkeeper too, but if the team loses, that sting of project failure is going to be felt throughout, no matter how well an individual played. So let's set our roles aside for a second. What is it that we in the application world do? We deliver. All of us, whatever specific piece of work we do, should have our eye on the prize; we should all have a route to success. So what is it that we are delivering? Theodore Levitt said people don't want a quarter-inch drill, they want a quarter-inch hole. As each member of the team understands what the real problem is and where the real value lies in the application that we're developing, we can begin to engender more quality in the work that we do. By the way, this is a principle that applies not only to software development but to attaining any goal. We must communicate amongst all stakeholders what the value is in the work that we are doing. Ultimately, the complexity or the time intensity of the work you've done will pale into insignificance when that work is considered to meet a real business need. Of course they're going to pay you for the work, but they don't care that it took a long time; they don't care that it was really complex. If you want to make your client happy, meet that need. Work hard, that's part of it. In order for us to ensure that we are solving the right problems, we need to create a framework to offer assurances to all stakeholders that we are on the right track. This is a good time to talk about Dan North.
Dan North created the first ever behaviour-driven development framework, or BDD framework, called JBehave, followed by a story-level BDD framework for Ruby called RBehave, which was later integrated into the RSpec project; that story runner was in turn superseded by the Cucumber project, which some of you may have heard of. Dan North's Introducing BDD article appeared in the March 2006 edition of Better Software magazine. The idea of behaviour-driven development evolved as Dan articulated his responses to many of the concerns he was faced with when encouraging development teams to approach their projects using test-driven development. They would ask: where do we begin testing? What do we test and what do we not test? How much do we test in one go? What shall we call our tests, and how do we understand why a test fails? I hope to cover some of these responses as we look at what Behat can deliver to us as a BDD framework, but I recommend that you all read the article; I'll put a link to it in the slides. Key to all of this is that we need to communicate more effectively in our teams what the acceptance criteria are for the work that lies ahead. We need a common language. One sure way of not meeting the client's needs is for the development team to have their own way of describing things, to adopt a different language. If we cannot make the effort to adequately articulate the expected behaviour of our application, then we should not be in the business of developing quality, enterprise-standard applications. We need a ubiquitous language that acts as a vehicle for communication between the different roles in a software project. Everything that we do should revolve around a business value or user need. We have attended too many user story workshops where the focus of the meeting became more about convincing the client that we cared about the user than actually trying to draw out what the real problems were and what the real solutions were that the user needed.
If a feature being discussed does not deliver a benefit to a given user, then we should be in a position to challenge it. The process will help us to deliver not features but business value. The document that comes out of such an exercise can then become a measure for delivery: our work is done when the agreed business needs have been met. So a feature declaration should contain a user, a benefit to that user, and a feature that delivers that benefit. Consider the feature declaration: as a website user, I want a user registration form so that the site admins can have my information. There we have a benefit that we should feel inclined to challenge, because we need to consider what the benefit is for the user who is encountering the feature. So this process is not only about gathering requirements; it's about challenging requirements and ensuring that what you do will bring real business value. Unfortunately, a feature declaration alone will not allow us to confidently deliver an acceptable product to our client. Our client will have, or should be encouraged to develop, expectations about the behaviour of a given feature. In the case of a user registration form, they must visualise the process of filling out that form and what would happen when the form is submitted, because if they don't do it at the beginning of the project, you can bet they're going to do it when they actually test the thing. If our imagination is lacking in the planning stage, then it needs to be awakened, because resource is expensive and our top developers are about to embark on solving the wrong problem. If we can adequately describe and document the various scenarios that our users will face when encountering this new or improved feature of our website, then we can find ourselves in a position where the client will say: if the feature behaves in the way that we have described here, then we consider this to be acceptable. So a story's behaviour is simply its acceptance criteria.
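Written out in Gherkin, a feature declaration of the shape just discussed names the user, the feature, and the benefit. This is a sketch of the registration-form example, not a verbatim slide:

```gherkin
Feature: User registration
  As a website user
  I want a user registration form
  So that the site admins can have my information
```

Notice that it is precisely the "So that" line here we should feel free to challenge: it states a benefit to the site admins, not to the website user making the request.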
Given some initial context, when an event occurs, then ensure some outcomes. Let's take a look at what a real example might look like. I've taken this example from the behat.org tutorial, and the feature in question is search. This is a perfect introduction to the Gherkin language, which was first introduced with the Cucumber project. We can see here that we have a structure for how to lay out our feature declarations and the scenarios that describe the expected behaviour of the feature. This document is our high-level acceptance test. This document could be used by a tester to verify that the feature works as expected. Any testers in the room? Would you be happy with a document like that to accompany the user interface that you are about to test? You'd be clear on what was expected. So what are we delivering? We are delivering these clearly defined features that the client has deemed acceptable. Developers in the room, would you be happy with a document like the one we saw on the previous slide to direct your work on a given feature? I'm a developer, and one of the first things that I concern myself with when I get a new project is: what is it going to feel like at the end? Are we going to have a happy client at the end? When I get a loose specifications document, that worries me, because it's not clear to me what I need to do in order to make the client happy, in order to meet their expectations, in order to bring value to them. We stop developing when the business value has been delivered. We may be tempted to deliver more than what is expected. This is a common trap with Drupal development, particularly with modules that do more than you expect them to do. This might work out, but you may wish to tread lightly, because any feature or behaviour will need to be supported, and that new behaviour which was not documented might not work in quite the way that the client expects and cause you more pain than you imagined.
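The search feature referred to above can be reconstructed roughly from the behat.org quick-start tutorial; the field and button names below are those of the Wikipedia search form that the tutorial targets, so treat them as that tutorial's assumptions rather than something on the slide:

```gherkin
Feature: Search
  In order to see a word definition
  As a website user
  I need to be able to search for a word

  Scenario: Searching for a page that does exist
    Given I am on "/wiki/Main_Page"
    When I fill in "search" with "Behavior Driven Development"
    And I press "searchButton"
    Then I should see "agile software development"
```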
When we achieve the accepted behaviour of a feature, we know that we can confidently deliver. In order to deliver, we must test, and test often. A few years ago, I was banned from driving for six months. It wasn't for excessive speeding, but for speeding on a number of occasions, for accruing points. I ought to have verified my speed more frequently; I had the tools available to me to stay within the speed limit. In order to avoid speeding, I must know the acceptable speed in the area that I'm in, and I must know what my current speed is. However frustrating it is to get a speeding ticket, imagine the struggle to stay within the speed limit if we did not have the speed limit for the current area available to us, and if our car was not fitted with a speedometer. As a developer, I should be interested in my ability to meet an acceptable level of quality when working on a particular feature. In fact, this might be a good time to mention unit tests. I want to make it clear that there is no conflict between the kind of tests that cover the behaviour of an application and the tests that ensure that the smallest components of our application, maybe the unseen heroes of our application, can and ought to be tested at the unit level. While unit tests help to ensure that we build the thing right, acceptance tests ensure that we build the right thing. I'm making a habit of this, aren't I? So yeah, this is the slide; these slides will be available later. An acceptance test verifies that the feature works exactly the way the customer team expects it to. As mentioned, it ensures that we have built the right thing. When an acceptance test passes, it indicates that the stakeholder will deem your work acceptable. This is when applications go live; this is when final invoices are paid. So let's mention Behat. Behat is a PHP framework for testing your business expectations.
It is heavily inspired by the Ruby Cucumber project, and we owe a lot to this man, Konstantin Kudryashov, for his dedication in successfully porting this project to PHP and being such an advocate for BDD. Maybe before this presentation is through, or immediately after, you can tell him on Twitter how excited you are about Behat. The significance for us in the Drupal world is that this library is written in PHP, and many of us are quite confident programming in that language. So let's take a look at Behat a little closer. Feature declarations are written in the Gherkin language. These documents are parsed by the Behat script library. The behaviour of the feature is simulated as the steps that have been written in the structured feature documents are used to trigger browser events or to report on what is returned in the browser. The result of the Behat tests is a report telling you whether the feature works as expected. The steps in each scenario are matched up with functions in a feature context object using annotations. On the screen we see steps that are available in the MinkContext, which comes with the Mink extension for Behat. This shows us how the human-readable Gherkin language... What's that, sorry? On this one? This here is a screenshot, so I can't on this; I will on the demo. This shows us how the human-readable Gherkin language of our feature declarations and scenarios is mapped to a test that can be automated for us. The step Given I am on "/user" is parsed and triggers the visit method, passing "/user" as a parameter to that function. It is easy to see why someone would package together a bunch of step declarations that we encounter commonly across all web applications. We will find for many of our applications that many of the steps we would call upon have actually been predefined in the Mink extension. So it is possible to cover much of the behaviour of our applications without having to write much, or any, PHP code.
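To make the annotation mapping concrete, here is a sketch (using the Behat 2.x era API) of how a step like Given I am on "/user" is matched to a PHP method. MinkContext ships an equivalent step out of the box, so this is illustrative plumbing rather than code you would need to write yourself:

```php
<?php

use Behat\MinkExtension\Context\RawMinkContext;

class FeatureContext extends RawMinkContext
{
    /**
     * Matches steps such as: Given I am on "/user"
     *
     * @Given /^I am on "([^"]*)"$/
     */
    public function iAmOn($path)
    {
        // The quoted path captured by the regular expression arrives
        // here as $path and is handed to Mink, which drives the
        // browser to that URL.
        $this->getSession()->visit($this->locatePath($path));
    }
}
```

When Behat runs a scenario, it scans the annotations in the feature context, finds the regular expression that matches the step text, and calls the associated method with the captured groups as arguments.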
Of course, we should challenge the language used in the steps, even when the behaviour of a step matches your requirements. If you do not believe that the human-readable language is adequate, then I suggest that you go ahead and write the step in a language that means something to your team, and then write a PHP method in your feature context that routes to the function that will trigger the expected behaviour. Do not compromise on the language used. I was excited to learn that there is a Drupal extension available for Behat that makes some additional steps available that may be considered common for Drupal sites, and that also bridges some functional gaps you would otherwise encounter, where you would be faced with the task of writing a function that determined that you were logged in on a Drupal site, or actually logged you in, so that you could perform some tests as an authenticated user. We've got Jonathan Hedstrom and Melissa Anderson to thank for actively working on the Drupal extension, and the efforts of those two and others who have worked on the Drupal extension, along with those who have worked on the Mink extension, mean that in some cases covering your web applications with Behat tests will not require you to write a single line of PHP code, because the steps available from these extensions may adequately cover the expected behaviour of the features of your application. Behat is an acceptance testing framework; with the Mink extension enabled, it becomes an acceptance testing framework for web applications. The browser is the window through which web users interact with web applications and other users. Users are always talking with web applications through the browser.
In order to test that our web application behaves correctly, we need a way to simulate this interaction between browser and web application in our tests. We need Mink to do this. Mink is a common gateway between our application and the browser. We still need a browser. If we want to test our applications in browsers that we're familiar with, then we need to use the Selenium or Sahi service, which should be running on the machine at the time the tests are run. The benefit of using actual browsers is that you can determine that the site works as expected in browsers that are actually used in production, and these regular browsers support JavaScript inherently. For many of the tests it may be fine to use a headless browser, and the default that ships with Behat is the Goutte driver, where we can benefit from the speed gains involved. For a scenario such as "as a website user, when I visit the homepage I should see five news articles", unless that content is served up by Ajax, it might be sufficient to allow it to run in a headless browser. PhantomJS is a headless browser which Sahi can interface with; I have been able to use this with good results. We can target specific scenarios to be tested in a JavaScript-capable browser while others run in a non-JavaScript headless browser. We do this by writing the tag @javascript above the appropriate scenarios, and if in the behat.yml config file we have listed which browser and driver to use for JavaScript, then those scenarios will run in the JavaScript browser. So now I'm going to do a demo. Okay. Through the magic of 3G. Okay, so I've deliberately stayed close to the features that are written up on the behat.org website, because I want everyone to come out of this eager to learn, eager to do a bit more than you've already done.
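A behat.yml along these lines would give you a headless Goutte session by default and route @javascript scenarios through Sahi to Chrome. This is a sketch: the keys shown are from the Behat 2.x MinkExtension, and the base_url is a placeholder, so check the documentation for the version you install:

```yaml
default:
  extensions:
    Behat\MinkExtension\Extension:
      base_url: 'http://example.com'
      default_session: goutte     # headless driver for plain scenarios
      javascript_session: sahi    # used for scenarios tagged @javascript
      goutte: ~
      sahi:
        browser: chrome
```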
Some of you may be more experienced than me, but if you haven't used Behat before, please go on the behat.org website and run through some of these tutorials, and you'll have a good experience with it. So the first thing that I wanted to demonstrate is that we have a lot of step declarations available to us. What's the font size like there? No? It's not good. What can I do about that? Turn the light out? Or can we do the lights, actually? Is that a possibility? I don't. Is that better? No? Come closer. Shall I go on, or is it terrible? It's only going to be the prompts that are great. Okay, let's go. Good. The first thing that I wanted to demo is that when you get Behat down, and we use the Composer package manager to do that, and I'm not going to cover the install here, it'll just take too long and we've got problems with the internet anyway, but you install it, and the first time you run Behat not much will happen. But there's a parameter that you can pass that lists all of the steps that are available. Actually, you're seeing there what would be output with a MinkContext, but I'm going to revert back to using the default Behat context, which is what it ships with, and you'll see there it comes out with no step declarations, so you're going to want to extend the Mink extension, so I'll go back and do that. Then you've got a bunch of step declarations that will cover much of the behaviour of a web application, and you get even more with the Drupal extension.
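Because those predefined steps compose directly in Gherkin, whole scenarios can be written without any PHP at all. As a sketch, a scenario outline (a template that runs once per row of its Examples table) built entirely from ready-made Mink steps might look like this; the field names and search terms are made up for illustration:

```gherkin
@javascript
Scenario Outline: Searching returns the expected result
  Given I am on "/"
  When I fill in "search" with "<term>"
  And I press "Search"
  Then I should see "<result>"

  Examples:
    | term                        | result                     |
    | Behavior Driven Development | agile software development |
    | clowns                      | evil                       |
```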
So the first feature that we're going to demonstrate, and we're going to build on this one, is just one scenario for a search feature. We've declared our search feature here: we've specified a benefit, we've specified the user, and we've specified the feature in a descriptive way, in a way that could mean something to every member of the team. And we've created a scenario: given I am on this page, when I fill in the search with "behaviour driven development" and I press the search button, then I should see "agile software development". I'm going to run this in the headless Goutte browser. By default it would actually go through all of the features, but I just want to target this one, so I'm going to use the tag that I've created for it, called search01. It's stepping through there. It's going to do this, isn't it, 3G? Yay, so it's gone through. Could be a long day, eh? How much time do we have? Because I forgot to start my timer. Okay, all right, so it passed. We had one scenario there and four steps, and it all passed in our headless browser. So I've just built up on that here. I used a scenario, but we want to run multiple tests around that scenario; we're not going to accept that it works just because we put in "behaviour driven development" and it returned what we expected. We'll put a number of things in there. So what I've done here is I've taken that example and, rather than a scenario, it's a scenario outline, which acts as a template. It will consider this to be one scenario because we've only provided one input, but we could add to that. So we could add: when I type in "clowns" I expect to see "evil", because you do, right? So let's run that again. This is in the headless browser; this would normally work much quicker. Okay, so we had one scenario outline, but that's considered to be two scenarios because we've run through it twice. They both passed, and there were eight steps that
passed. So for the next step, I'm running that one again. Actually, because of the speed, I'm going to remove the one we know works correctly; I just wanted to introduce a second scenario. So we're covering searching for a page that does exist, and now we're going to search for a page that does not exist, and we're just using a scenario rather than the scenario outline. After this one, rather than running all of them, I'll just target a couple, so we can see some of the other aspects of this. Had this been running quicker, I would have liked to have taken you through about ten different ones. So I ran through the first one, which was searching for a page expecting that the page would return a result, and then the other one, expecting that the page would not return a result given the search term that was added, and both of those passed. So we're building up our scenario library here. And here I've got a scenario outline, just as before, and I'm going to remove this one entirely just so that it'll run a bit quicker for us. Here I've converted this to a scenario outline, and just so that you can see this actually working in a real browser, I've tagged it with @javascript. We don't need JavaScript to run this, but we'll see it happening. So the behat.yml file, which contains the default configuration for this, just contains the path for where I want my feature context to live and also the URL that I'm targeting. If you were targeting the dev URL, then you would probably have a separate behat.yml, or there are other ways to do it, which would specify the dev URL rather than the production one. I've also specified that we're going to use Goutte as well, but to test it in Chrome I've asked it to use Sahi to generate the JavaScript session and to use the Chrome browser. Sahi, when you install it, is by default ready to go with Chrome, Firefox,
and Safari, I believe. I've also added support for PhantomJS; well, not added support, but I enabled it. There are plenty of tutorials available online to help you to do that. So I'm actually going to specify that we use that YAML file and start Sahi. So, loading up the page... this will work because it doesn't require JavaScript; we may struggle to demonstrate the JavaScript stuff because I have it die after a certain amount of time, which is reasonable. Yeah. Is it still down? And it's still down. So it's typed in "fish", and in England we eat fish and chips, so that should be expected. I'll break out after this one, I think. So it's looking on the results page. I haven't seen that before, but the first one ran through successfully anyway, so I'm happy enough to move on from that one. So on to the next one. Obviously that didn't require JavaScript; I just wanted to demo it coming through on Chrome. Now the next thing that we want to do is test the autocomplete functionality. So: when we're searching for a page with autocomplete, when we fill in the search with a given term and I wait a designated amount of time, three seconds, should I increase it to 60 seconds? Then I should see "behaviour driven development". You know, if we don't get to see this it'd be a shame, but there is a step here that I want to introduce, so it's going to fail anyway, and I'm going to just remove this for now, just to show you what I hope to demo here. So I'm not going to use the JavaScript browser here, just the Goutte browser. Still slow, isn't it? But it's going to break, basically, on the step that says "and I wait a number of seconds". That step doesn't exist yet, and it'll prompt us to add it as a step in our feature context. It's even going to give us the code to inject into the feature context as well, so we can just copy and paste it in. There's also a parameter that
you can pass, called --append-snippets, which will actually inject it into the feature context directly for you, so you don't even need to copy and paste. When I saw that the first time, I thought, man, whoever implemented that has way more time than I have. But I feel that about a lot of these projects: I feel in awe of the people that contribute these tools, because they have put a lot of work in at their own cost and time, and we do appreciate that. So this is timing out all over the place because of the 3G, but needless to say, we're encouraged to use the step declarations that are available to us in the MinkContext, and we've got the Drupal context available to us, but we can add additional steps. So if I go to the feature context, it's quite bare at the moment. I've added this; I'll explain that one in a second. But here we've got just an example script that we could use to create our own step, and I'm going to create a step that would have been useful for us if we wanted to wait a designated amount of time. You Sublime gurus are probably thinking, what the heck is he doing right now? I'm learning, okay? So the step that we wanted to run, rather, I'll just jump straight ahead for this one: "and I wait for the suggestion box to appear", because the first one was "and I wait a designated amount of time", so it was "and I wait 3 seconds". The cool thing it would have done as well is that it would have generated an example method, but it would have also detected that there was an integer in that string and replaced the integer with a bracketed \d+, so that it can actually pass the integer as a parameter to that function. That makes it more useful for when you want to use the same step to wait for different amounts of time. Now, what was decided on this step declaration is
that it wasn't really using language that meant something to the team. We don't wait a designated amount of time; we wait for the suggestion box to appear. So we've changed the language here to say "and I wait for the suggestion box to appear", and we go to the feature context, where it does something to that effect, and it'll suggest that we write our methods in a way that resembles the language of the step. So it'll be something like iWaitForTheSuggestionBoxToAppear; that doesn't need a parameter. Then the code that would allow us to wait that amount of time is something to this effect: the next time it was run, it would come across that step, match it up with the annotation that existed in the feature context, run that function, and then either wait five seconds, which I've designated as a timeout, or stop when the suggestion results box appeared. That can save us a bit of time on the JavaScript browser as well, and it also covers the expected behaviour, because that's what our users would expect to happen. So again, here I've got a scenario outline that we would have used in that instance, waiting for the suggestion box to appear, and we can just supply different results. If I'd run this in the Chrome browser, we would have seen it filling in the search, the suggestion box appearing, it would verify that the suggestion is there, and then it would pass, and then it'd just run through them all and say that all your tests had passed.
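A step definition along those lines might look like the following sketch, added to the feature context; the five-second timeout and the CSS selector for the suggestion box are assumptions you would adjust to your own markup:

```php
/**
 * Renamed into the team's own language: we don't wait a designated
 * amount of time, we wait for the suggestion box to appear.
 *
 * @Given /^I wait for the suggestion box to appear$/
 */
public function iWaitForTheSuggestionBoxToAppear()
{
    // Mink's wait() polls the JavaScript condition and returns as
    // soon as it is true, or gives up after the 5000 ms timeout.
    $this->getSession()->wait(
        5000,
        "jQuery('.search-suggestions').children().length > 0"
    );
}
```

A scenario can then say "And I wait for the suggestion box to appear" and stay readable to every member of the team, while the waiting details live in one place.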
Another thing I could have demoed is that on a failure it might be useful to trigger an event. The other day on Twitter I saw that somebody had integrated Behat with Mantis, which was their issue-tracking platform, and on a failure it would create an issue to notify the appropriate team member, and you can attach a screenshot as well. PhantomJS allows you to run scripts, and in my feature context here there's an afterStep function that gets run at the end of every step. I'm just saying: if the browser name is PhantomJS, and the event's result is a failed step, then execute a script. That script takes a screenshot, saves it into a designated folder, and puts a date stamp on it, so you can trigger other events from there. A cool thing about Behat is that it's just a script library at the end of the day. You can drop it into your codebase, so you've got your docroot and then you've got your Behat tests alongside it. A lot of people have written Behat tests that cover sites we're aware of, like Drupal.org or Commerce Kickstart, and you can just download them and run them against a site that's already up. The Behat tests for Commerce Kickstart have been put together by Graham Taylor; I think it's a work in progress, a proof of concept, but you can install Commerce Kickstart on your machine and then run the tests written up in those features. I apologise about the internet; I would have liked to have shown you some flashy demos, but I hope there was value in what we did see. We saw the JavaScript tests working, and I've got Commerce Kickstart installed, but I should have thought ahead that the internet might die. Sorry. So, now that we've seen a demo, I hope you can see that this is something you could easily incorporate into your projects.
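A minimal sketch of the screenshot-on-failure hook just described, assuming Behat 2.x hook annotations and a Selenium2/PhantomJS-style driver that supports getScreenshot(); the result code, folder, and file naming are assumptions for illustration:

```php
<?php

use Behat\MinkExtension\Context\MinkContext;

class FeatureContext extends MinkContext
{
    /**
     * Runs after every step. If the step failed (result code 4 in
     * Behat 2.x), grab a screenshot from the driver and save it with
     * a date stamp into a designated folder.
     *
     * @AfterStep
     */
    public function takeScreenshotAfterFailedStep($event)
    {
        if (4 === $event->getResult()) { // 4 === failed step
            $driver = $this->getSession()->getDriver();
            if (method_exists($driver, 'getScreenshot')) {
                file_put_contents(
                    '/tmp/behat-failures/' . date('Y-m-d-His') . '.png',
                    $driver->getScreenshot()
                );
            }
        }
    }
}
```

From the same hook you could equally notify an issue tracker, as in the Mantis integration mentioned above, since the hook is just PHP.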
I hope you can take something away from this presentation, whatever your experience of applying user stories, developing acceptance tests or using Behat itself. If you have never installed Behat, please, over the next few days, install it and have a play. If you've used Behat before, maybe you've been prompted to use it in a different way. Or maybe you're fairly familiar with Behat but have been struggling to communicate to all the stakeholders the value in behaviour-driven development, and I hope I've helped you articulate a little better the value of using Behat to protect a project from failure and to assist you in delivering business value. This is a tool for those who care about quality and about delivery; it's a structured approach to delivering quality. For me, the ultimate expression of automated acceptance testing is as an aid to continuous integration and continuous deployment. It might be considered the crucial measure that our latest commit has not compromised the integrity of our application: its ability to behave as expected. Having the application covered with automated acceptance tests makes it easier to confidently deploy a fix or feature enhancement to a site. I'm not going to talk about the merits and challenges of continuous deployment, but for those who hope to be in a position to introduce automation into the release process: can you envisage a scenario where you introduce a fix to a website and commit the code to the appropriate Git branch? The simple act of committing that code could trigger a chain of events that deploys a clone of your production site: code, files, database and environment. Your new code is introduced, and a few Drush commands automatically run to trigger the pending update hooks and clear the cache. Because your Behat tests are in the Git repo, you can trigger those tests, simulate the behaviour of the whole site, and determine whether everything works as expected.
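The chain of events just described could be sketched as a deployment script along these lines. Everything here is a hypothetical placeholder — the site aliases, paths, branch name and tag filter are assumptions, and this is an outline of the idea rather than a working pipeline:

```shell
#!/bin/sh
set -e

# 1. Build a clone of production: database and files
#    (assumes Drush site aliases @prod and @stage exist).
drush sql-sync @prod @stage -y
drush rsync @prod:%files @stage:%files -y

# 2. Introduce the new code and run pending update hooks.
git -C /var/www/stage pull origin master
drush @stage updatedb -y
drush @stage cache-clear all

# 3. Run the Behat tests that live in the repo alongside the code,
#    skipping anything tagged as unsafe for this environment.
cd /var/www/stage/tests
bin/behat --tags '~@destructive'
```

A failure at step 3 would halt the chain before anything reaches production, which is exactly the safety net the talk is arguing for.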
A successful run of the Behat tests could trigger a Git tag being created and a deployment being scheduled to the production environment. If the Behat tests fail, we could automatically generate a screenshot of the interface at the point of failure and advise the interested parties that an unexpected failure had occurred. For each release of our application we should only have the tests in the code that cover the behaviour of the application in its current state, not tests that cover the behaviour of features yet to be developed. Integration is often a painful process. If this is true on your project, integrate every time somebody checks in, and do it from the start of the project. If testing is a painful process that occurs just before release, don't do it at the end; do it continually from the beginning of the project. If releasing software is painful, aim to release it every time somebody checks in a change that passes all the automated tests. Reaching a level of acceptance testing that can be considered a benchmark for quality would open the door to such a possibility. Whatever the challenges of implementing continuous delivery, one thing is for sure: we could not do it without automating our acceptance tests, and Behat helps us to do that. In closing, I want to say something about the word acceptance, or acceptable. I used it in my session title, and we encounter it most commonly in the phase of project delivery called user acceptance testing. Maybe when you started out in this business you hoped to do more than acceptable work; maybe you hoped you would be involved in delivering exceptional work. Let me say this: in order to exceed a client's expectations, you must first pass the post of achieving their expectations, and giving a client what they expect is not to be underestimated.
The expected behaviour of a set of features designed to meet the real business needs of the users of an application is the foundation of exceptional work. Thank you. I invite any questions. If you have a question or comment, please come to the mic, and I do really want your comments as well: if there's something you've been itching for me to say, or you've been using Behat in a cool way, then please come up and tell us about it. Do you mind coming up to the mic? Then we can get it on the tape; it's just in the middle of the room.

On the practice of actually sitting down with your client and writing your specification in this format: what are the tools for getting these things written? You can sit down and end up with pages and pages of these documents, so what are you trying to do when you sit down with a client and write up the specification in this BDD language? Because that is part of your mission, isn't it: your spec is your test.

Yes. Well, it's important first of all that whatever we produce is considered to be the definition. What we want to get away from is "this is the best we could do in this meeting" followed by a translation, where the "proper" specification document gets written at a later time, because you'll lose some of the value. People worry that these meetings with the client, to draw out all of the scenarios that might cover the accepted behaviour, would be long and laborious, but we only need to get to the point where, as I said in the presentation, if it works as documented in this document then it's acceptable, and you need to get sign-off on that. We've got the facility of the Gherkin language, so we can use that as a structure, and I believe there are plugins either available or being worked on for various text editors. Does that answer it? Yes, it's just a matter of how easy
it is to sit with your client in a meeting and write these things in a way that is useful, so you don't rewrite them again. It's easier than handling the fallout of not doing it, yes. Thanks.

So Simon said that there is a Jira extension which allows you to put the tests in Jira, and that's true. I know some people who are doing it; I've not implemented it myself. You can put the tests in Jira, enable the Jira extension, and it will use the API to fetch the tests from Jira, and that can be helpful. But I think it's important that they're also in code, so that you can just check out the code and know what tests to run for that code, because you might want to roll back to something like that.

A question as well; I'm kind of new to this, so maybe it's a stupid question. Suppose you have a test and it changes your data in some way, and you have a lot of tests. How do you prevent the change of data in test A having an influence on another test?

Ideally, every scenario should run in isolation, and there are different ways of doing it. With the Drupal API extension, or the Drush driver I would have been using to log into a site and perform a test as an automated user, the test is, rightly, making changes to the database, but cleanup can be done after every test, so that you can target that test and no other test is impacted. After the test you delete the user, if you've created a dummy user to run that test, and delete the content. That would be the desired effect.

An audience member adds: what we did is we created a tag, "destructive test", so anything we classed as destructive wouldn't get run on live. So if we made a deployment that had a test that created new content to see that it worked, we'd run that
on our dev environment, but we never went live with that, and then we had a cleanup routine as well. That's a really useful point; I'm going to repeat it because it wasn't on the mic. He's saying that he was able to identify tests that would be destructive and that you would never want to run on live; some tests you are willing to run on live. Rightly so, you can tag those scenarios, and just as I ran a specific tag earlier, if you use the tilde rather than the ampersand you can say "run everything except". That's what he's been doing, and he's found it useful.

Next question: with behaviour tests you're really testing the user interface, the thing you can actually explain to the client, and then you have unit tests, which are really technical. Sometimes there are integration or acceptance tests that only your developers will understand: you want to test some crazy edge case. How do you integrate those? Do you write "I want to test crazy edge case" and then actually implement that?

There are probably a multitude of answers to that, because it depends on circumstances. It may be that if you cannot describe it in a way that aligns it with some business value, then maybe there's a missed opportunity there, and you could go back to the drawing board a little, because it's well documented that if you tie everything to a user need, it becomes a process of creation and innovation. But people are using Behat in different ways. Behat itself, without the Mink extension, is an acceptance testing framework, but people are using it for unit tests and functional tests, so if it helps, use it. I think the key message here, and what I really like about the guys who have written Behat, is that they're saying the right things.
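The tagging workflow discussed above might look like this in a feature file — the tag name and scenario are illustrative assumptions, with @destructive marking anything that creates or changes data:

```gherkin
@destructive
Scenario: An editor can create a news article
  Given I am logged in as a user with the "editor" role
  When I create a news article titled "Behat smoke test"
  Then I should see "Behat smoke test"
```

You could then run `behat --tags '@destructive'` on a dev environment to exercise only those scenarios, and `behat --tags '~@destructive'` against a live site, the tilde meaning "everything except this tag". Pairing each destructive scenario with an @AfterScenario cleanup hook, deleting the dummy user and content it created, keeps scenarios isolated as described in the previous answer.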
Initially, when Konstantin began writing the Behat tool, he didn't envisage it being used in all the ways it's being used now. The first time I saw him present on it was at Symfony Live London last year, and then I saw him again in Manchester, where he didn't do a demo at all. If I'd known ahead of time that he wasn't going to show us his mad Vim skills, I would have been disappointed, because in London he was doing it in front of your eyes: writing tests, running tests, and not doing any development until there was a test to cover it. But what he talked about was Behat's ability to assist you in delivering quality and to ensure that the agile process is protected, and this is the message we need. Testing is important, but what we've struggled with is selling it to every member of the team. This isn't just about testing; it's about delivering what the client needs. Thank you.