Hi everyone, my name is Cameron and I'm the Track Chair for coding and development for DrupalCon Prague. It's my pleasure to introduce one of our featured speakers, Sebastian Bergmann, who, as I'm sure you know, is the author of PHPUnit and has over 12 years of experience writing and testing PHP code. So, Sebastian, please.

Thank you, Cameron. I should probably start with a disclaimer: I have almost no knowledge at all about Drupal. I've known that it exists for quite some time, but I'd never actually gotten my hands on it. That's not quite true anymore, because while preparing this session I looked a little bit at Drupal 8, and I'll get into some specifics there in a bit. Presentations at conferences usually start with the presenter introducing themselves, but I wanted to do it with a little twist and start with you. So this is you, the Drupal community: a community of, at least as far as I could research by googling around the interwebs, about 30,000 developers at last count, people with an account who work on the Drupal core, and tens of thousands of add-ons around Drupal. That is probably one of the biggest open source projects out there, at least as far as the number of developers and the number of people using the product are concerned. And as far as I can tell, the external quality of Drupal is really good. It's really easy to get started, really easy to download and install and get a good system running, and with just some configuring and theming and tweaking from there, in most cases you're all set. And if you have to dive deeper and do customizations, that's also possible, of course. Now, the Drupal community is not alone in the world; it's part of a much larger community, the PHP community. I think there's a session at this DrupalCon with the title "From Not Invented Here to Proudly Found Elsewhere".
And this title really captures what is currently going on in the Drupal world, at least as far as I can tell from a real outside perspective. A long time ago I first came across Drupal; back then I was looking at various frameworks and CMS systems that were written in PHP and were open source, because I wanted to look at the code and learn a bit about how PHP is used out there and what kinds of mistakes developers run into. That was my first exposure to Drupal. Then I didn't look at it for quite a while, and a while ago I heard that the Drupal community was looking at Symfony2 to replace some low-level parts of the software with components that were invented somewhere else. That is also something I have seen a lot, in both open source systems and especially in commercial systems, over the last couple of years when it comes to modernizing legacy systems: looking at bits and pieces, at components, at parts of the software, and realizing, okay, this solves a very specific problem, but there's a component out there that almost, or most likely exactly, solves this problem, and it has been developed using state-of-the-art design patterns and best practices, and it's well tested and well documented, and it's open source. So I can just take my code that served its purpose for five years, ten years, whatever, and replace it with something that I don't really have to care about anymore. That is a really good way of modernizing a code base and getting into testing. So with that said, the Drupal world seems to be changing a lot: more and more Symfony influences and other influences from the PHP ecosystem. Things that have happened in the PHP world over the last couple of years are finding their way into the Drupal world.
With the adoption of Symfony components as the foundation of Drupal 8, Drupal now interoperates with at least one major player in the PHP ecosystem, which makes it really interesting for both sides. Drupal developers suddenly have lots of resources available for their custom development on top of Drupal; they can reuse code that was originally written for Symfony, for instance, and of course it also works the other way around. And it's not just the Symfony world out there. There are many other communities, like the Zend Framework community or the TYPO3 community, which has its own framework now called FLOW3, and lots and lots of libraries that are really interesting and solve really hard problems in really good ways, like Doctrine, for instance.

So who am I? Why am I standing here at the podium? Why should you listen to me? Maybe you don't know who I am. Cameron already introduced me a bit, and, well, if you download Drupal 8 from Git and run its test suite, that might be the first time that you read my name. I have no idea why I actually put my name in there back when I wrote the first version of PHPUnit. It just seemed like a good idea at the time, and Erich Gamma and Kent Beck were doing exactly the same thing in the very first version of JUnit. So I thought, what the heck, why shouldn't I do the same for PHP? For a lot of people this might be their first exposure to my name. Every time you run your test suite, you see my name. Sorry about that. Blame me. Lots and lots of people on the interwebs blame me for my tools telling them that their code is broken. By now, there's not much I can do about that. Sure, it would be a one-line fix to remove it, but it's been a tradition for over 12 years now. So, Anthony Ferrara gave a session at DrupalCon Portland, and I already warned him about the next slide.
He had a photo of me in his slides and talked a little bit about me. And that's him; he also talked here yesterday, I think. He said that I have done more for PHP quality than any other single person. I'm not entirely sure about that, but that's me. For over a decade now, I've been working on tools that I hope, and to some extent believe, make the lives of PHP developers easier, less painful, better. I like building tools that make developers' jobs easier. For me, it was a really slippery slope from starting to use PHP to working on PHP and the tools around it. I really liked the community and still like the community. Yes, it's sometimes painful to work on the core; there are lots of discussions that can get nasty and painful and tedious at times on the mailing list. But at the end of the day I've gotten to know a lot of these people personally, and in real life most of them are much nicer than they appear on the mailing list. That's what I do for a living; I founded a consulting company, but I'm not here to do any marketing. Just one sentence, maybe: every day I try to help developers make better use of the PHP platform and make their lives easier, for instance by helping them efficiently integrate PHPUnit into their workflow. So, Anthony also thinks that I am awesome. I don't know about that, but he thinks so because of an idea that came to him during his session at DrupalCon Portland. He tweeted about it, and I picked it up, with a time zone difference of about nine hours, when I woke up in Germany really early in the morning because there was a garbage truck outside my hotel window at five or six am. I saw his tweet, liked the idea, and implemented it before breakfast. That's now a feature in PHPLOC, one of the tools I have implemented, which gives you a really quick look into the structure, the size, and the quality metrics of a PHP-based project.
What Anthony is not saying, however, is that I only get to be awesome, if I am awesome at all, because the PHP community as a whole is awesome. There's been lots of maturing going on, on the personal level, on the level of the PHP core, and in the many, many projects that have been built in PHP and around PHP in the two decades since PHP came out. And yes, most of the tools that deal with quality assurance in the PHP world have been built either by me or by other guys from Germany, and I have no idea why. Anthony's theory was that Germans have a thing for quality, whatever that might be. Be that as it may, I can only speak to my personal reason for starting to work on tools such as PHPUnit, and I guess you can call it pain-driven development. When I started to work with PHP, there were no such tools. At that time I was in a really lucky position, because I was attending university in Bonn and had a professor who was friends with Erich Gamma and Kent Beck, had a really early version of JUnit, and was so excited about it that he immediately taught it to us, showed it to us, and said: here, this is really awesome, we should all be developing like this. He knew that I was doing lots of things with PHP, and he said, well, now you can come over to the Java side, because you don't have these tools and we do, and get serious and learn a real programming language, or whatever. And I said, no. I like this idea, but I want tools like these in PHP, because I love PHP and I love the community around it. Yes, we're missing these tools, but somebody will eventually write them. I waited for a couple of months; nobody was writing them. So I started writing them myself. It was not Subversion back then, by the way: at some point the CVS repository hosted at cvs.php.net was migrated to Subversion, and then, less than a year after that, migrated to Git.
Yes, it took the PHP community a really long time to migrate from CVS to Subversion, only to then realize, hey, Subversion is really painful as well, in some ways even more so. And we were really late in adopting Git, but eventually we did. At least according to version control, I started working on PHPUnit in November 2001. I know that I had been working on it for at least six months before that, but that was the day when I said: okay, this is finally something I feel comfortable sharing with the PHP community at large. And for at least two or three years, nobody except me was interested in PHPUnit and in testing things in the PHP world. The first time that I spoke at a conference about PHPUnit and testing, two people showed up in the room, and one told me afterwards that she was only there because it was the only session in German in that slot. So it turned out there was only one person actually interested.

But what have we actually learned in these almost 12 years since I open-sourced PHPUnit? What can the Drupal community learn from it, now that tests are being added to Drupal and related projects? As far as I was able to research in preparation for this, Drupal 7 introduced SimpleTest into the workflow of testing things. SimpleTest, by the way, was started around the same time, in 2001, as when I started to work on PHPUnit; it just took each of us a couple of years to figure out that somebody else was working on the same problem. A couple of years ago I met Marcus Baker, the guy who started the SimpleTest project, and he told me he had wanted something like JUnit for PHP and didn't find it. Well, actually, he found the same thing that I found when I searched for it: a project named PHPUnit, hosted on SourceForge, that claimed to be a port of JUnit to PHP, but it didn't work. It was written for PHP 3, and I couldn't get it to work with PHP 4.
I even recompiled PHP 3, which I was not using anymore back then, just to try it out, and it didn't even work with PHP 3, or at least I couldn't get it to. So I wrote mine from scratch and also named it PHPUnit, and Marcus said, okay, PHPUnit is already taken, so I have to come up with a different name. And that's why he chose a different name, SimpleTest.

So what have we learned about testing in the PHP world in these last 12 years? Personally, the biggest lesson I learned is that not all problems can be solved with a tool. Most problems that teams or organizations face when they introduce a new concept or a new process, such as unit testing, are cultural problems, personal problems, organizational issues, communication issues, team issues. Somebody needs to be convinced that they need to do something new, something they didn't do before. And doing something new, in addition to what you've been doing all along, is an extra activity, and doing something extra costs extra money. So why would you suddenly want to care about quality? Why would you want to write tests? What is the benefit? It took a couple of years until developers realized that this makes a lot of sense, and by now there are plenty of case studies out there with hard numbers telling you that developers, after their second or third project in which they do unit testing, gain between 15 and 30 percent in productivity: they deliver features faster and deliver more robust features, and so on. It took a while for developers to realize the benefits. And then developers had to convince their management, and management had to convince customers, especially in a company that does not develop software that it uses itself, like an online shop, for instance, that develops its own software and runs its own business on it.
But in an agency that develops projects for clients, you suddenly need to convince people: hey, we are doing this new thing called testing, something you probably expected us to be doing all along, but now we are doing it for real, and we have to charge extra for it because it takes longer to write the software. Lots and lots of interesting discussions there. A really good friend of mine, Judith Andresen, wrote an excellent short ebook that gives developers two things: first, mechanisms for coming up with these numbers to convince management, and, much more important, communication skills for selling management and the customer on why it is a really good idea to focus on testing and build software of high quality. Sure, there are projects where you know that the lifespan of the software is limited. When you know something is only going to be online for two weeks or a month, it doesn't really make sense to build it so that it can be easily changed for changing requirements, or optimized, or tested, or whatever. But software usually lives longer than originally anticipated, so it might make sense to put at least a little bit of testing in there. Tests, and quality in general, are an investment in the future. It pays off in the long run: if you have to maintain a piece of software over a long period of time, you should not slack on quality, and you should not accrue too much technical debt over time.

So what does PHPUnit allow you to do? PHPUnit is basically two things. On the one hand, it's a framework that makes it really easy and convenient to write tests and express what the software is supposed to be doing. On the other hand, it's a tool that runs these tests, gives you statistics about them, and gives you insight into what parts of the code are actually run when the tests are executed, and so on. But there are many different shapes and sizes of tests.
And just because you can express them with PHPUnit and run them with PHPUnit does not necessarily mean that you should. PHPUnit is best suited for what the name suggests: so-called unit tests. A unit test tests a unit of code in isolation from all of its collaborators. This could be a function or a procedure, for instance, or a method of a class. You invoke that function or method with a specific input, you know what the expected output should be, you write that into the test, and PHPUnit can tell you: yes, when I run this function or method with these arguments, I get this result back. That lets you verify that this one unit of code works as you expect for a specific input. PHPUnit can also be used, and works really well, for so-called integration tests, where you have two units of code that you test in collaboration. Just because you have two units of code and have tested them with real unit tests, isolated from each other, does not necessarily mean that they work correctly when you don't isolate them, when they are in production and actually have to interact, because that's what they're supposed to do. There's a famous example that I heard about in a lecture on software quality back at university. There was a probe that was sent into space and was supposed to land on a planet somewhere, and the development of the navigation system of that probe was distributed between two teams, one in the US and one in Europe: one at NASA, one at ESA, the European Space Agency. Both teams really rigorously tested their pieces of the code, but they forgot to write a single test that tested the two components together. Well, at some point they got that integration test: when the probe tried to land where it was supposed to land. The surface was there sooner than the navigation system expected, and it crashed. Like, a really hard crash.
Hardware loss and complete mission failure. What happened? The NASA guys had used the imperial system and the European Space Agency developers had used the metric system. Tested separately, everything was fine, but of course the numbers were wrong when you brought the two components together. I never could really find out whether that was just an urban legend or whether it actually happened, but it sounds so bad that it has to be true, because nobody could just make up a story like that.

But of course you also want to be able to test a unit of code that has collaborators without those collaborators, and that's where stubbing and mocking come into play. PHPUnit, and other libraries in the PHP world such as Phake or Mockery, make it really easy to say programmatically: for this test, I want to test component A. In real life, in production, component A collaborates with component B, but for this test I just want something that looks like the real B, so that when I invoke this method, for instance, I get a hard-coded value back instead of executing the actual code of component B. Why do we want to test a unit of code in isolation from its collaborators? Well, when the test fails, it pays us back with much more useful information, because we can pinpoint the root cause of the failure to the unit of code that we're currently testing. It tells us exactly: hey, I'm only running these 10 or 15 lines of code in the method that we're invoking in our test, and not the component that would be invoked in production from those 10 or 15 lines. That makes the test much more valuable. Of course, if we want to be able to do this real unit testing with stubbing and mocking, the software that we're testing has to be testable. And that's something that projects like Symfony found out when they started to test their code. This is a visualization of the dependencies between the core classes of Symfony 1. I think this is Symfony 1.0 or 1.1.
If Fabien were in the room, he could probably tell us whether this is 1.0 or 1.1. It was a long time ago, and things back then were really bad: lots and lots of circular dependencies between components. So it was really, really hard to test just one component, one class, in isolation from all of its collaborators, because there were just so many collaborators, and the dependencies were hard-coded. There was no dependency injection, so you could not simply replace component B with something that looked like component B while you were testing component A, which makes use of component B. So the Symfony project did what all software projects do when they go through this process of adding testing: they cleaned up their dependencies and made them really simple. I think this is from Symfony 1.2, and they came up with an even better solution, real dependency injection and their dependency injection container, in Symfony2. Just by looking at this, it looks much cleaner, because there are no dependencies between the classes anymore, except that all of the core classes depend on the event dispatcher and use it to communicate with each other. So we have loose coupling, and if we want to test the logger, for instance, we just need something that looks like the event dispatcher, configured in such a way that it provides the behavior we need for the test that we want to run on the logger class. By the way, if you have questions, feel free to interrupt anytime. You don't have to wait until the end.

So, as I mentioned, tests come in different shapes and sizes. And one really big topic within the testing community at large, across all the different programming languages and stacks and platforms, over the last couple of years, was finding a good categorization of these different kinds of tests.
And there is a trend of using pyramids to visualize these different types and shapes and sizes of tests. Here I have my three favorite pyramids; there are many, many more out there, just google for "test pyramid" and you'll see many different ones. The one on the left, from Alyssa Scott, just divides tests into two categories: business-facing tests and technology-facing tests. A technology-facing test would be something like the unit tests I mentioned earlier, which verify that the code you currently have works correctly with regard to how the developer who wrote it understood it was supposed to work. That does not necessarily mean that the code does what the customer wants, or what management wants, or what the user of the software wants. It just means the developer understood how a certain aspect of the software is supposed to work, wrote the code, and then wrote a test verifying that the code works the way he understood it should. In addition to that, we have the business-facing tests, which test that the software does what the user or the customer expects. Of course, such a test has a larger scope: we're not really testing on the unit level anymore; we need to test at least multiple units, if not the whole system. Those tests are really useful, but when such a test fails, it can only tell us that this feature, for instance, does not currently work. It doesn't really tell us where the bug may be hiding in the code; that's what the technology-facing tests are for. The pyramid in the middle is something that Martin Fowler came up with, which is basically just a variation of the one on the left.
He just uses more technical terms, with a service layer in between: unit tests at the foundation of the pyramid test the technology, the units in isolation from each other; service-layer tests cover multiple units that together provide some sort of service; and the front end, for instance of a web application, can be tested using user interface tests, ideally without running any back-end code. The one on the right, with the missing tip, is from Google. Google internally just categorizes tests into small, medium, and large. The reason the tip is missing is that there are very, very few tests categorized as humongous, and they are not the norm, so they didn't put them into the pyramid. Also, the word "humongous" is probably too large to fit into the tip and still be readable, so that might be another reason. But all of these pyramids share the same philosophy: as you go down in the pyramid, the tests become more isolated, smaller in scope, and faster to execute, and when a test fails, it points directly at where you should start debugging and fixing the problem. As you go up in the pyramid, you see tests that improve your confidence in the whole system: does the software do what it is supposed to be doing, not just does the code that is currently there work correctly? You need both kinds of tests. You need to verify that the code that is currently there works correctly, and you need tests that make sure that the software as a whole does what it is supposed to be doing. Now, for the latter, there are different tools that may be better suited than PHPUnit to implement these kinds of tests. It basically comes down to a matter of taste. I've worked with teams that prefer doing all of these tests in PHPUnit, and I've worked with teams that use PHPUnit for the unit tests and the integration tests.
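To make the bottom of the pyramid concrete, here is a minimal sketch of a PHPUnit-style unit test. The BankAccount class and its API are invented for illustration, and a tiny stand-in for PHPUnit's base class is included only so the sketch runs standalone; in a real project you would install PHPUnit and extend its actual test case class.

```php
<?php
// Minimal stand-in so this sketch runs without PHPUnit installed.
// In a real project, PHPUnit provides this base class (and many more
// assertion methods than the one shown here).
if (!class_exists('PHPUnit_Framework_TestCase')) {
    class PHPUnit_Framework_TestCase
    {
        public function assertEquals($expected, $actual)
        {
            assert($expected == $actual);
        }
    }
}

// The hypothetical unit under test: a trivial bank account.
class BankAccount
{
    private $balance = 0;

    public function getBalance()
    {
        return $this->balance;
    }

    public function deposit($amount)
    {
        if ($amount <= 0) {
            throw new InvalidArgumentException('amount must be positive');
        }
        $this->balance += $amount;
    }
}

// A unit test: invoke a method with known input, assert the expected output.
class BankAccountTest extends PHPUnit_Framework_TestCase
{
    public function testBalanceIsInitiallyZero()
    {
        $account = new BankAccount;
        $this->assertEquals(0, $account->getBalance());
    }

    public function testBalanceIncreasesAfterDeposit()
    {
        $account = new BankAccount;
        $account->deposit(100);
        $this->assertEquals(100, $account->getBalance());
    }
}
```

With PHPUnit installed, pointing the phpunit command-line tool at this test class would run both tests and report the results.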
But they use something like Behat, which is what I'm showing here, for end-to-end tests: testing features using the whole system, not units of code in isolation. The idea behind tools such as Behat is that you write down the test not in code but in natural language, or as close to natural language as you can get. You need to strike a balance between having something that a non-developer can read and understand, and ideally write down as a specification, as a requirement for a certain feature, and the testing tool being able to automatically figure out what you think the system should do from what you've written down in natural language, and then actually do that, map it to code. So we write this down like: given I have a fresh bank account, whatever that may be, then the balance of that bank account should be zero. And just by having that in a text file whose name ends in .feature, running Behat will run that test automatically for you. Unfortunately not. There's no artificial intelligence built in there, no crystal ball that just figures out what you mean. When you say "given I have a fresh bank account" and "the balance should be zero", you have to tell the testing tool what you mean by these sentences. You have to give these sentences meaning, and you do that by implementing the so-called Behat context. In this example, we have just two methods. The first, freshBankAccount(), creates a new object of the bank account class and puts it into an attribute, and it uses an annotation, which in turn uses a regular expression, to map the method to the natural language. From now on, whenever Behat reads "given I have a fresh bank account", it will invoke this freshBankAccount() method, which in turn will create an object of the bank account class and store it somewhere for later stages of the test to use. And the same thing with "balance should be".
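Written out, the feature file and the context described above might look roughly like this. The BankAccount class and method names are invented for illustration, and the context is shown as a plain class with a simple assert() so the sketch stays self-contained; in a real Behat setup the context class would hook into Behat and could, for instance, reuse assertEquals() from PHPUnit.

```gherkin
# features/bank_account.feature
Feature: Bank account balance
  Scenario: A fresh account is empty
    Given I have a fresh bank account
    Then the balance should be 0
```

```php
<?php
// The annotations map Behat's natural-language steps to these methods
// via regular expressions.
class FeatureContext
{
    private $account;

    /**
     * @Given /^I have a fresh bank account$/
     */
    public function freshBankAccount()
    {
        $this->account = new BankAccount;
    }

    /**
     * @Then /^the balance should be (\d+)$/
     */
    public function balanceShouldBe($balance)
    {
        // In a real context this would be assertEquals(), reused from PHPUnit.
        assert((int) $balance === $this->account->getBalance());
    }
}

class BankAccount
{
    private $balance = 0;

    public function getBalance()
    {
        return $this->balance;
    }
}
```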
The interesting thing here is that there's an argument, the balance, and you just need to provide a regular expression that automatically puts whatever comes after "should be" into that argument for you. And then you use assertEquals(), which, as it so happens, is code reused from PHPUnit. Once you have written this feature context, you can write these sentences and have code that automatically verifies them for you. Now, as I said earlier, I've worked with teams that really like this approach. But in the long run it tends to be a lot of work to maintain these contexts and add new language to them as you add new features. I've never actually seen it work that non-technical people correctly write down a feature specification so that it works directly; it usually requires additional tweaking by the technical people. What I have seen, however, is that non-technical people are really able to understand the output, look at it, and see: okay, this is my test, yep, given, okay, then, okay, I get that, and this is exactly what I want. And if your tool tells me that the software currently behaves in the way that is written there in natural language, then that's cool. But I've also met many teams that said: well, we tried that, it didn't work for us, it was too much maintenance, and our non-technical people didn't really connect with the idea, so we just used PHPUnit, or PHPUnit in combination with other tools, and I'll get to that in a bit, instead of using Behat. At the end of the day, it's a matter of taste. You need to have a look at these different tools and find what feels right for you and what works well for you.

That was a question. I'll try to repeat what I heard; please correct me if I'm wrong. There's some noise coming from the AC, I think, that makes it really hard.
So your question was: do I think this is because the technical people don't write, or the non-technical people don't write, or that the communication doesn't work well between them? Or was the question: do we need more technical non-technical people? That is an interesting question. Okay, now I've got the question, and I'm trying to come up with an answer. I'm not sure it's going to help, but let me tell you a story. There's this really popular testing framework and tool called Selenium, which started a long time ago. Selenium comes, for instance, as a plug-in for Firefox that allows you to record every action that you perform with the real keyboard and the real mouse in the browser, and then save that. You can play such a recorded test back with Selenium Remote Control, for instance, which can automate everything from a single browser on one operating system to many browsers in different versions on different operating systems, so you can perform browser compatibility testing, for example. Now, the default format for saving these recorded tests is HTML: you get an HTML table. The original idea behind that was: hey, it's HTML, it's something that I can open in the browser and hit the print button, so I can staple it, put it in a filing cabinet somewhere, and have paper documentation of what my test does. And I can potentially give that to someone as instructions for what they should perform when they do testing. I've actually seen that at one company. They had a pile of paper this high, and every time they made a release, they started with the first page and went through to page 450 or whatever, trying to replay it in the browser. They didn't know that they could just load it into Selenium IDE, or run a script, and automatically play the tests back. So that's part one of the reason why they chose HTML.
The other reason was the table format itself. In that table, every row is one command that was recorded in the browser, or that you want to send to the browser. For instance: select this form field and put in this data. The next action would be: click the submit button. Next action: wait until the page has loaded. Next action: verify that the following string is somewhere in the HTML, or that an XPath expression matches, or that a CSS selector matches, whatever you want to use to verify that the thing you wanted to happen actually happened. The Selenium table has three columns: the first one is the name of the command, and columns two and three are the arguments for that command, like the target and the value, for instance. So the other part of the idea for using HTML was: we can teach this vocabulary of about 200 commands and their arguments, what you're allowed to use in the first column of each row of the table, to a non-technical person, and they can use Excel or another spreadsheet program, write the test down as a spreadsheet, and save it as HTML. It didn't work. The next step after that was the Gherkin-based approach, what Cucumber uses, what Behat uses: come up with natural language and make additional work for the developers, who implement a mapping between the natural language and the test framework. And that seems to work for at least half the teams I've worked with. But whether or not we need more technical non-technical people, I don't know. I hope that gave you at least a little bit of insight into what's going on with these sorts of tools. Any other questions? Yes, especially if you're sitting right next to the microphone.

Hello. Yes, so it's not really a question, actually; I just want to share some of our experience. It's not so much about the editing; we are happy to edit, because if you look at the customer side, these things are very useful.
Not so much as a developer thing; it's very useful for agreeing with your customer on what they want. Because that's one of the worst things: you write some use cases, you write something down, and then when you deliver the stuff it might be very different. But with this, you can run it and show them: this is what you asked for. So maybe it's the developers' work afterwards; after they do something, you need to make some changes and fix things. But it's very helpful at the end of the sprint: you can show the stuff, run these test cases, and show them all green. Yep, I totally agree. And I agree that one of the most important tasks in software development is that the developers and whoever decides what is supposed to be developed, be it an internal product team or the customer or whoever wants to use the software, sit down and write down, in simple language, in a form like this, what each feature is about, what the software should be doing. Now whether you immediately feed that into a tool and run it, or whether you manually transfer it into a test that can be automatically executed, that is something the developers have to deal with, and where the developers have to figure out what works best for them. But it is essential for good software development to have this. And at the end of the day, that's one of those problems that cannot be solved with a tool. Tools can help, but you need to come up with it yourselves, and that's a communication problem: developers and non-developers need to talk. And unfortunately, I've been to plenty of companies where this talking between technical and non-technical people does not really work. In really extreme cases, different teams of technical people don't talk to each other. The most extreme case that I've seen was one where the software developers were not allowed to talk to the operations people.
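Such a written-down feature, in the Gherkin language that Cucumber and Behat use, could look something like this (a minimal sketch; the feature and the amounts are invented for illustration):

```gherkin
Feature: Bank account withdrawal
  In order to access my money
  As an account holder
  I want to withdraw cash from my account

  Scenario: Withdrawing less than the balance
    Given my account balance is 100 Euro
    When I withdraw 20 Euro
    Then my account balance should be 80 Euro
```

A customer can read and agree to this directly, and the developers can either map each sentence to test code or use it as the specification for a manually written automated test.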
When they wanted to make a deployment, they wrote the software onto a CD, put that CD in an envelope, and used the house mail to send it to Ops. And at some random point between now and never, Ops would deploy it, either unchanged or with random changes that they did not communicate back. And they basically wanted to know which tool to introduce into their process to fix the problem that they were only able to deploy once or twice a year. That's not something that you can solve with a tool. That's a people problem, or a process problem, or whatever you want to call it. If the communication doesn't work, then the software project is doomed. So, we've looked at unit tests and integration tests. And to be fair, Behat is usually not used for the kinds of tests that I've just shown. That was basically a unit test written down in Behat syntax, just to make a point about how that would work, because in that case the feature context actually fits on one screen, and in a font size that allows you to read it. The next larger tests are so-called edge-to-edge tests, which is a fairly new term for a form of testing that has been done for a very long time. It's one of those things where it took a really long time to come up with a name that everyone agrees on. So edge-to-edge tests are tests that are as end-to-end as possible. In a web application, end-to-end would mean using a real browser, sending a real request to a real server, having the whole application generate a response, sending that response back via HTTP to the real browser, and then looking inside the browser to figure out what is going on. That would be an end-to-end test. An edge-to-edge test is as end-to-end as possible without using a real server and a real browser. And the benefit of that is that it's much quicker to execute, and you don't need a really large test environment that involves a real server and a real browser. And, yes, that was the question.
I was going to ask about the term acceptance test, because that's the one that I usually use, and how you thought that might not be well suited. But the difference between end-to-end and edge-to-edge there sounds like the important distinction that's maybe missing with acceptance tests. Okay, so there are different ways of categorizing tests, and acceptance testing does not necessarily say something about the scope in which the code under test actually runs. You can write acceptance tests on the edge-to-edge level, and you can write them on the end-to-end level. I prefer to write them on the edge-to-edge level, because then they are just as expressive and valuable, but really quick to execute, because I don't need my full stack. If my architecture allows me to instantiate the application, for instance, fake a request, get a response back, and introspect the response inside the test, then there is no difference with regard to the power of this test compared to an acceptance test where I would use Selenium, for instance, to fire up a browser instance, make the request to the real server, look at the response as it comes back, and do something with it there. That does not mean that end-to-end tests, as implemented using Selenium, for instance, are not useful. You just should make sure that you only have a few of those, for the things that provide benefit over edge-to-edge tests. For instance, with Selenium you can run JavaScript, which is something that you cannot do with PHP. Which is wrong, actually, because there is somebody who was crazy enough to wrap SpiderMonkey as a PHP extension. Yes, it works. No, that does not mean that you should be using it. Crazy things crazy people do when they have too much free time; anyway, that's probably the reason why somebody did it. Selenium can also be really helpful if you're doing browser compatibility testing.
Just because it works fine in Firefox on Linux does not necessarily mean that it works fine in Opera on Windows, or in Safari on Windows if you're really into pain, or in Internet Explorer, or whatever. That's what I usually use Selenium for, to make sure that that works, but then I really try to do as much edge-to-edge testing for acceptance tests as possible. So this is what such a test could look like. This is an example from Symfony 2: a pretty straightforward implementation of a controller in Symfony 2. It does something with a bank account class, renders a template using Twig, and so on. And Symfony has a built-in extension to PHPUnit that makes it really easy to write edge-to-edge tests for a Symfony application: the WebTestCase. You can just say, okay, create a client, and the client is a fake HTTP client that sends a fake HTTP request against your Symfony application in the same PHP process, on the command line, for instance, if you're using PHPUnit. No web server necessary, no web browser necessary. You can just say, hey, fake a GET request to this URL and give me the response. And then you can look at the response and figure out whether or not the application did what it was supposed to be doing. Of course, this can only tell you: yes, this feature works, or no, this feature doesn't work; you have to do the debugging from there. But this test serves the purpose of making sure that the feature is there, that the feature works. And when it doesn't work, you hopefully have unit tests as well that help you pinpoint where something is going wrong. The next thing that you can do is end-to-end testing, as I described earlier, using Selenium, for instance. You can also use Behat for that. Behat comes, out of the box or with a companion project called Mink, with a feature context for testing web applications and for remotely controlling browsers.
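A minimal sketch of such an edge-to-edge test, assuming a Symfony 2 application with a hypothetical `/bank_account/1` route (the class name, route, and asserted text are invented for illustration; `createClient()`, `request()`, and the crawler API are the standard Symfony 2 WebTestCase facilities):

```php
<?php
// Hypothetical edge-to-edge test using Symfony 2's WebTestCase.
// No web server or browser is involved: the client fakes an HTTP
// request against the application in the same PHP process.
use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

class BankAccountControllerTest extends WebTestCase
{
    public function testBalanceIsShown()
    {
        // Boots the kernel and creates the fake HTTP client
        $client = static::createClient();

        // Fake a GET request and get a crawler over the response body
        $crawler = $client->request('GET', '/bank_account/1');

        // The test can only say: the feature works, or it doesn't
        $this->assertTrue($client->getResponse()->isSuccessful());
        $this->assertGreaterThan(
            0,
            $crawler->filter('html:contains("balance")')->count()
        );
    }
}
```

Run with PHPUnit on the command line like any other test, this exercises routing, the controller, and the Twig template without a server or browser in the loop.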
And with that, you can just say: okay, given I am on /bank_account/1 of my web application, then I should see "the balance of bank account 0 is 1 Euro". And Mink takes care of understanding these sentences: it creates the fake client, goes to that URL, fetches the response, and asserts that the response contains the string that you expect. And you can do this inside the same process, with some modifications, or use Mink to talk to Selenium and remotely control a real browser, if you want to use it with Selenium RC, for instance. One more thing on this: I had the idea a while ago that this could be an interesting thing, but never had the time to experiment with it. And then last week I saw a presentation and a blog post by Benjamin Eberlei, one of the contributors to Doctrine and Symfony, where he actually went ahead and implemented it, because he worked with teams that wanted to use Mink but didn't really like the Behat approach of writing these feature contexts and this natural-language stuff. So he wrote a really small bridge between PHPUnit and Mink that allows you to talk to the backends supported by Mink from within a PHPUnit test. That could also be a really interesting thing. And of course, back in 2006, while I was living in Norway, I got really bored and came across Selenium, which was really new at the time, and thought that it would be a brilliant idea to talk to Selenium from within PHPUnit. So I implemented PHPUnit_Selenium, which is no longer developed by me. It's still an official PHPUnit project, but I'm not contributing to it anymore; Giorgio Sironi is doing a really great job of keeping it up to date with recent versions of PHPUnit. Sorry, recent versions of Selenium. I just don't think that it's a good idea right now to use PHPUnit to drive Selenium tests.
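The Mink sentences just described would live in a feature file along these lines (a sketch; the URL and the expected text follow the example from the talk, and the "I am on" / "I should see" steps are the ones Mink's standard web context provides):

```gherkin
Feature: Viewing a bank account

  Scenario: The balance is displayed
    Given I am on "/bank_account/1"
    Then I should see "the balance of bank account 0 is 1 Euro"
```

Depending on how Mink is configured, the same scenario can run in-process against the application or drive a real browser through Selenium.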
Selenium tests are inherently slow to execute, because for each test you need to fire up a fresh Firefox instance and prepare it so that it can be remotely controlled. And at least on my machines, that always took at least one to two seconds per test. If you have lots of these tests, then it takes a really long time, and you have latency involved, and a real stack, and everything that makes these tests really expensive. Other test runners for Selenium exist, and those test runners are able to run these tests in parallel. This is something that PHPUnit, in its current versions, cannot do. When I started working on PHPUnit 12 years ago, I didn't think about one day having a test suite with 100,000 tests, which is about the biggest single test suite that I've seen so far. Even if all the tests are quick to execute, you don't really want to run them in a single process anymore. You want at least to utilize as many cores as possible on a single machine, and ideally distribute the execution among many machines, if you have more than one machine available. But I didn't think about that 12 years ago, so the current architecture of PHPUnit does not allow me to implement that. At some point, I'm going to completely rewrite PHPUnit. At least, that's what I've been telling myself and the users of PHPUnit from year to year; I just never have the time to actually do it. But if I were to write PHPUnit again, I would build the test runner in such a way that it's really easy to implement parallel and distributed test execution. Until that's there, I don't think it's a good idea, at least if you have many of these Selenium-based tests, to run Selenium tests through PHPUnit.
Talking about the costs and the value of a test: if you look at the scope of what you can test, and at the business value of the part of the application or the part of the process that you're testing, you definitely don't want to have too many tests that have high complexity, that are really expensive to execute and to maintain, and that yield low business value. If you find that you're writing tests for functionality with low business value, or for infrastructure code, and it's really hard to write these tests, then you should rethink your architecture or your design. You really want to have the most tests in the lower right-hand side: real unit tests that test one unit of code in isolation from all of its collaborators, for code that has high business value. Of course, that's not always possible. You need to have acceptance tests, or tests that are even larger than acceptance tests, and those go into the category in the upper right-hand side, but you only want to have as many of those as you need to make sure that the application does what it is supposed to be doing. I've worked, for instance, with one team that develops software that is used for large e-commerce sites, and they are so paranoid that they actually test the logistics. They have tests that cover everything from a simulated user going to the site, putting an item into their shopping cart, checking out, performing the payment, making sure that they get their money, and, in the end, making sure that in the logistics center the right item is taken off the shelf, put into a box, and sent to the right address. They don't do that every day, and they don't do that in continuous integration, but they do it on a regular basis to get insight into their logistics chain and to find bottlenecks, for instance, or problems with the logistics company, or whatever.
But that's something that's not completely automated anymore, of course; it's something that's larger than even their own software, because it crosses boundaries not only with other software systems but with real life. And it's completely asynchronous: you have to wait a couple of days until a package arrives, or doesn't arrive. But if your architecture allows you to test almost everything in your code using real unit tests, testing each unit of code in isolation from all of its collaborators, and makes it really easy to write edge-to-edge tests so that acceptance testing becomes really easy, then this is usually a really good indicator that your architecture is good. And that also enables you to do a completely different kind of testing. I've already run two minutes over, so I don't have time to really get into experiment-driven development: testing in production, cutting down the time from somebody having a brilliant, or maybe not so brilliant, idea on how to improve the business, to trying that idea out in production by rolling it out to a small percentage of your user base and then using statistics, A/B testing, to find out whether or not it makes sense. At the end of the day, it comes down to architecture. If the architecture of the application does not make it easy to write real unit tests, chances are that you're going to have a hard time maintaining it and introducing modern, agile practices such as experiment-driven development. These slides will be made available at talks.thephp.cc; I'm also going to tweet a link to them as soon as the network lets me. The network at this conference is actually quite good, as far as I've seen so far, so let's hope it stays that way. And just last week, we announced the PHP curriculum in its international edition.
Starting early next year, we are offering a formalized education for PHP developers that leads to the certified PHP craftsman, so that might be something you're interested in if you want to get into PHP development. Thank you. So, I really hope that this was what you expected from this presentation, because this presentation was really odd for me: I didn't write the abstract, and I didn't come up with the title. When the DrupalCon organizers asked me if I would like to speak here, I said yes, and almost the next communication that I got after that was: here's the title, and here's the abstract. And then I tried to figure out what I was supposed to be doing. So I hope this met your expectations, or at least was in some way useful to you. Thank you.