Can you all hear me? Okay, we're good. Great. So first of all, welcome. This is my session, Testing with Monkeys: Using Chaos for Better Code. Thank you all for coming down and for choosing this session. Thanks also to the organizers of DrupalCon Barcelona, it's been fantastic, and to the sponsors for making it all possible. My name is Andrew Holgate. I'm the web technology lead for the World Food Programme, which is a UN organization. I've worked in the UN for almost five years now, and I'm responsible for all of our public-facing Drupal platforms. I'm part of the communications division, so all of the projects we do are public-facing and to do with engaging the public with our organization. I've been using Drupal since about 2008, when I began on a Drupal 5 project in Italy, and I've been working with technology since about 2003. I studied IT and psychology, and I apply the IT in my daily job. So firstly, a bit about where I work, the World Food Programme. People are often interested in knowing what the UN does with Drupal and how we use it. The World Food Programme is the world's largest humanitarian organization fighting hunger worldwide. We primarily work in emergency situations, such as in Syria at the moment with all the refugees. We've actually been there for several years, but it's in the news at the moment because it's now a European problem as well, or a world problem. There was an earthquake in Nepal earlier this year, so we've been very active in Nepal, and also with the Ebola outbreak in West Africa. On average, we deliver food to more than 80 million people per year, providing them with basic food assistance: it could be rice, it could be high-energy protein biscuits. And we do this in 75 countries around the world. We're entirely funded by voluntary contributions.
We don't receive any money from the United Nations. Most of the funding comes from governments, such as the American, UK, Spanish and Italian governments. We also receive money from organizations, private companies and the general public, like you guys. Our headquarters is in Rome, Italy, but we have offices in 80 countries around the world. Interestingly, 88% of the staff of the World Food Programme are not in the headquarters; they're based out in the field, the field being where our beneficiaries require the help, where people require food assistance. So it's very much an operational organization that's out there with the people, helping. So how does the World Food Programme use Drupal? We've been using it since early 2009, when we ported our main communications platform, wfp.org, from a custom CMS to Drupal 6. Altogether we have around 10 Drupal platforms, many in Drupal 6, some in Drupal 7. And at present, we're rebuilding two major platforms in Drupal 8; that work actually began this month. That's wfp.org, which is our main website, and also our intranet, which is called GO, the Global Office. That's a port from a Liferay installation to Drupal 8. So quickly, a few examples. This is the main site, wfp.org: 15 languages, 9 million unique users per year, with huge peaks in traffic during emergencies and when we do big campaigns to ask for funding or to raise awareness. This is being redesigned at the moment in Drupal 8. The Give platform is our online fundraising tool; it's basically one big web form, and it allows the editors to create new forms that can then be used for fundraising as well. It's a joint initiative with Mastercard, who helped us build the API behind it. Lastly there's FreeRice, maybe the most well-known project we've done. It's been around for about six years, I think. It's a multiple-choice quiz game in five languages.
It's been listed by Time magazine as one of the top 50 websites of the year. And that's built on Drupal 6 with a MongoDB backend. But let's talk about monkeys; that's what we're here for, right? So let me give some context to this session and introduce you to the infinite monkey theorem. According to Wikipedia, the infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare. So given random inputs and infinite time, a recognizable text will eventually be produced; because you have infinite time, something like the works of William Shakespeare will eventually be created. In this case, the monkey is a metaphor. It's not a real monkey; it's a producer of randomness, feeding consistent randomness into the system. So with this theorem in mind, I'm going to present three tools today that will show you how you can control chaos and use randomness when you develop to produce higher-quality code and more stable projects. The first tool we're going to start off with is on the front end, the user interface. We'll then work our way down to the second tool, at the logic layer, where the PHP sits, and we'll be looking at mutation testing for PHPUnit tests. And then thirdly, the last layer will be infrastructure, looking at ways we can destroy AWS instances and still keep your application up and running. At the end, we'll have time for questions as well. So let's start at the top of the stack with the user interface and monkey testing with gremlins. When we design a user interface, we usually test what it's designed to do. We test the inputs that we expect to give the interface. We then do some other tests, maybe some fringe cases where the input is actually incorrect.
But you don't actually know how the user will behave when they use your interface: where they're going to click, where they're going to scroll, where they're going to zoom in the case of mobile devices. My normal experience is: you receive a specification to implement an interface, say a form; you implement it; you test for basic functionality, does it work; you hopefully do some unit tests or end-to-end tests to guarantee that it works correctly as you've designed it. You run the tests, they pass, you're done. The interface is complete. But the way you design the form is not often the way the user actually uses it, as is demonstrated here. It's clear what has to happen: she has to hit the first domino, and all the dominoes fall down. This is often how people who design interfaces feel when users start to use them. Then they iterate and iterate until the users finally get how to use the interface. So I'd like to introduce the idea of monkey testing, which, according to Wikipedia, is an automated test that runs with no specific test in mind. For example, a monkey test can enter random strings into text boxes to ensure that they handle all possible user input. It also generates input that maybe you didn't think about but the user could potentially produce, more like the real case with the girl pushing the dominoes over. So let me introduce you to the first tool, which is Gremlins.js. It's a monkey testing library written in JavaScript for browsers and also for Node.js. It's available on GitHub at the link at the bottom. It's by François, the same author as Faker, the great PHP library for generating fake data. So we're in good hands. Gremlins.js can be included directly in the HTML, as I show in this example here, and that's the way I often use it. It can be included as a RequireJS module, and it can be run in your browser as a bookmarklet, so you can actually test it on live sites as well.
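A minimal inclusion looks roughly like this; the script path and exact API details depend on the version of the library you download, so treat this as a sketch rather than definitive syntax:

```html
<!-- Load the library, then unleash a default horde: the gremlin
     species and mogwais attack the current page at random. -->
<script src="gremlins.min.js"></script>
<script>
  gremlins.createHorde().unleash();
</script>
```

The results then show up in the browser console, which is where the mogwais do their reporting.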
Okay, so how does Gremlins.js work? It all starts with a friendly mogwai. In Gremlins.js, the mogwai is the logging tool. This is the guy that lets you know what's happening and how your application is performing; it's a performance analysis tool. It receives the information and lets you know through the browser console. The built-in mogwais are: the fps mogwai, for frames per second, who lets you know if your application is running at a healthy 60 frames per second or if it's down to 10, which is important when you're testing interfaces; the alert mogwai, who can stop any alerts or pop-ups from happening; and then we have the really good mogwai, Gizmo. What Gizmo does is call the whole test off if he sees that you really are destroying the application. Once you've reached a certain threshold and you're destroying it, he draws a line and stops the attacks from occurring anymore. So here's an example of how it looks in your console. Here we see the fps mogwai, the frames per second. It's ticking over at a pretty healthy 60 frames per second, and there aren't any attacks yet, so he's happy. Every half a second, every 500 milliseconds, he logs the current frames per second of your application. He'll report when there are errors, and he'll also report when it drops below 10 frames per second, which is the default threshold. Okay, so we've covered the good guy, the mogwai. How about the bad guys? Now we have the gremlins. The gremlins are there to try and destroy your user interface, and there are five types. There's the clicker gremlin, who clicks on visible parts of your interface. There's the toucher gremlin, who will touch and slide and multi-tap the interface, so more appropriate for touch interfaces like a mobile or tablet. There's the typer gremlin, who randomly types keys on the keyboard at certain places on the interface to see what happens.
There's the form filler gremlin, who will specifically choose pieces of a form, whether it's a text area, a checkbox or a select, and start to type random things in there as well. And then there's the scroller gremlin, who will scroll horizontally and vertically, trying to find a breaking point in your application. So let's do it. Let's release a horde of gremlins on our Give platform. Give is our donation platform, and it's a very simple application: there's no Ajax, it's purely one big HTML form, some JavaScript and some CSS. So let's unleash the horde. Okay, on the left-hand side we have the actual application, our Give platform. On the right-hand side we have the output of the console, where all the reporting is happening. On the left, you'll see the red circles. This is the clicker gremlin. He's going around the interface, clicking randomly, trying to find places which can potentially break the application. At the same time, on the right-hand side in the console, you can see where each click is happening, the x, y coordinates. And every now and again, you can also see the frames per second of the application being reported. It's very fast, so maybe you can't see it, but it's all happening up there. Okay, so in this case, the application didn't break. After 60 seconds, the threshold of this testing suite, nothing had happened. The frame rate stayed above 10 frames per second, and there weren't enough errors for the mogwai to step in and halt the attacks. So the Give platform survived this particular gremlin test, this monkey testing on the front end. Let's try again. Okay, here we have the typer gremlin. You see the orange circles on the left-hand side, and you also see the black letter inside each circle showing exactly what is being typed on that part of the screen.
And then on the right, you have the console again, with the logger showing the frames per second of the application and also the x, y coordinates of each keystroke and what was being typed by the gremlin. Once again, this didn't break the application; it survived. There is actually only one input field on this first page, which is the country selector, so as far as input goes, there weren't many options for what it could potentially break. You can also, by the way, shrink this down to smaller viewports: you can run a test on a mobile-sized application, on a tablet, or on a full-size monitor. It will basically attack whatever it sees in the browser. Let's try again. This is the form filler gremlin, which specifically attacks form elements inside your application. You see a lot of action happening on the right-hand side, because unless you specify where it should attack, it will attack any form element it finds. In this particular case, there are multiple steps to our Give platform, so it's actually attacking step two, step three and step four of the application in the background. A better test here would have been to specify that it should attack only the actual country selector in the middle. Once again, it didn't kill the application. Okay, the toucher gremlin. This is the one that mimics actions on a touch interface. There are several types of touches it performs: a simple tap at an x, y coordinate, a double tap, and also a multi-touch slide. Those are the long lines that you see: it starts at one x, y coordinate, picks another, and slides between the two with a certain radius, kind of like a human finger would. So once again, this didn't kill our application. Last one, the scroller gremlin, and here we got a success. This is the gremlin which slides horizontally and vertically around the viewport, trying to get the frames per second down below 10.
In this case, if you look on the right-hand side in the console, you see some red crosses. It actually succeeded: it brought the frames per second down below 10 by sliding around the interface, by monkey testing, by putting the interface under extreme stress. Let's take a closer look. Here we have the last few lines of the logging by the gremlins and the mogwai. We can see it brought the frames per second down below one, which basically makes the application unusable: it's 0.86 for the second-to-last entry, and 0.89 for the last one. This run had been set up to stop the testing after 100 errors, and in this case there were 100 errors within the space of 60 seconds. You can see why: the frames per second were well below the cutoff of 10. So this is the part of the application we'll then look at to try and improve. We'll ask: why did sliding around this particular viewport cause the application to break? And people really do these things. Maybe they don't do things quite as crazy as a gremlin would, but they do do things with applications that we don't know about and don't think about. So how do we then replicate this? If I ran the test again without any extra configuration, it would perform another random test with the scroller. But you can also seed the randomizer, which means it will perform the same random test again. That way you can test multiple times whether this particular test really does cause your application to break. So you can actually take some of the randomness out of the random testing. And as I mentioned before, there's also an in-browser version: you can bookmark this tool and then run it on live sites as well. So we gave it a go; we tried it on the wfp.org site. Here we are. We started with more the mobile version, very narrow. And you can see this version, the bookmarklet, actually performs all the tests: all five gremlins go and attack the interface.
So we've got the scroller with the clicker, we've got the form filler, et cetera, et cetera. Okay. So we've covered the user interface and monkey testing on it using Gremlins.js. Let's move down the stack a bit, down to the logic layer: to the PHP layer and PHPUnit tests. So who here is familiar with PHPUnit? Most people? Okay, great. Do you use it in Drupal 8 development or in other projects? Great. So you're probably pretty familiar then with code coverage. Before we move on to mutation testing, I'd like to cover this topic of code coverage. According to Wikipedia, it's a measure used to describe the degree to which the source code of a program is tested by a particular test suite. It then goes on to say that a program with high code coverage has been more thoroughly tested and has a lower chance of containing software bugs than a program with low code coverage. Does everyone kind of agree? More or less? Okay. So just to make really clear what this means, this is an example method here called getShapes, and it shows 15 lines of potentially executable code. You see 14 green lines, which are the lines of code that have been executed by unit tests, and one red line, which has not been covered by unit tests. So in total we have 14 green lines out of 15, which means 93.3% code coverage for this particular method. So this is like the Holy Grail, right? Everyone wants to try to hit this mythical 100% code coverage, where every single line of code in your application is covered by unit tests. It's a bit of a mythical thing to reach 100% and then to maintain it. And I think it's a bit of a misguided goal, because the quality of the unit tests is not measured. 100% code coverage does not actually mean the application works 100%, because we've written the tests to test the code that we also wrote. So it's a bit like this.
The alphabet shape game, right? The pass criteria: use all 26 letters of the alphabet and all 26 shapes. So this passes the test; we used all the letters. But that's kind of the way unit tests can be as well, because they're written by developers, and we can make them pass in certain ways. So unit tests can give a false sense of security. Having 100% code coverage does not mean that your application will definitely work; it's all about the quality of the unit tests. Tests don't guarantee correctness; they try to minimize the probability of having bugs. So what are you actually testing? There's a great quote from Benjamin Pollack on Twitter. He said: unit tests are a great way to test if your program can pass unit tests. That is a self-fulfilling test. A bit like this sign: left lane must left lane. It's true, right? It doesn't make any sense, but it's correct. And sometimes writing unit tests feels a bit like this. You have so much control over how you write them that you can almost fudge them, or, at times when there isn't much time in a project, you can rush them and guarantee they pass so that all of your tests pass. So with so much power in the developer's hands, who will test the tests? Now we move on to mutation testing, which is used to evaluate the quality of software tests. Basically, what happens is you take the original source code and mutate it (this is called a mutant), and you normally mutate a single line of code in the whole application. You then run the unit tests against your mutant code, and if a unit test fails, that mutant has been killed. So what does that mean? You've basically injected a regression error into your original code base, so good unit tests should mean that at least one of those tests fails, because you want to pick up on regression errors: common errors that even good developers put into their code base. This is a fault injection technique, and it's been around since the 1970s.
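To make the killed-versus-escaped idea concrete before we look at the tool, here's a tiny hypothetical sketch in JavaScript. This is not Humbug itself, and the function names are made up for illustration:

```javascript
// Original code under test.
function add(a, b) { return a + b; }

// Mutant: one line changed, the binary operator flipped from + to -.
function addMutant(a, b) { return a - b; }

// A weak test: it executes every line of add(), so coverage is 100%,
// but it cannot tell the original from the mutant.
function weakTest(fn) { return fn(0, 0) === 0; }

// A stronger test: it fails against the mutated code.
function strongTest(fn) { return fn(2, 3) === 5; }

console.log(weakTest(add));         // true
console.log(weakTest(addMutant));   // true  -> the mutant escaped
console.log(strongTest(add));       // true
console.log(strongTest(addMutant)); // false -> the mutant was killed
```

The weak test "covers" every line of add(), yet the mutant escapes; coverage alone told us nothing about the quality of the assertion.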
It's picking up more momentum now because there's a bigger focus on test quality, and we also now have the power to run these kinds of tests, because they're quite expensive when it comes to processing power. Okay, so unit tests intend to prevent regressions. You don't want a real regression to pass unnoticed, so you want your unit tests to fail if some logic inside your original code base is changed. Just to summarize before we move on to the tool itself: code coverage is the percentage of source code your unit tests execute, as I showed in the diagram; mutation testing judges how well unit tests perform and where to improve them. So what are these mutants that we inject into the source code? They're usually commonly found developer errors that we all make, even the best developers. Some of them are simple binary arithmetic: we take a plus and mutate that into a subtraction. We take a boolean, like true, and mutate that into false. Or we take a literal number, like zero, and mutate that into one, which is great for arrays, for example. Or we take 'Drupal' and mutate that into 'WordPress'. Okay, so what is this tool? It's called Humbug, Humbug for PHP, and its slogan is that it eats code coverage for breakfast. It's currently an alpha release, and it's been in the works since December 2014, so it's quite a new project. But once again, it's been developed by a very, very good developer: Pádraic Brady, the creator of Mockery, which is for mocking objects in PHP. He also contributes to Zend Framework, so he knows what he's doing. I normally install it using Composer; you have to do a slight workaround to make the current alpha version work. They recommend installing it via a phar, but I find it easiest to use Composer. The next step is to create a humbug.json file, which has to sit in the same directory as your phpunit.xml. Here we set the timeout in seconds.
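Alongside the timeout, a complete humbug.json from that era looked roughly like the following; the exact key names may have changed since this alpha, so check the project's README before copying it:

```json
{
    "timeout": 10,
    "source": {
        "directories": ["src"]
    },
    "logs": {
        "text": "humbuglog.txt",
        "json": "humbuglog.json"
    }
}
```

Those are the three pieces of configuration discussed here: the timeout in seconds, where the original source code lives, and where the results get logged.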
We set the directory where it can find the original source code, and we also set the output of the log file, for all the results of the mutations we've been running. So the first step, before we even start to mutate anything, is to guarantee that all of our unit tests are passing. Here we've taken a small subset of the Drupal core unit tests, 645 unit tests in 25 different files, and run them through PHPUnit. They're all passing, so now we can start to use Humbug. Notice that running all these tests took about two seconds; it's really fast using PHPUnit, as we all know. This is Humbug. It does an initial scan of all your 645 unit tests, which is really fast, taking about 14 seconds. The next step is that it goes through and starts to mutate all of your source code. It looks in the directory we assigned in the humbug.json file, takes the source code and starts to mutate it line by line. It then runs all of the unit tests against the mutated code base. And it's slow, really slow. Performing these mutations took about 10 minutes. So two seconds of PHPUnit tests took 10 minutes of mutation testing on the same files. It's not a really quick solution, but it is good. Here we see the status page with the results of this mutation testing. There were 697 mutations created, and here we have the status. In the top left, we see a red M and a white full stop. The white full stop is good; we want lots of those. It means the mutation was killed: the source code was changed, a regression error, a mutant, was put into the source code, we ran the unit tests, and at least one of the unit tests failed. So that's good. The red M, however, shows the escaped mutants. This is where the source code was mutated, we changed one line, ran the unit tests, and they all passed. So something isn't quite right in the unit tests, perhaps. The blue S is the uncovered code.
These are lines of code which are not covered by unit tests, in which case mutating and testing doesn't actually produce any results. The yellow E is the fatal errors: by making this mutation, the application then had a fatal error. This can be a good thing; it can mean that you need to go and fix something in your source code. It might also mean that Humbug broke; it's in alpha, like I was saying, so sometimes things break. The last one is the timeouts. These are interesting, because this is where the mutation caused the unit test to run for more than 10 seconds, since we configured 10 seconds in the Humbug configuration file. This could be an infinite loop, for example. This is the explanation of what they each are. And then at the bottom of that page we have the mutation score indicator, the MSI, which is the percentage of generated mutations detected: so kills, timeouts and fatal errors. We also have the covered code MSI, the covered code mutation score indicator, which is the same percentage measured over only the code actually covered by tests, ignoring the code that is not tested. This starts to give you an idea of how effective your unit tests are. If your covered code MSI is 100%, then your unit tests guarantee that even when you mutate your code base and add in regression errors, the tests still catch them. Okay, so looking inside the Humbug log file, we see this: a list of all of the escaped mutants and the timeouts, et cetera, et cetera. This is one example. Here we're looking inside Drupal's SafeMarkup class, in Drupal\Component\Utility. The mutation exchanged a true for a false, the simple boolean exchange. That mutant then escaped. This is the original code, to show you how it initially was, with the changed line highlighted. And then we have the culprit: the unit test which tests this particular line of code. So this is where you'd want to start.
You want to start by looking at this particular unit test and understanding why, when a certain boolean was true or false, it passed in both cases. This is where you start to dig deep and understand better how to write, or rewrite, the unit test to guarantee that regression errors in your source code don't go unnoticed. So a couple of caveats, a couple of things to know. It's very time consuming: as I showed before, running 645 unit tests took about two seconds, whereas running the mutation testing took about 10 minutes. Your unit tests must fully pass before you even begin, or else the mutation tests don't make sense. You can have incomplete or skipped tests; that's totally fine. And there are some false positives which do occur with mutation testing as well. Okay, let's move down the stack even further. We've covered the interface with Gremlins.js, and we've moved further down the stack to the logic layer using Humbug. So now we're down to the infrastructure level, and now we're going to use a tool called Chaos Monkey. We're back to the monkeys again. Back in 2010, Netflix was one of the first very large organizations to start to rely solely on Amazon Web Services to run their platform. They had previously been doing things in-house with their own data centers, which is quite expensive, and they were looking at ways of making their system more scalable and increasing the fault tolerance of their infrastructure. They said fault-tolerant architecture is not enough: you have to constantly test your ability to survive failure. They wanted to make sure that when things did go wrong, they were fully prepared, so they wanted to keep on testing their infrastructure. And they said the best way to avoid failure is to fail constantly: not just to test failing, but to fail constantly. So they built a set of tools to help with this highly fault-tolerant system.
They assembled the Simian Army, which is a collection of infrastructure tools that work on Amazon Web Services. They open sourced some of them, so there are actually three which are available through GitHub. You've got the Chaos Monkey, which I'll be talking about today, which basically destroys Amazon instances. You have the Janitor Monkey, who searches for unused resources and cleans them up. And the Conformity Monkey, which finds instances that don't conform to their best practices and kills them. They have lots of infrastructure, huh? And there are also a few which are closed source, but they've discussed how they function. Like the Chaos Gorilla, which is bigger than the monkey: this guy destroys whole regions of their AWS instances. There are some other ones as well, like the Doctor Monkey, the Latency Monkey, who alters the latency between their servers, and also the Security Monkey, who does security inspections. But let's focus on the Chaos Monkey. This is the guy. It's available on GitHub at that URL, and it was the first thing their systems engineers developed. When they decided to move from their internal data centers to Amazon, their lead engineer decided to build this tool that destroys things first. So before standing up too many instances and servers, he first wrote the tool that would guarantee they were constantly being destroyed. What it does is seek out auto-scaling groups in Amazon and terminate instances within the group, and it measures and monitors to make sure that everything else still works. In their first year of using it, when they first migrated to Amazon in 2010, it terminated over 65,000 instances in production and testing environments. So I quite like this flat tire analogy. Imagine you have a flat tire and you need to fix it. A few questions spring to mind. Do you have a spare tire in your car? Is the tire inflated?
Do you have the tools to change the tire? Do you even remember how to change a tire? There's only one way to really test all of that, and that's to get a flat tire. So you could puncture your own tire and then make sure you have all these things in place. You could even do it regularly, once a week. It would cost you some money, but you could test that you can always change a tire. So basically, Netflix puts holes in their own tires almost every single day. This changed the way they organize their applications. They have what they call a Rambo architecture. It basically means that each system has to be able to succeed no matter what, even on its own. So one whole service can go down, but the core service still stays up and functions. For example, if the personalized video pick service is unavailable or goes offline, they'll show an alternative, like the most popular titles, so the user won't even notice. It's helped them find surprises and allowed them to isolate and resolve problems within their systems and applications. They run it in the middle of a business day, but they're very careful about monitoring what's going on whenever they do run it, and they always have expert engineers on hand, so if something really does go wrong beyond what they're trying to test, they can resolve it. More importantly, they resolve it manually the first time, but then they automate that resolution for the next time that particular part of their infrastructure is tested. They're building automatic recovery mechanisms all the time. So, just to summarize what we went through in this session: we covered the front end, using monkey testing with the Gremlins.js library to test your front-end UI. We then looked at the logic layer of your application, with mutation testing for PHP. And we finally looked at infrastructure testing, using the Chaos Monkey and destroying your AWS services.
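By the way, the personalized-picks fallback from that Rambo idea can be sketched in a few lines of JavaScript. The function names here are made-up stand-ins for illustration, not Netflix code:

```javascript
// A safe, static fallback that needs no external service.
function pickMostPopular() {
  return ['Title A', 'Title B', 'Title C'];
}

// Stand-in for a dependency the Chaos Monkey has just taken down.
function pickPersonalized() {
  throw new Error('personalization service unavailable');
}

// The core experience survives: if the non-essential service fails,
// degrade gracefully instead of taking the whole page down.
function getRecommendations() {
  try {
    return pickPersonalized();
  } catch (e) {
    return pickMostPopular();
  }
}

console.log(getRecommendations()); // logs the popular-titles fallback
```

The point is that the failure path is exercised deliberately and often, so the fallback is known to work before a real outage forces the question.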
At the beginning, I talked about the infinite monkey theorem, which kind of inspired the collection of these tools. The infinite monkey theorem, once again, states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text. Well, they actually did this in 2003, at a zoo in Devon in the UK, as a performing arts project. So it wasn't a scientific study; it was a performing arts project. They put a keyboard inside a monkey enclosure with six monkeys for one month. It's not quite the infinite monkey theorem, but they were curious to play with this kind of idea and see what happens. Would they be able to type the complete works of William Shakespeare, or maybe a page of it, in one month? So allow me to present to you the Notes Towards the Complete Works of William Shakespeare. Beautiful, lovely leather-bound cover, gold lettering. The first page acknowledges the authors: Elmo, Gum, Heather, Holly, Mistletoe and Rowan, the six monkeys. So the experiment began. The first thing that happened was that the lead monkey walked over to the computer and smashed it with a rock. The rest of the monkeys then began to use it as a toilet. But then, in the space of one month, they created this. A bunch of Gs and Ss; that's the first page. Then they kind of liked the letter S: the second page is a bunch of Ss and Hs. Are we almost there? Is this almost Shakespeare? A bunch of rubbish; I think they got a bit creative at the end. And then we see the monkeys in action here as well. And that's it. They made five pages of complete gibberish, but they said they learned an awful lot. And the conclusion was that monkeys are actually not great random generators; they're much more complex than that. Thank you. So thank you for attending. Don't forget to vote if you liked it. A quick announcement: Drupal Dev Days next year, in 2016, will be happening in the country where I live, Italy.
So it's in Milan, in June. I don't know if you have been to one of the Drupal Dev Days before, but this year's was in Montpellier; it was talked about this morning before the keynote. It's a great chance to get your hands on code and really learn. The focus is sprinting, but there are also some sessions happening as well. Whether you're a beginner developer or more advanced, it's a great place to come down and spend five, even up to seven, days with people developing code. There are also some stickers here for Drupal Dev Days; you can come up and collect them at the end. So, any questions about monkeys? Or did I answer them all? If not, you can also come and talk to me afterwards and I'll answer your questions. Okay, great. If you want to reach out to me, feel free to email me, hit me up on Twitter, or come past and see me after the session. Thank you.