So, good morning everyone. I'm going to start the presentation because we are just on time. Welcome to Automated Testing 101. I am Ezequiel Segui-Vazquez, and I'm going to present this session. First of all, let me remind you that on Friday we're going to have three different sprints, so take a look at the website if you want more information about them; you have the locations and even the hashtag.

So, this is Automated Testing 101: an introductory session on testing for projects, and specifically on automated testing tools and how you can apply them to your project to improve your software quality. This is part of the DevOps track, and this is the URL in case you want to download the slides later.

First things first, this is me, Ezequiel Vazquez. I am a backend developer. I have been working with Drupal for six or seven years, I work at Lullabot, and I have some background in system administration and DevOps as well. I am very interested in hacking and security, specifically web security; I have done some audits and I really enjoy that kind of work. More recently I discovered testing techniques and, again, I am really enjoying them. We have been applying the techniques I am going to talk about on a project we have been working on at Lullabot for the last year, and basically I want to share everything I have discovered and learned so you can put it into practice too.

In summary, we are going to start with some basic concepts about software quality: what software quality is, and what measurement techniques we can apply to determine the level of quality our software has. Then I am going to go deeper into the different testing techniques we can use to cover our application. For the third point, I am going to live-demo three different tools; I have videos too, so in case everything goes wrong, we're covered. And then I am going to talk a bit about how we implemented these techniques on our last project.

So, let's start with the first point: software quality. The first question is, what is software quality? Quality is the ability of a product to address a problem or a need. A higher-quality product means a better solution: the solution we propose is the best fit for the problem. Software is just a product; we are working on a product and we want it to have the highest level of quality possible, which means having the best possible solution for the problem the client brought to us.

Just a quick note here: QA means quality assurance, and it is basically about being confident about the level of quality we have in our product. QA also refers to the people on our team who watch over quality: they execute the tests and they tell us as developers, "okay, this is not working properly." Sometimes we may feel like they are, in quotes, the enemy, because they tend to reject anything that is not 100% working, but we have to remember that they are in fact our allies: it's preferable that they discover a bug so we can fix it, instead of the client or even the final user discovering it. So keep that in mind, please.
Regarding how quality can be measured — how can we determine the level of quality of a piece of software? Well, we need metrics. A metric is nothing more than something that can be measured, and depending on the type of application we are working on, we can use different metrics. We have to be smart about the metrics we select, and once we have made that selection, we can consider it our testing framework.

Basically, we can divide the measurable aspects of software into different groups. The one we usually care about most is functional testing: we want the software to work exactly as it was designed. If the client asks us to implement some software to solve a specific problem, we have to make sure the software actually solves that problem. But there are also non-functional aspects that sometimes get forgotten, like performance, security, usability, accessibility, et cetera. I just want to mention that we need to take care of these non-functional aspects too, and we can test them to increase the level of quality of our software.

One more quick note before we get to the testing techniques: how can we integrate QA into a project? Well, it is the responsibility of all the actors on the project — not only the developers, not only the QA team, but all the participants must take care of quality. It's really important to involve the client, because if we as a team are working in one direction, the one we consider appropriate for the project, but the client's expectations point in a different direction, we're going to have a problem. So we have to involve the client when we are talking about quality, because we want to meet the client's expectations.

I use this image because I want to mention something: we should not leave testing until the end. We should be testing in all the different phases of the project, because every change we implement is going to have an impact, and we want to reduce that impact. We also want to avoid the blame game. We don't want to fight with other people on the team just to say, "no, it's your fault that this is failing." Let's apply a proper QA process so we don't have to blame anyone; we just have to release a good project. And hopefully we can prevent situations like this one: we don't want developers handing a poisoned gift to the sysadmins so the problem only shows up on the live servers — "it works on my machine, so it's not my fault." No, that's not right. We have to work together as a team to prevent this. We don't want horror stories if we can avoid them.

Okay, that's the brief introduction. Now I'm going to start talking about the actual testing techniques that we can use, and that we have been using on this project at Lullabot this year, to improve the quality of the software.

The first technique — and this is not something that can be automated or done by a machine — is peer reviews. By peer review we mean: I am a developer, I am part of the team, and I want my teammates to take a look at my solutions, at my code, so they can confirm that the solution fits the problem. This basically means we are sharing knowledge.
We have different people on the team, so if I arrive at a solution for a specific ticket, I might have had a bad day, or maybe I just forgot to take something into account. It's good that someone else from the team takes a look and tells me, "okay, you forgot to cover this acceptance criterion," or "instead of this big function you can use this couple of lines." We can improve quality simply because four eyes are better than two.

This is basically a manual review. What we did is, every time we created a pull request against the main branch, we would include testing instructions in the ticket description — step one, do this; step two, do that — so anyone could check each other's work. Another positive point here is that we are making the whole team responsible for all the code, so it's a great tool for collaborating more openly, and we want that so everyone can enjoy the project. This is not automated, but it is a great tool for improving quality.

Code linting is another small tool. We want to avoid things like this image — I don't know who did that; I mean, why put the red square next to the black one? Why? I am not okay with this. Now imagine that in your code. I bet most of us in this room have been on a team, on a project, where everyone just coded their own way: this person using tabs, that person using spaces — four spaces, two spaces — curly braces on the same line, curly braces on the next line. We want to avoid that. We want the code to feel consistent: if I open a code file I haven't worked on yet, I want to know what I'm going to find; I want my expectations about the format to hold.

Basically, a code linter is a tool that reviews all the code files and tells us, "this does not follow the standard: on this line you have to change this, and on that line you have to change that." With this tool you get cleaner code. For example, on this project we have been using Node.js and React, and every time a developer created a commit we had the linter executed automatically, just to confirm that all the files in the project respected the ES6 standards. So at least we can say that our code is clean, and that's really nice. That's our first automated testing tool.

Next I want to talk about unit testing. I guess we all know about unit testing — please raise your hands. Right, probably we all know about this; if not, you should start studying unit testing a bit. Anyway, this is an introductory session, so let's go for it. Modern software is not just a monolithic piece of code: it has a logical structure, divided into components, where each component takes responsibility for one specific task. We want to be able to test each component on its own — no interactions, no external connections, nothing. Just the component itself, to check that the logic implemented in it works properly. That is what unit testing is for. And I want to repeat this because it's relevant: no external connections, no relationships with other components of the system.
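To make that concrete, here is a minimal sketch of what a fully isolated unit test can look like — assuming a Jest-style runner; the function and the values are hypothetical, not from the talk:

```js
// A pure component: no I/O, no network, no other components involved.
function applyDiscount(price, percent) {
  if (percent < 0 || percent > 100) {
    throw new RangeError('percent must be between 0 and 100');
  }
  return price - (price * percent) / 100;
}

// Happy path: the expected input produces the expected output.
test('applies a valid discount', () => {
  expect(applyDiscount(100, 25)).toBe(75);
});

// Error case: invalid input is rejected instead of silently accepted.
test('rejects an impossible discount', () => {
  expect(() => applyDiscount(100, 150)).toThrow(RangeError);
});
```

Note that nothing outside the function is touched; that is what keeps it a unit test.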
This isolation is important because an external connection means the test is no longer a unit test. So what about the cases where a component genuinely needs one? Let's imagine we have a component with a function that needs to interact with a third-party service — an external URL we consume to get some content, whatever. To keep the test a true unit test, we can use something called a mock — M-O-C-K. A mock is like a simulation. Imagine our component consumes a URL to confirm that we are logged into an external service, and that URL returns a JSON object. With the mock technique, we simulate the external connection and return a JSON object that we keep in our repository, in a testing utils directory for example. The component thinks the external connection actually happened, but in reality we have simulated it. That way we can test all the cases — good responses, bad responses, et cetera; there is a small sketch of this below.

That's another point I want to stress: we want to test both positive cases and negative cases. This is really important. If we only have a test for the positive case, the happy path, we can confirm that when the input is the one we expect, things work. But what happens when the user does something unexpected? This glass is not intended to be used like that — and we still want to verify that if the user does that, the glass keeps working. So yes, please — I'm going to repeat this several times during the session — it's important that we cover both the happy path and the error cases. This is really good for preventing regression bugs, which I'll talk about a bit later.

One more point related to unit testing: please, please, please do not use unit tests to cover use cases. We are going to cover full use cases with end-to-end testing.

So, now that we can test a specific component of our system, let's go one step further and talk about integration testing. What happens if all of your components work nicely, all the tests are green, everything is fine — but then you combine them? We have cookies and we have cream. The cookies on their own are fine, tasty, good; the cream is also good. But when we integrate them, as you can see in this image, we don't get alternating cookie-cream-cookie-cream layers; we get one big cookie with a lot of layers of cream. That is an integration failure: we were expecting cookie, cream, cookie, and we got one enormous Oreo. I think you get how this works.

Integration testing is about confirming that the relationships between the different components work as expected — and we are talking about a subset of components. We don't need an integration test across all the components of the system, because we have end-to-end testing for that later. We just want to confirm that the relationships between particular components of our system behave as expected. Think here of OOP, and specifically of dependency injection.
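Coming back to the mock idea for a moment, this is roughly how it can look — a sketch assuming a Jest-style runner; the isLoggedIn function and the session endpoint are hypothetical, invented for illustration:

```js
// The component under test depends on an external HTTP service,
// so we let the fetch function be injected.
async function isLoggedIn(fetchFn) {
  const response = await fetchFn('https://example.com/api/session');
  const data = await response.json();
  return data.authenticated === true;
}

// The mock: a fake fetch returning a canned JSON object, so no real
// network connection happens and we control the response completely.
const fakeFetch = (payload) => async () => ({ json: async () => payload });

test('detects a logged-in session (good response)', async () => {
  expect(await isLoggedIn(fakeFetch({ authenticated: true }))).toBe(true);
});

test('detects an anonymous session (bad response)', async () => {
  expect(await isLoggedIn(fakeFetch({ authenticated: false }))).toBe(false);
});
```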
To pick the dependency injection point back up: if we are going to inject an object as a dependency of another one, we want to be sure that the relationship between them works properly, and we use integration testing for that. This can also help us check implemented patterns — if you implement a factory or whatever, those patterns can be verified with integration tests. Just try to avoid the big Oreo cookie.

Now let's talk about end-to-end testing, which is one step further again. Here we test the full flow for a specific use case. Can any of you imagine how end-to-end testing a bomb would work? In this GIF, Bugs Bunny is checking whether the bombs work using end-to-end testing. The problem is that if a bomb works, there is no more testing. That's the problem — but hey, it's genuine end-to-end testing.

Let's talk about this a bit more. End-to-end testing is the best way to test full use cases. The archetypal test case for end-to-end testing is sign-in or sign-up: a user comes to our website and wants to create an account. The user goes to /signup, for example, fills in the username and password fields, checks "please accept our conditions" — don't read them, but accept anyway — and clicks submit. Then the account is created, and we want to check that the user is actually logged in. That's the typical end-to-end test, and it's a real use case. One thing to keep in mind here: we have to try to cover all the cases for all the roles in the application.

In this case we are not using mocks. If we have a third-party integration, we use the real third-party integration. Think of end-to-end testing as putting the test directly on production without fear, because you are going to replicate exactly the behavior a user will have in your application. And, as I mentioned in the last point, we are going to use test data, so we should be able to generate some — think of devel generate in Drupal, for example. We execute the end-to-end tests to cover all the cases, then we remove the testing data, and we can confirm whether everything works as expected or not.

Again — and this is really important — happy path and error cases. We want to determine not only whether the use case works on the happy path; we also want to check the errors. Imagine we are close to the release and we just discovered that some error case is not being handled properly. If we had an end-to-end test covering that error case, we would probably have known about it earlier, and it would probably already be fixed by that point.

Regression bugs are the worst nightmares, to be honest. You want to avoid regression bugs as much as you can. Let me define one: a regression bug is when you have implemented something and it works properly — green tests — and later on, in the next release, that same functionality is broken. That is a regression bug, and to prevent them you want to have proper end-to-end tests.

So that's it for functional testing. Now let's change gears and talk about non-functional testing.
First of all, performance. You want your application not only to do what it is designed to do, but to do it fast. Just think of when you search for something on Google or your favorite search engine: if you click the first result and it takes more than five seconds to load, you start to get nervous, and you think about going back and trying the second result. There are studies that demonstrate this, and if I remember correctly, five seconds is the wall beyond which our application is taking too long to load.

The simple metric here is page load time. According to different people I have talked to over the years, around 800 milliseconds is the time our home page should take to load so the user does not get nervous. Again, let's use real use cases here. We are going to demo Gatling later, a tool that lets you implement use cases, like an end-to-end test, while collecting metrics about page load times, resources, and so on. Performance is the most relevant non-functional aspect of software because it impacts the user experience directly. You can make account creation take only three clicks, but if your application takes ten seconds between each click, the user is going to despair. So let's avoid that and make sure the application performs properly.

Closely related to performance, we want to check scalability. Scalability is the ability of the system to maintain its performance level — its speed, let's say — as more and more users arrive. You can think of scalability testing as load testing: we bombard the application with a lot of concurrent users to check that the performance stays constant — imagine a flat line on a graph — while the number of users increases. That's the ideal.

We can use this kind of testing to help with capacity planning. Let me give an example — this is something that happened to me. Think of a website that sells tickets for live concerts. Let's say Metallica is coming to town, tickets go on sale on a web page tomorrow at 10 — and at 10, the page goes down. I suffered that myself: I only got one ticket, so I had to go to the concert alone. That is a capacity planning failure. If we are expecting, say, one million unique, authenticated users, we need to add more servers, more load balancers, et cetera. But once we have that infrastructure ready, we have to test it: we want to generate a million fake concurrent users to check that everyone who wants a ticket can get one, depending on the number of tickets available, of course.

Another important use case for these tests is the service level agreement, the SLA, especially in cloud environments. You know how cloud providers tell you, "we are going to be available 99.99% of the time" — which works out to less than an hour of downtime per year. And you say, okay, but what if that is not true?
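As a quick sanity check on those numbers — simple arithmetic, not from the talk:

```js
// Allowed downtime per year for a given availability percentage.
const MINUTES_PER_YEAR = 365.25 * 24 * 60; // ≈ 525,960

const allowedDowntimeMinutes = (availability) =>
  MINUTES_PER_YEAR * (1 - availability / 100);

console.log(allowedDowntimeMinutes(99.99).toFixed(1));  // ≈ 52.6 minutes/year
console.log(allowedDowntimeMinutes(99.999).toFixed(1)); // ≈ 5.3 minutes/year
```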
The thing is, the provider is not going to say, "oh, sorry, we have been down for more than the SLA allows this year, here is your money back." No, they are not going to do that. So you have to be able to demonstrate whether it is happening, so you can confirm that the SLA is actually being honored. We can use scalability testing for this: we can stress the application or the infrastructure, and combine that with active monitoring to confirm that the SLA is being respected.

Next, security — we have to talk about security, of course. None of us wants our application hacked and the database pasted on Pastebin or somewhere like that, so we want to avoid this. Security has been getting more important lately, because every time a website is hacked, it suffers a big loss of reputation, and we are often talking about millions of people affected by the hack too. For example, I don't know if you have been aware of the CCleaner hack: attackers basically compromised the repository and injected malware into the tool itself, and everyone who installed or updated CCleaner in the weeks up to September was infected. We're talking about millions and millions of people. We want to prevent this.

Coming back to our projects: we want to apply security testing as an ongoing aspect. The most typical thing I have seen on different projects is: we develop everything, and when we are close to the release, the red team comes in, does some pen testing, gives us a report, and then security is "done." But that is not how it works. We should be running a pen test before each release. If we are deploying to production every month, for example, we want the red team to pen-test the new code every time — say, a week or a couple of weeks beforehand, just after the code freeze. Because if you only audit right before going live, but you keep doing more releases, the new code being deployed is never tested. The ideal is a combined approach: a code audit — a static review by someone expert in both security and the language — plus a pen test by the red team.

And then, backups. How many of you have tested your backups? Good — but everyone should have raised their hand. Think of GitLab: they removed a directory on a live server by mistake, some project metadata got lost, and they tried to restore from up to five different backups. But the backups were not working properly, so they basically lost the data. That is something we want to avoid, so please check your backups. Otherwise you have Schrödinger backups: you don't know whether the cat is alive until you open the backup. So please test your backups properly: just run a simulation. Imagine that one day production is lost; try to restore the backups and check whether you can actually do it. It's simple, and you will be glad you did.

To continue — and we are very close to the live demos — I also want to mention usability and accessibility.
Usually we developers tend not to take this into account, but we should be using the services of UX experts to determine the best way to interact with users. And if we are building for users, we want real users to test our application. Someone told me the best option for this is a beta release with a specific group of users: just let them play with the application and observe how they interact with it. Of course you can ask them too, but as Dr. House said, everybody lies — so the best approach is to collect data from the users and see, from that data, how they actually interact with the application.

Accessibility is a part of usability testing. We want to help people with limitations use our application. There are standards for this, and if you are working with public companies or public authorities, you are probably going to sign a contract that says you have to meet the AA or AAA standard, so that everyone can use the application properly. There are good tools for this, but even a checklist and an HTML checker — confirming that all the image tags have title and alt text, that all the links have titles, et cetera — is a good first step. I recommend having a dedicated team to confirm that this is respected.

And if these testing techniques are not enough for you — most of them can be automated, and we are going to see some examples now — there are some other options. Regression testing, as you can imagine, confirms that nothing has broken since the last deployment. Acceptance testing is the set of tests that should be run before the release. A/B testing is a kind of testing that compares two different versions.

That's basically it for testing techniques, so let me now move on to the automated testing tools. I'm going to demo three of them, and the first is unit testing. Most modern languages have support for unit testing, and everything should be covered. I have not talked yet about the test coverage concept, but basically, if you have a component with, say, five functions, you should have tests for all of those functions. But be smart: don't try to test absolutely everything in unit tests, because you will end up with useless test cases.

Okay, let's go for the demo. Can you see this, or should I change the colors? Let's change the colors — sorry, I should have done this beforehand; let me increase the size. Better? So, this is a library with some different form validators. For example, this function checks whether a value is numeric: if it's not empty and it's not a numeric value, then we show an error; if not, we return false. It's basically the same philosophy for email: if the value is not empty and it's not an email, we show an error message. And this is the unit testing code implemented for these validators.
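A rough reconstruction of what those validators and their tests can look like — assuming Jest-style syntax; the exact names, regex, and error strings are hypothetical:

```js
// validators.js — hypothetical form validators like the ones in the demo.
const ERROR_NUMERIC = 'Value must be numeric.';
const ERROR_EMAIL = 'Value must be a valid email address.';

const isEmpty = (value) =>
  value === '' || value === undefined || value === null || value === false;

// If the value is not empty and not numeric, return the error; else false.
const numeric = (value) =>
  !isEmpty(value) && isNaN(Number(value)) ? ERROR_NUMERIC : false;

// Same philosophy for emails (deliberately simple regex).
const email = (value) =>
  !isEmpty(value) && !/^\S+@\S+\.\S+$/.test(value) ? ERROR_EMAIL : false;

// validators.test.js — map inputs to expected outputs, happy path and errors.
test('isEmpty', () => {
  expect(isEmpty(1)).toBe(false);
  expect(isEmpty('a')).toBe(false);
  expect(isEmpty('')).toBe(true);
  expect(isEmpty(undefined)).toBe(true);
  expect(isEmpty(null)).toBe(true);
  expect(isEmpty(false)).toBe(true);
});

test('numeric', () => {
  expect(numeric('a')).toBe(ERROR_NUMERIC); // not a number: error text
  expect(numeric('1')).toBe(false);         // a number, even quoted: no error
  expect(numeric('')).toBe(false);          // empty is allowed: no error
});

test('email', () => {
  expect(email('nope')).toBe(ERROR_EMAIL);
  expect(email('a@b')).toBe(ERROR_EMAIL);
  expect(email('user@example.com')).toBe(false);
});
```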
So, in the upper part, we have the checks for isEmpty. isEmpty is a function that should return a boolean, and we have some sample values here. We map the input to the expected output: if we receive a 1 or an 'a', we return false, because it's not empty; but if we pass an empty string, undefined, null, or false, we consider the variable empty, and we expect true from the isEmpty function.

The same for numeric: we have three different inputs — an 'a', a '1' in single quotes, and an empty string. For the quoted '1' we expect false, because it is basically a number even in quotes, and we want to allow the empty string, so that's false too; but when we receive an 'a', we expect the error text. The same for email: we map the different inputs; the first two are not real email addresses, so we expect the error text, and for the third and fourth examples we expect false because they actually are emails. And you can imagine you can test basically whatever you want this way.

So, I'm going to execute the tests we have implemented here — and this is the expected output: everything passes, and we have 12 different checks. The philosophy is that we take some input — happy path and error cases — map it to the expected output, and if everything matches, that means everything works, and we get all those green checks, which is what we want. It's really simple, and in this case we have confirmed that our form validator works.

Now let me go to the Nightwatch test. We have a Drupal 7 site — sorry about that. Come on... While this is loading, let me show you the code of the Nightwatch test. A Nightwatch test is really simple: you just export different functions, each function is a test case, and each file containing these cases is a test suite — a test suite is just a set of cases. You can see the code here; it's absolutely simple. Nightwatch can use Selenium as the engine to drive the browser, or it can talk to a WebDriver-based headless browser directly; either way, we interact with it through this code.

What we are going to do here is check that a user can sign up through a form. We open the browser and go to the home page. These are CSS selectors: once we have told the browser to go to this URL, we want it to wait until the HTML is rendered and, in this specific case, the ID block-user-login is present. When the element is present, we wait 200 milliseconds and then click the link under this CSS selector — basically the "create new account" link on the Drupal login form. Then we wait for the user register form, set the value "username" on edit-name, and so on. You can imagine this test behaving like a user: click here, fill in this value, click the submit button — and then we expect the different sections of the page to be present. That's it.

Okay — it's not connecting, so time for plan B: let's see how Nightwatch works in a video. I am going to execute Nightwatch on the right; on the left you can see the browser filling everything in. It's going to be a bit fast.
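For reference, the test described above would look roughly like this — a reconstruction assuming Nightwatch's standard API, with selectors in the spirit of the Drupal demo (the exact values are approximations):

```js
// tests/signUp.js — each exported function is a test case,
// and the file as a whole is a test suite.
module.exports = {
  'user can create an account': function (browser) {
    browser
      .url('http://localhost')                           // open the home page
      .waitForElementPresent('#block-user-login', 5000)  // wait until the login block is rendered
      .pause(200)
      .click('#block-user-login a[href*="register"]')    // the "create new account" link
      .waitForElementPresent('#user-register-form', 5000)
      .setValue('#edit-name', 'username')                // fill in the username field
      .setValue('#edit-mail', 'username@example.com')
      .click('#edit-submit')                             // submit the form
      .assert.elementPresent('.messages')                // the page sections we expect
      .end();                                            // close the browser session
  },
};
```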
In the video, you can see every test case being executed automatically — I am not doing anything. The browser executes faster than a person would, and it checks everything. Let me pause on this last frame of the video: this is the output of Nightwatch on the right. You can see every test case running — the name of the test case, and then the different asserts we are waiting on. Remember, for example, that waitForElement call: we confirm everything works as expected because everything located by those CSS selectors gets displayed. And here is the point of automated testing tools: we have been able to watch the browser, but you can execute this completely headless, with no visible browser, so you can integrate it with Jenkins, with Travis, and with other tools.

So, it's been 45 minutes of the talk. I was planning to present Gatling as well — okay, I can do a quick demo, but I also want to leave time for questions, so I'm going to present Gatling very, very quickly. Again, I should not have changed the background, sorry. Gatling is written in Scala — basically the Java world. You can think of Gatling like end-to-end testing: it is used to do performance testing using end-to-end cases.

You declare a class — in this case the class BasicSimulation, which ships with the tool. You configure the HTTP headers and the different options, and then you declare a scenario. In the scenario you say: execute a call against the home page, then pause for some milliseconds or seconds, then get this other URL. You can imagine simulating the whole navigation with this simple syntax. And here at the bottom — oops — you have an example of using a form. You tell it: we are going to POST to /computers, using the headers we defined at the beginning, and this is how you add form parameters — name is "Beautiful Computer", "introduced" is a date, "company" is 37, et cetera. One more important thing: since we are posting a form, we add the form-urlencoded content type header, in this case for this request only.

If you execute this basic simulation — this is going very quickly — this is the output, and basically it tells us it is executing the different requests... okay, it looks like I cannot connect, so let's use the video. It's the same thing, 41 seconds. Once we select the test case we want to execute, it runs, and the output we expect looks like this. "Global" is the total number of requests we have been executing, and then we have each of the individual requests that got executed. What is really interesting is the last part, the summary, where we get statistical information: the maximum and minimum times it took to contact the server, and so on. And you can run this with a lot of concurrent users, so you can create your own performance testing suite.

Okay. And there are some other tools you can use, like Behat, JMeter, or Burp Suite for security. I have also been looking at a tool for accessibility.
I have not used it myself, but Cynthia Says is supposed to be a proper tool for confirming that you comply with the American accessibility standard. And you will need some orchestrator to run everything on your project — you can choose your favorite CI tool, for example GitLab CI or Jenkins; we were using Travis.

So, very quickly, because I'm going to finish now: how did we do this on our last project? This is our real experience. Every time a developer creates a commit locally on their branch, we force the linter to be executed; if the linter does not pass, you cannot commit. Then, when you have all your commits on your branch and you push the branch to the repository, we execute the unit tests; if the tests are not passing, you cannot push. Once your branch is on the server and you have created your pull request, Travis executes the linter, the unit tests, and the end-to-end tests with Nightwatch. You can see this on the right side — we love green, because that means all tests passed. We have the Nightwatch tests separated into four different batches so we can execute them in parallel. We have branch protection enabled, so for your code to be merged into the repository, you are required to have all tests passing on Travis, plus a review from another developer on the team.

That helped us a lot to prevent regression bugs. We did have some problems: at a couple of points, under heavy day-to-day pressure, we were basically forced to merge code that was not passing the tests, and that gave us some headaches. What we did at those points was say, "tomorrow I know I have to reserve a couple of hours to fix the tests," and then we had everything green again. But this has been really great, because we had been suffering from regression bugs until we started using this workflow, and afterwards we were able to focus on creating high-quality software instead of worrying about "okay, this is broken again, this is not working properly," et cetera.

So, to finish, in summary: please implement these kinds of testing techniques in your software so you can be confident about its quality. Use automated tests. If you have a QA team, they can review everything, but if you can support them with this automated testing — or if the QA team can use these kinds of tools themselves — that is great, because it gives them the power to accelerate their work. This is part of continuous integration, so you need to implement continuous integration for your software. And remember the non-functional aspects of the software, not only the functionality. So, thank you for coming. We have a bit less than ten minutes for questions. That's it — if you have any questions... Thank you. No questions yet? Okay.

"So, you would recommend, for example, Nightwatch instead of Behat?" Yeah, in this case we wanted Nightwatch because it fit our technologies better: we started using it, it's simple to use, and we decided to stick with it. Behat, in fact, is written in PHP, and in this case we wanted to continue with JavaScript. In summary, it has been a really good fit for our project; it allows us to reproduce the exact behavior of the user, and that gives us an advantage.

"Okay. And for the tests, when you push to the remote environment, do you have a database that you already prepared?" Can you repeat the question?
"So, when you are executing the tests, are they running against a copy of the database that you create when you deploy, or against the actual database of the environment?" In our specific case we didn't have a database as such; we had several different environments, so the tests were running against acceptance, I guess. But as I mentioned in the presentation, this is nothing to worry about, because if it is well implemented, you can create test data, test against any environment, and then remove the test data. In our case we had a specific environment for testing, but you can execute this on live if you are brave enough. Thank you.

"Thank you for your presentation. I wanted to ask you: what do you think about Taurus? It's a tool to wrap JMeter tests. Do you use it, or something like that?" So, about JMeter — what exactly do you want to ask? "Taurus is a system to wrap JMeter tests, to keep it simple. I just wanted to ask if you use Taurus somewhere in your system." No, not really. I have used JMeter, but just as is, with no other software around it. "You can check it out — you can wrap both Gatling and JMeter tests in the same YAML-based format." If you can give me the exact name, I'll take a note and have a look. Thank you.

"Hi, nice presentation, thanks. We tried JMeter but we didn't like it; in the end we are using Locust for performance testing. Performance is our biggest concern, and Locust is working fine for us. But when we see issues, it's hard to pinpoint them — where is the performance problem? I think with some caching things, we are resetting the cache a lot and we don't know why. I'm curious about your experience with this." With JMeter, I must say I don't like it, because it's hard to create the test cases: you have to interact directly with the server, in the sense that you have to define the HTTP calls directly. I prefer Nightwatch for this, because with Nightwatch you can define your use cases — real use cases — and execute them.

As for my experience with fixing the problems we find: well, it depends on the case, but if we detect a problem on a specific URL or in a specific process, we reproduce it locally, or in some environment, and once we can pin down the specific URL or process causing the performance failure, we use profiling there. For example, if we're talking about PHP, XHProf — you can use that tool to dig in and find which specific function is taking too long to execute. Another complementary option is static review: if you implemented that part of the application, maybe someone else on the team can take a look, and probably he or she will find some improvements to your solution. But basically that's how I face this kind of issue. I don't know if that is what you expected — thank you.

Does anyone have another question? We have two more minutes. Well, if not, feel free to stop me in the hall and ask me whatever; we can share a coffee, a beer, whatever. This is my Twitter account, @rabbitlayer — feel free to ask anything. Thank you for coming, and thank you for being such a nice audience. That's it, thanks.