Today I want to talk about reports. I saw a lot of interest in this topic, and it's not surprising, because good reporting is one of the key success factors of your test automation. A good report will save your time, and a bad one can ruin all your work. My talk today is divided into three parts. First, we'll talk about reporting challenges and problems. Then I will describe our solution, called Allure; it's an open-source framework. And I will finish with a short demo of how to use our framework with your JUnit tests. So, let's go. Raise your hand if you have ever created a test execution report. A lot of people here. Now raise your hand if you think that your report is really nice and clear and everyone can understand what's going on. So, a few people. Okay, and maybe you think it's a crappy one. Yeah, a bit of criticism here. Okay. So, what is the problem? Why is it so hard to create a good report? I think the key feature of a good report is that it gives you speed and precision in problem detection. Let me explain what I mean. When you write a unit test, run it, and it fails, you get the exact line of code where the problem is. When you run an API test and it fails, you have not only the code you interact with, but the container running your code, the server running the container, maybe the load balancer. And every part can be broken. So you have to aggregate some additional information, such as requests, responses, dumps, headers, and so on, to find where the problem is. And things get even worse with web tests, because you have your code, the container running your code, the server running the container, the balancer, the browser, HTML, CSS, JavaScript, and even Selenium itself can be broken. And you really need to aggregate a lot of information and then display it to find out where the problem is. But usually you just attach a screenshot. It shows you what the problem is, but it doesn't explain why.
And here comes the second major feature of a good report: you need to collect and then present really a lot of information about your test execution. And the third challenge: you have to present it in different ways for different people. Are there any managers here? Test leads, managers? Put your hand up, okay. What do we want to see as a manager in a report? Yeah, you want to see the overall picture, right? The big green or red button: everything is fine, or everything is broken. Are there any manual testers here? Okay, what do you want to see as a manual tester? You want to see the reason; you want to see how it's broken. You want to see the test scenario itself, to verify whether it does what you asked for, and why it's broken. So actually you have to present the same data collected during test execution in different ways for different people. In fact, you need different reports for these people. And the last problem is that you have no time to do this, right? Because you think, okay, I've written a lot of tests already, I will handle the reporting later, but usually that never happens. With all these things in mind, we decided to create an open-source solution called Allure. Allure is a French word which describes the way a horse moves. We called it that because we use steps as one of the basics of our framework; I will explain it a little later. The second major feature is that it's cross-language. We use three languages in our test automation: we use Java a lot, we use a little Python, and we use a little JavaScript. So from the very beginning, we decided that our framework should be cross-language, language independent. And the third major feature of this framework is easy integration. We have a lot of tests, and the more tests you have, the more time you spend trying to adapt these tests to new technologies and new frameworks. But in the end, it's only reporting, and you shouldn't spend a lot of your time just to get a nice report.
So, easy integration. Here's a link to a demo report. You can open it on your laptop right now, or on an iPad or a mobile phone; it works on mobile devices as well. You can play around with it, and maybe provide some feedback or file issues on GitHub. And to understand how we made all this, let's talk about the framework architecture. Basically, it consists of three parts. The first one is the adapter, the second and central one is the model, and the third one is the report itself. Let's start with the model, because it's the central part of the framework. By model, I mean the data representation model. We didn't want to start completely from scratch, because a lot of test data models already exist, and the basic one is the xUnit model. Who knows the basic concepts of xUnit? Any suggestions? Okay. The first one is the test suite, which usually corresponds to a test class. The second, smaller one is the test case, which corresponds to a test method. And here is how it looks in XML. When you run your tests, you get this standard xUnit output. Let's see what we have here. We have a time, which is the test execution duration. We have some names: the test suite name, which is the class name, and the test case name, which is the method name. And we have some overall calculated information about this run: how many tests ran, failed, were broken, and so on. As the first step, we want to transform this format a little, to get rid of useless information and add some required data. What is the useless information? It's here: these are already-calculated values, and you don't need them in your model. The model should contain only raw data, which we calculate and process later. So we remove all these things and give each test case its own status: passed, failed, or broken. The same for the test suite. And we add timing for every test case: start and stop time. Now it looks like this: you can see the test case, the start and stop timestamps, and the test execution status.
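To make the transformation concrete, here is a rough sketch of "before" and "after". The element and attribute names are illustrative, not the exact Allure schema:

```xml
<!-- Standard xUnit output: aggregated, pre-calculated values only -->
<testsuite name="SearchTest" tests="3" failures="1" errors="0" time="12.4">
  <testcase classname="SearchTest" name="testSimpleSearch" time="4.2"/>
</testsuite>

<!-- After the first transformation: raw data only, each test case
     carries its own status and start/stop timestamps -->
<test-suite start="1411558667548" stop="1411558680012">
  <name>SearchTest</name>
  <test-cases>
    <test-case start="1411558667548" stop="1411558671790" status="passed">
      <name>testSimpleSearch</name>
      <steps/>
      <attachments/>
      <labels/>
    </test-case>
  </test-cases>
</test-suite>
```

Note that counts and totals are gone: they can always be recomputed from the raw statuses and timestamps at report-generation time.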
You can also see the empty tags: steps, attachments, and labels. We will use them to enrich our data model. And the first thing to do is add some attachments. In our framework, we don't limit you to specific attachment types. You can use HTML, JSON, plain text, XML, images, or even video, whatever you want. And as soon as you can attach HTML, you can actually create a JavaScript application right inside your report. For example, you can write JavaScript with a button to restart a failed test, or you can add a comment section, whatever you want, really. The second step is the steps themselves. Who knows what steps are? Who knows the steps approach? Nobody? Any Cucumber users here? Yeah, it's something like that. A step is a simple user action, for example, clicking a button or typing text, and so on. And when you write your tests in terms of steps, you get a simple and clear test scenario which you can read, understand, and write really easily. And of course, you want nested steps, because when steps are composed, for example, a login step consists of three steps: type login, type password, and click the button, you want to hide them when everything goes well. We gave each step its own status, so we can detect exactly the step where a problem occurs, and each step can have its own attachments. And the last part of the data model is labels. Who knows what this is? A story board, yes, right. It's an agile board with features and stories on it. And of course, we added features and stories to the framework, so you can annotate your tests and then track how your tests cover your business requirements. Great stuff. So, all in all: we start with the standard xUnit format, which is provided by all the default unit test frameworks, add some raw data during execution, then you can add attachments, introduce steps, and add labels, and now you've got the whole model. Okay, we've got the model, but how do we get the data for this model?
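Put together, an enriched test case in the model might look roughly like this. Again, this is a sketch; the real schema names may differ:

```xml
<test-case start="1411558667548" stop="1411558671790" status="failed">
  <name>testSimpleSearch</name>
  <steps>
    <step start="1411558667550" stop="1411558668100" status="passed">
      <name>Open search page</name>
    </step>
    <!-- each step has its own status, timing, and attachments,
         so the failing step is visible at a glance -->
    <step start="1411558668101" stop="1411558671700" status="failed">
      <name>Type text and click search</name>
      <attachments>
        <attachment title="Screenshot" source="scr.png" type="image/png"/>
      </attachments>
    </step>
  </steps>
  <labels>
    <label name="feature" value="Simple text search"/>
    <label name="story" value="Search"/>
  </labels>
</test-case>
```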
We get it with an adapter. Every adapter consists of two parts. The first one is a language API for each specific language: a bunch of methods to fire the different test events, such as test start, test stop, attachment made, and so on. Using this API, it's rather simple to create specific framework adapters. As I mentioned before, we use Java, Python, and JavaScript in our automation, so we started with just three adapters: JUnit, Karma, and PyUnit. And pretty quickly, we got a lot of these adapters implemented. You can see we have .NET integration with an NUnit adapter; we have a PHPUnit adapter, Scala, Codeception (I don't even know what that is), RSpec, and even Cucumber. So it's really easy to create an adapter. Okay, now we have an adapter and we have our model filled with data. Let's get to the report itself. As I mentioned before, our data model contains only raw data, so we have to process it first. The first step is processing this data to get some statistics, calculated values, and so on. At the same time, we transform this XML into lightweight JSON, which will be used later. In the second step, we use AngularJS to create the beautiful Web 2.0 HTML report you can see at the demo link I gave you. And this step can be done in a bunch of ways: we have a standalone command-line tool, we have a Maven plugin integration, and we have plugins for CI tools such as Jenkins, TeamCity, and Bamboo. So you can use your own tool. So, the Allure architecture: we start with xUnit data; then, with the help of an adapter, we provide some additional data to get the Allure model; then a data generator processes it, transforming the XML into JSON; and in the end, we create the report. And the beautiful thing about this architecture is that you can replace every part of this chain, except maybe the model, because everything is based on the model.
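The adapter idea, a small lifecycle API that fires test events which framework-specific adapters translate into the report model, can be sketched in plain Java like this. All names here are illustrative, not the real Allure API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the adapter concept: the language API fires
// lifecycle events (test start, test stop, attachment made), and a
// framework adapter maps framework callbacks onto these events.
// The model holds raw data only; statistics are calculated later.
class TestCaseResult {
    String name;
    String status = "unknown";   // passed / failed / broken
    long start;
    long stop;
    List<String> attachments = new ArrayList<>();
}

class Lifecycle {
    private TestCaseResult current;

    // fired by the adapter when the framework reports a test start
    void fireTestStart(String name) {
        current = new TestCaseResult();
        current.name = name;
        current.start = System.currentTimeMillis();
    }

    // fired whenever the running test produces an attachment
    void fireAttachment(String source) {
        current.attachments.add(source);
    }

    // fired on test finish; the result is ready to be serialized
    TestCaseResult fireTestStop(String status) {
        current.status = status;
        current.stop = System.currentTimeMillis();
        return current;
    }
}
```

A JUnit adapter, for example, only has to call `fireTestStart` and `fireTestStop` from its run listener; that is why new adapters are cheap to write.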
If your framework doesn't provide the standard xUnit output, you can generate it yourself. If you don't like the report face we created, you can create your own report face based on the data we provide. And now I can show you a demo, some Allure in action. Here's a simple Java project built with Maven. As you can see, there are only JUnit and Selenium in the dependencies, the Maven Surefire plugin to run the tests, and the Surefire report plugin to generate the default Surefire report. The test itself is pretty simple: we get the driver first, then we open the search page, find the element, type the search text we want to find, then we get the results by the locator provided, and assert that the result count is not zero and that we get the expected text. Let's make it work with Allure. As you can see, the code didn't change, but there are a few changes in the POM XML. We added the Allure JUnit adapter to the dependencies, and there is a configuration section for the Maven Surefire plugin; you can find everything on the project website, so don't worry. And we replaced the Surefire report plugin with the allure-maven plugin. Actually, you could stop here, because that's everything you need to get started. I told you before about easy integration; this is easy integration. You just change a few lines of your configuration, not even the code, just configuration, and you get a working report. Of course, you will not get all the features, but you will already get much more than the basic Surefire report. Okay, let's move on. To get the most out of Allure, let's modify our code and divide our test into steps. As I told you, a step is a simple user action, so there will be three steps here: first, we open the page; then we type text and click the search button; and in the end, we assert that we see the expected results in the search. The steps themselves are a very interesting, very useful, and powerful way to organize your tests. As you can see, I defined the steps right here, but it really doesn't matter where.
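The POM changes described above look roughly like this. The group and artifact IDs are from my memory of Allure 1.x; check the project website for the current coordinates, versions, and the Surefire configuration section:

```xml
<dependencies>
  <!-- the Allure JUnit adapter collects data during test execution -->
  <dependency>
    <groupId>ru.yandex.qatools.allure</groupId>
    <artifactId>allure-junit-adapter</artifactId>
    <version>${allure.version}</version>
    <scope>test</scope>
  </dependency>
</dependencies>

<reporting>
  <plugins>
    <!-- replaces the default Surefire report plugin -->
    <plugin>
      <groupId>ru.yandex.qatools.allure</groupId>
      <artifactId>allure-maven-plugin</artifactId>
      <version>${allure.version}</version>
    </plugin>
  </plugins>
</reporting>
```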
I hope you don't write code like this and you're using the Page Object pattern, so you can annotate your page objects directly: annotate the page object methods with steps, and everything will work like a charm. Okay, some step generalization here to make them reusable, nothing unusual. The next step is to add some features and stories to link with our business requirements. The story will be "search", and the feature "simple text search". You can define several features or stories for every test class or method. And in the end, we want to add an attachment. You usually attach a screenshot; here we can return the byte array, so it doesn't matter what type of attachment it is, and taking it is a standard WebDriver call. Now we should call our screenshot method after the test execution. This is not the best way to do it; actually, in JUnit we have the rule mechanism for such things. Let's run it. I call the Maven commands: clean, to clean up all artifacts, and test, to actually run the tests. You can see the test execution here: we get Firefox, type in the text, and get the result. Then I run site to build the report itself with the Maven report builder, and run the Jetty server locally to quickly open the report on localhost. And that's it. We have an Overview tab, which is the entry point for everyone. Then we have a Defects tab. We have no defects here. Great job, I can grab some beer. But when you do see failures here, they are grouped by error message, so it's pretty simple to find out what's going on if a lot of errors happen. Then we have an xUnit section, with all the steps we defined and the attachments we created. Pretty simple and clear. Then the Behaviors tab, with the features and stories we defined in our test. Some charts for managers, they like charts, and a timeline of the test execution. And that's all. Thank you.
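A sketch of the annotated test, using the Allure 1.x annotation names as I recall them (`@Step`, `@Features`, `@Stories`, `@Attachment` from `ru.yandex.qatools.allure.annotations`); treat the URL, the locators, and the exact annotation signatures as illustrative and check the documentation. It needs the Allure adapter and Selenium on the classpath:

```java
import org.junit.After;
import org.junit.Assert;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import ru.yandex.qatools.allure.annotations.Attachment;
import ru.yandex.qatools.allure.annotations.Features;
import ru.yandex.qatools.allure.annotations.Step;
import ru.yandex.qatools.allure.annotations.Stories;

public class SearchTest {

    private final WebDriver driver = new FirefoxDriver();

    @Test
    @Features("Simple text search")   // links the test to the agile board
    @Stories("Search")
    public void simpleSearch() {
        openSearchPage();
        search("allure framework");
        checkResultsArePresent();
    }

    @Step("Open search page")
    private void openSearchPage() {
        driver.get("http://example.com/search"); // illustrative URL
    }

    @Step("Search for '{0}'")          // the argument shows up in the report
    private void search(String text) {
        driver.findElement(By.name("q")).sendKeys(text);
        driver.findElement(By.name("btn")).click(); // illustrative locators
    }

    @Step("Check that results are present")
    private void checkResultsArePresent() {
        Assert.assertFalse(driver.findElements(By.cssSelector(".result")).isEmpty());
    }

    // the returned byte array is attached to the report as an image
    @Attachment(value = "Screenshot", type = "image/png")
    private byte[] takeScreenshot() {
        return ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
    }

    @After
    public void tearDown() {
        takeScreenshot(); // better done via a JUnit rule, on failure only
        driver.quit();
    }
}
```

Then `mvn clean test site` runs the tests and builds the report, and serving the site directory (for example with the Jetty Maven plugin, as in the demo) opens it on localhost.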
Here is our project website, where you can find the code itself, documentation, examples, everything you want, and my contacts: Twitter and email. Thank you. [Q&A] It depends on what you want: you can make an attachment for the whole test suite, for a whole test case, or even for one single step. Not every step, but maybe when you make assertions, to see the error that happened. Yes, you will see it folded, but you can unfold it in the report. We should, sorry. From which framework? TestNG, the default TestNG framework. It doesn't give you the attachments or the steps; you will see only the... Can you repeat, please? Ah, okay. The question is whether you can see the history of test executions in this report. The answer is no, because that's not a reporting feature; storage is a different story. And answering the question about the difference from TestNG: this framework is language independent, and TestNG is Java only.