Welcome to session one, "End-to-End Test Code as a First-Class Citizen", by Abhijeet Vaikar. We're glad you could join us today. We would also like to thank Applitools for sponsoring this session. And without any further delay, Abhijeet, the stage is all yours.

Thank you. Thank you, Lavanya. Let me share my screen. Hello everyone. I hope all of you are doing well in the current pandemic situation. It has certainly been a tough time for all of us everywhere, but I believe we'll soon come out of it and get back to a normal routine. Hey, did you all enjoy yesterday's SeleniumConf talks? I sure did. What a fantastic day it was. I'm sure today's set of talks is going to be brilliant as well. Today I'm going to talk to you about treating your end-to-end tests as first-class citizens. Thank you all for joining the session, and I hope you find it useful.

A little introduction about myself. I am Abhijeet Vaikar, originally from the city of Pune in the state of Maharashtra, India. I've been testing software and writing test code for eight years now. Selenium and Appium have been my friends all this time, and I can't thank the projects enough for being an important part of my career. I currently work at Carousell as a senior software engineer in test in Singapore. I'm also a co-organiser of a testing community called Test Kaki, a thriving and active community for all testers in Singapore. Recently, I also started working on a small project called How They Test. The project aims to curate public resources from different tech companies on their testing practices and quality culture. The project is available on GitHub; do check it out and feel free to contribute to it.

It's time for a quick poll. I want to know if you are currently involved with active development, maintenance and enhancement of an in-house Selenium or Appium-based end-to-end test framework, and are also involved with automating tests. 81% of the participants are saying that yes, they are involved with developing, maintaining and enhancing an in-house Selenium or Appium-based test framework.

Since the majority of us are building or using frameworks and automating end-to-end tests, let's try to understand what value we want to derive from our end-to-end tests. What do we want to achieve by automating them? Some of the main goals are these. We want our tests to detect regressions. We want quick feedback on the state of our products by running these tests frequently, and with this frequent feedback, automated tests help us achieve faster delivery cycles too, as the feedback loop with automated tests is much shorter than a human conducting those tests. Automated tests also give us the flexibility to perform checks at any time of day, from anywhere, any number of times. We also reduce costs, in terms of the time it takes a human to run an entire regression suite, or the time it takes to fix a bug that was missed because of human error. And with automated tests, we are able to perform reliable testing.

Now imagine a test engineering team that is working on an in-house test automation framework and also contributing to end-to-end automated tests. At some point in the journey, the team starts having conversations like these: "Today's regression run for Android didn't execute at all. Oh, God." "Tests are failing locally. Did we change something in the framework?" "My tests were working until yesterday.
Today they are not." "Slack notifications are not being sent for test runs. What happened?" "Someone accidentally checked in sensitive values in test code. Why?" "The branch name in the regression Jenkins job has been changed. Who did that?" "Signup tests are failing due to a NullPointerException." "What does this new method in the framework do?"

With such issues, test engineers end up spending more time fixing problems in automated tests that could have been avoided in the first place. Does this feel familiar to you? Have you experienced such situations too? Aren't these problems getting in the way of deriving value from our end-to-end tests? So how do we get out of this situation? We do so by ensuring the quality of the test framework as well as the test code. But what makes a test framework a good quality framework? How do we define quality in this context?

We can define quality here in three terms. First, stability, meaning stability of your tests: for example, a test is unstable if you use a poor locator strategy, or if the feature behaviour is non-deterministic. Second, reliability: no matter how many tests are run at a given time, how many times the tests are run, and how long the tests run, the test execution should behave the way we want it to when the run is triggered. And lastly, ease of use: good quality test framework and test code is easy to read, maintain and extend.
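To make the locator point concrete, here is a minimal, hypothetical Java/Selenium sketch; the page and locators are invented for illustration, not taken from any real suite:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Brittle: an absolute XPath tied to the page layout breaks
    // as soon as the DOM structure changes, even if the button itself didn't.
    public WebElement loginButtonBrittle() {
        return driver.findElement(
            By.xpath("/html/body/div[2]/div/form/div[3]/button"));
    }

    // More stable: a dedicated id (or test attribute) survives layout
    // refactors; the test only breaks when the element genuinely changes.
    public WebElement loginButtonStable() {
        return driver.findElement(By.id("login-submit"));
    }
}
```

The brittle version encodes the whole page layout, so any unrelated DOM change fails the test; the stable version only breaks when the element itself changes.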
This diagram shows the effort, or cost, involved in fixing issues found in your test framework or test code, depending on the stage of framework and test development at which they are found. An issue found while you are designing your test framework or test code is less costly to fix than one found when your tests are actually being run for a larger group of stakeholders. When issues are found in those later stages, they can lead to a lack of trust in the automated tests by stakeholders. If you notice, the impact is similar to the cost of bugs found in your production code, that is, the product you are testing. What does this mean? It means that in order to ensure the quality of your framework and test code, you should treat your test code as fairly and as equally as you treat your production code.

Before I share more about the journey of treating test code as production code at Carousell, I would like to give an overview of the end-to-end test automation practice at Carousell, for better context on the practices I'll share. For end-to-end test automation at Carousell, the development environments we use are Eclipse and IntelliJ, and the programming language we use for writing automated tests is Java. We use GitHub for version control of our test framework and test code, Jenkins for continuous integration, and Maven for dependency management, and our automation libraries are Selenium and Appium. We also use a test case management system called TM4J. TM4J is used for maintaining the test suite of all the regression cases, and when our automated test cases are run, test cycles are created automatically from our daily regression and updated with the status of the test cases. As far as devices are concerned, we use devices from BrowserStack, and we also have an in-house device lab called CaroFarm. Our execution environment uses Docker for running the tests, and we make use of Slack for pushing notifications of test runs and test results, so that whenever any team wants the results of the test runs, they can get them on Slack. We also use some form of storage, both database and file storage, for application builds, logs, videos and test accounts.

What kinds of tests do we run end to end? There are three different types. Fast feedback tests are the tests we run for every pull request created by the Android and iOS engineers; these cover the most critical scenarios that need to be tested whenever any changes are made to the UI code. Daily regression runs are the regression test cases which run nightly against the nightly builds produced from the iOS and Android apps. And sanity runs are a subset of the regression cases, usually run on the day of a release.

Let us now look at some principles and practices we have been working on in our journey of end-to-end test automation. They apply at the code level as well as at the process and people level.

First and foremost, all changes to test code and framework code go on a dedicated branch. We do not push any test code or framework code directly to the master branch. Now, this might be an obvious principle to a lot of us, especially for test engineers coming from a development background, but I have often seen it ignored by many teams just because it's test code. That should not be the case, because test code is as important as production code.

The next principle we adopt is automated checks on pull requests. Whenever a test engineer contributes automated tests or framework code, we enable them to run some checks on their pull request. Any change to test code or framework code comes in the form of a pull request on a dedicated branch, as you saw on the previous slide. In this pull request, the checks we perform are: is the code formatted, does the code compile, do unit or integration tests pass, does the code pass the SonarCloud quality gate, and have the reviewers approved your PR. If all of these checks are good, the test engineer is allowed to merge the changes into the master branch. If any of them fail, the pull request is blocked until it is fixed by the test engineer.

Now let's look at these checks one by one. The first one is code formatting. For code formatting, we follow a particular code style, which is the Google Java Style Guide. There is a Maven plugin for this, the fmt Maven plugin, which we have integrated into our build pipeline. Whenever a pull request is created, a build starts executing, and as part of that build the fmt Maven plugin is also executed. What this plugin does is go through your entire set of code changes and verify them against the Google Java style. If any of the code does not comply with the code style, it highlights which files are not complying, and you can fix them with a simple command like mvn fmt:format. This is added as part of our pull request checks.
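The fmt Maven plugin builds on Google's google-java-format library. As a small, hedged sketch of what the check does under the hood, assuming google-java-format is on the classpath, you can invoke the same formatter programmatically:

```java
import com.google.googlejavaformat.java.Formatter;
import com.google.googlejavaformat.java.FormatterException;

public class FormatCheckDemo {
    public static void main(String[] args) throws FormatterException {
        String messy = "class A{int x ;void f( ){x=1;} }";
        // google-java-format rewrites source to comply with the
        // Google Java Style Guide; the build plugin runs the same
        // engine over your changed files.
        String formatted = new Formatter().formatSource(messy);
        System.out.println(formatted);
    }
}
```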
The next check is static code analysis. We are using SonarCloud as the platform for static code analysis. With its help, we are able to perform analysis on our test code as well as our framework code, and within the pull request itself SonarCloud highlights what issues have been found in your code. As you see in the image below, there is one bug, there are eight code smells, and there is 1.2% duplication. This kind of automated check gets performed for every change that happens to the test code and framework.

Now we'll take a detailed look into the SonarCloud checks. This is a snapshot of the SonarCloud platform for a particular test code repository. Two of the most important points we look into, as far as static code analysis is concerned, are reliability and maintainability. Reliability is a criterion in SonarCloud which checks for the possibility of code behaving differently than it was intended to: for example, the possibility of some code throwing a NullPointerException if not handled properly, or using the toString method on an array instance instead of using Arrays.toString. Maintainability gives us the amount of code smell present in your test code. Examples of maintainability issues are using deprecated classes, defining and throwing a dedicated exception instead of a generic one, avoiding catching NullPointerException, and refactoring code so that it does not nest more than three if/for/while/switch/try statements.
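To illustrate the reliability example above, here is a tiny runnable sketch (the class is invented for illustration) of the array toString pitfall that SonarCloud flags:

```java
import java.util.Arrays;

public class ReliabilityExample {
    public static void main(String[] args) {
        String[] browsers = {"chrome", "firefox", "safari"};

        // Flagged as a reliability issue: prints something like
        // "[Ljava.lang.String;@1b6d3586" - the array's identity, not its contents.
        System.out.println(browsers.toString());

        // The intended behaviour: prints "[chrome, firefox, safari]".
        System.out.println(Arrays.toString(browsers));
    }
}
```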
We also track coverage: the code coverage of the framework code, based on the unit and integration tests we write in it. Last year Maaret Pyhäjärvi, a well-known expert in the testing community who constantly shares great thoughts on testing and test automation, tweeted: "Why isn't anyone in conferences talking about unit tests in your test automation, to test some of the essential assumptions that must be true for your automation to still work, alerting you on them breaking? That is one of the talks that I want to hear."

As I mentioned on the previous slide, that code coverage is for the unit and integration tests in our test automation framework. But what do we mean by unit and integration tests in a test automation framework? Like Maaret said, in order to make sure that the basic assumptions behind the proper functioning of the test automation framework hold true, we can write unit or integration tests for certain components of the test code or test framework. What kinds of components can they be? Code related to your driver management, test data handlers, configuration handlers, cloud service handlers, test case management system handlers, reporting handlers, and all the other classes that support the correct functioning of the test framework. We need to be careful about which components we write unit or integration tests for: if you have page objects, test classes, feature files and test specs in your test code, you really do not need to write unit or integration tests for those. You write them for the other, core components of your framework.

Here is an example of the kind of unit tests you can write. Consider a test suite core module of your framework which has certain services, such as a login service, a registration service and a user service. These classes are critical to the correct functioning of your test automation framework, so you write unit or integration tests for them. Going into more detail, let us look at what a unit test looks like for a component of our test automation framework. I have written a test for the login service. What the login service does is make an API call to the login API of your product, fetch the token, and store the token in a hash map, so that whenever you make an API call again you do not need to fetch a new token; it is simply fetched from the hash map. This is the core logic of the login service, so I write a unit test to ensure that this core logic works correctly.
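Here is a minimal sketch of what such a unit test could look like, using JUnit 5 and Mockito. The LoginService shown is an illustrative simplification of the idea described above, not Carousell's actual class; the test pins down the caching contract: the first call hits the login API, repeated calls reuse the cached token.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.junit.jupiter.api.Test;

// Hypothetical thin client around the product's login API.
interface LoginApi {
    String fetchToken(String username, String password);
}

// Simplified version of the framework component under test:
// fetch a token once per user, then serve it from the cache.
class LoginService {
    private final LoginApi api;
    private final Map<String, String> tokenCache = new ConcurrentHashMap<>();

    LoginService(LoginApi api) {
        this.api = api;
    }

    String tokenFor(String username, String password) {
        return tokenCache.computeIfAbsent(username, u -> api.fetchToken(u, password));
    }
}

class LoginServiceTest {
    @Test
    void fetchesTokenOnceAndCachesIt() {
        LoginApi api = mock(LoginApi.class);
        when(api.fetchToken("buyer@example.com", "secret")).thenReturn("token-123");

        LoginService service = new LoginService(api);

        // First call goes to the API; the second must come from the cache.
        assertEquals("token-123", service.tokenFor("buyer@example.com", "secret"));
        assertEquals("token-123", service.tokenFor("buyer@example.com", "secret"));
        verify(api, times(1)).fetchToken("buyer@example.com", "secret");
    }
}
```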
Similarly, there is a test for TM4J. TM4J, as I mentioned before, is the test case management platform we adopted. Whenever we want to push test case results to the test case management platform, we use a utility we have written in our test framework, and to ensure the logic of this utility works correctly, we have written test cases for it.

By the way, did you know that the Selenium project has unit tests and integration tests too? Go on GitHub, check out the Selenium project, and look at the Java client; Java is just the example I'm giving, and I'm sure there are unit and integration tests for the other language bindings as well. What I want to say here is that your test framework, test utility or test tool is also another product that someone is going to use, which means it's good practice to add unit or integration tests for your test framework as well.

Now, coming to code reviews. How many of us here perform code reviews on our test automation code? Sam Connelly tweeted once last year, "Do you do code reviews on your test automation code?", and a lot of us responded that yes, we perform code reviews. One practice we have embedded in our test automation development is that for both the framework and the test suite repositories, we have mandated two reviewers for all pull requests. That means that if I create a pull request for my code changes in the test framework, I will require two reviewers to review my code, and only when both approve can I go ahead and merge it. Code reviews are also quite important for getting feedback from other members of your test engineering team, and this is one of the practices we adopted.

Another tiny detail, which might not seem of great significance but is actually quite important, is the pull request checklist. We have created a pull request template, so whenever any test engineer creates a pull request, they automatically get a template displayed on GitHub showing a checklist of things that need to be taken care of when creating a pull request. Things like understanding what kind of change this is: a refactoring, a new test, a fix to an existing test, or a documentation update. There is also the due diligence part: a checklist for ourselves, where we tick off points such as "passed locally on Android", "passed locally on iOS", "passed locally on web".

Another change we made: earlier, we had the test framework and the test code together in a single repository. At some point in our end-to-end test automation journey, we realised it would be better to split the test framework and the test code into two different projects. The benefit we gained from this was that we could always refer to a stable version of the test framework while anyone made unstable changes to it. As you see here, we created the test framework in such a way that it can be included as a dependency in the test code. Whenever you want to run the tests, the tests use the most stable version of the framework, so any changes made to the test framework by anyone in the test engineering team are not included in a stable release unless and until they have gone through certain checks.

The next practice we adopted was moving from freestyle projects on Jenkins to pipeline as code. Pipeline as code allows you to define your continuous integration jobs in the form of a Jenkinsfile, or scripts. It's basically nothing but code, and code can be version controlled. So all the continuous integration jobs we have created on Jenkins are version controlled in the form of Jenkinsfiles. Whenever any changes are made to the CI jobs, those changes go through version control, so we come to know what changes are being made, and the changes also go through review.

Another aspect of quality here is the test environment for test runs. What does that mean? It means we isolated the real test runs from debug test runs: we had separate CI jobs for testing our own changes to framework or test suite code; we had a separate TM4J test cycle for testing the integration of our framework with TM4J, so that the actual TM4J test cycles do not get impacted; and we had a separate DB instance for testing database integrations with our framework. The whole intent of this effort was to not disturb the production test runs, no matter what.

Now, coming to faster analysis of issues when things break. In order to understand what went wrong whenever test runs fail, we make use of Slack notifications: whenever tests are run on continuous integration, after the tests finish we push the results to Slack channels. As I mentioned in the overview, we use Slack as our communication platform, most of our teams communicate there, and it made good sense for us to push the test results directly to Slack.
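As a hedged sketch of how such a notification can be pushed, using only the JDK's built-in HTTP client: Slack's incoming webhooks accept a simple JSON payload, and the webhook URL and result numbers below are invented placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SlackNotifier {
    // Placeholder incoming-webhook URL; Slack issues one per channel.
    private static final String WEBHOOK_URL =
        "https://hooks.slack.com/services/T000/B000/XXXX";

    public static void main(String[] args) throws Exception {
        String payload =
            "{\"text\": \"Daily regression (Android): 120 passed, 3 failed\"}";
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(WEBHOOK_URL))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(payload))
            .build();
        // Fire the notification and report Slack's HTTP status.
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Slack responded: " + response.statusCode());
    }
}
```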
This next snapshot is of TM4J, the test case management platform. Whenever we execute tests from continuous integration, we also upload the Java logs from the test execution, along with the video recordings from the devices.

Now, coming to coding principles. Apart from all the changes we made around the code, we also follow certain coding principles. This slide will feel like deja vu if you attended Gaurav's talk yesterday on principles and patterns for test automation frameworks. Gaurav did a fantastic job explaining these principles and patterns, so I won't go into a lot of detail here; we'll go through them quickly.

One of the things we need to do is keep tests short and atomic. We also need to take care of separation of concerns, or responsibilities. Since we use Cucumber JVM for defining our test specs, we split the test automation code into feature files and step classes; step classes call page objects and are responsible for performing assertions, and page objects are the layer that interacts with the UI.
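Here is a minimal sketch of that layering. The scenario, locators and class names are invented, and it assumes Cucumber JVM with a dependency injection container (such as cucumber-picocontainer or cucumber-spring) wiring the step class constructor:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Shared per-scenario state; the DI container creates one instance and
// injects it into step classes. (Simplified: a real framework would use
// a driver factory plus clean-up hooks instead of a bare ChromeDriver.)
class ScenarioContext {
    final WebDriver driver = new ChromeDriver();
}

// Page object layer: the only place that knows locators and UI mechanics.
class SearchPage {
    private final WebDriver driver;

    SearchPage(WebDriver driver) {
        this.driver = driver;
    }

    void searchFor(String term) {
        driver.findElement(By.id("search-input")).sendKeys(term);
        driver.findElement(By.id("search-submit")).click();
    }

    boolean hasResults() {
        return !driver.findElements(By.cssSelector(".result-item")).isEmpty();
    }
}

// Step class layer: glues Gherkin steps (e.g. When I search for "bicycle")
// to page objects and owns the assertions.
public class SearchSteps {
    private final ScenarioContext context;

    public SearchSteps(ScenarioContext context) { // constructor injection, no statics
        this.context = context;
    }

    @When("I search for {string}")
    public void iSearchFor(String term) {
        new SearchPage(context.driver).searchFor(term);
    }

    @Then("I should see search results")
    public void iShouldSeeResults() {
        assertTrue(new SearchPage(context.driver).hasResults());
    }
}
```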
We also avoid repetition of code, using the DRY principle and abstraction wherever necessary. Javadoc is also quite important for public methods in framework code: anyone referring to the code in your framework should find Javadoc on its public methods. Efficient locators: we all talk about locators quite frequently, and the practice is well known across the industry, but let's say there is a need to use an XPath for some element-finding use case, and that XPath could be replaced by a better alternative later. What we do is create a backlog ticket for the Android and iOS engineers, so that this is treated as tech debt in the code, and once the ticket has been created in the backlog, the engineers are responsible for fixing the UI so that we have better selectors and locators. We also avoid shared state between tests, including the use of static methods: in our case, we tried as much as possible to eliminate the use of static in our code, and we employed the Spring dependency injection framework in our test automation code, which helped us manage the instances of various classes through a dependency injection container. Other coding principles include knowing when to catch exceptions and when to let them fail tests, and being smart about waiting mechanisms; there is a small sketch of that at the end of this section. Also, use a logging framework instead of System.out: it's not really a good choice to log statements in your test code using System.out.println, and instead you can use a logging framework. In our framework, for example, we use Log4j and SLF4J. Do not check in sensitive data to the code repository: one of the conversations I mentioned earlier in the test engineering team was about accidentally checking in sensitive data or tokens in test code, which can cause security issues. You can make use of a tool called git-secrets, which helps you avoid checking sensitive data into your test code even before you push your commit to the repository. Removing dead code is also a good practice to maintain. And lastly, the Boy Scout rule: if you are working on something and you find an opportunity to fix it, then go and fix it immediately. Leave your code better than you found it.
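And here is the waiting-mechanism sketch referenced above, assuming Selenium 4, where WebDriverWait takes a Duration:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class Waits {
    // Smart waiting: poll for an explicit condition with a bounded timeout,
    // instead of a fixed Thread.sleep that is either too short (flaky)
    // or too long (slow).
    public static WebElement waitForVisible(WebDriver driver, By locator) {
        return new WebDriverWait(driver, Duration.ofSeconds(10))
            .until(ExpectedConditions.visibilityOfElementLocated(locator));
    }
}
```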
Apart from the code changes and the coding and design patterns and principles, team collaboration, communication and planning are also very important. The things we do to ensure this: we plan strategic changes to the framework well, using RFCs. RFCs, or requests for comments, are documents which ensure that any important changes you are going to make as part of your test automation strategy are well planned and well documented, that many people and stakeholders have taken a look at them, that their questions have been addressed, and that all possible conditions have been thought through before actually implementing the solution. We think about possible failures when designing solutions for the test framework and its integrations. We prioritise high-impact tests for end-to-end UI coverage. We conduct sprint review sessions where any significant work is demonstrated and made open to questions and discussions; for example, framework-level changes or any important tooling for test automation is demonstrated as part of these sessions, and we encourage questions and discussion there. Pair programming and pairing sessions are also important for us: let's say I want to implement some complex solution in the test automation framework; I will pair program with another test engineer in my team. Encouraging an open mindset is also important when it comes to asking questions and giving feedback on code. Whenever you are thinking of implementing something as part of your framework strategy or test code, it's important to have everyone in the team on the same page about the reason behind it, so allow your team members to ask all the possible questions: why are we doing this, what is the benefit of adopting this solution, could there be a better or simpler solution to this problem than what we are proposing? The last one is creating tickets, at any point in time, to track tech debt. As I mentioned before, tech debt is always going to exist at some point, but addressing it is important, rather than ignoring it. When there are situations where you have to implement some solution in your automation framework or test code using a particular logic which is not the most efficient way of doing it, we ensure that this is tracked as a future improvement or enhancement in the form of backlog tickets.

So the key takeaways and learnings from this are: write unit or integration tests for the core components and integrations that make or break your framework and test code; create build pipelines for your test framework and test code using CI; automate checks at the PR level to avoid issues creeping into the daily test runs that matter; conduct code reviews religiously; adopt good coding principles and design patterns to ensure that making changes in the code is easy; and lastly, establish a practice of open and frequent communication in the team, driven by retrospection and continuous improvement. That brings us to the end of my topic. I hope this was a good learning experience for all of you, and I would like to thank SeleniumConf for giving me this opportunity to talk about this topic and to share the experience we gained from implementing these various solutions and principles in our end-to-end strategy.

Thank you, this was a great session. There are quite a few questions, and we do have time to take a couple of them. One question is about how automation is conducted in your project delivery model: is it in-sprint automation, and do you tackle P0 or P1 cases first, with the remaining work running as a constant stream? — Currently, the way we tackle it is that we have a backlog of tests that we want to automate. As the question mentions, these are the P0 and P1 tests that we are on a journey to automate. As for in-sprint: we are not currently practising automating features in-sprint. What we do is work along with the delivery teams, understand which features need to be automated in terms of test cases, create a backlog out of those items, and plan to automate them accordingly.

Great. One more question, about framework code and test code: is there any way to verify that framework changes don't break the tests which depend on them? What does the process really look like? Did you hear the question? — Sorry, I'm just trying to understand the question; can you repeat it once again? — Yeah: splitting framework and test code into separate repositories is great, but how do you verify that changes in the framework do not break the tests which depend on it? What does the overall process look like? — OK. Whenever any changes are made to the framework code, there is continuous integration for the framework code as well. As I mentioned before, after we split the framework code and the test code into two separate projects, we added certain checks and unit tests for the framework repository, so for changes made to the framework, the continuous integration or pull request build pipeline also ensures that the functionality of the core components of the test framework does not break. — Yeah, I think that answers the question, thank you. There's one more, very highly voted, question about duplication. Amritraj here says that test code is expected to have some duplication, but how do you manage that, and does SonarCloud help in this case? — Yeah, as I mentioned, two of the main criteria we focus our efforts on when it comes
to static analysis using SonarCloud are maintainability and reliability. As the question says, it's correct that test code can have some duplication at some point, which is why that is a criterion we do not strictly enforce; but we still strive to avoid duplication wherever possible.

That's nice. Also, any tips on getting developers to contribute to writing functional tests? — This is a good question. We have been working with our client engineers, basically the Android and iOS engineers, to try to establish a practice where functional tests can also be contributed by the devs themselves. Right now we are in a phase of transition, where we are getting the developers acquainted with what automated testing and functional testing are in the first place, because what we have seen is that they often do not have significant experience and knowledge of tools like Selenium or Appium. So the first step we are trying to take is to train them on functional test automation using Appium and Selenium, and after that we will try to engage them and get their involvement in writing functional tests.

OK, great. Is any part of your framework code open source? — No. It's not open source right now, but we might consider it at some point.

OK, there's one more very highly rated question, about how many mobile builds are integrated with your test framework. I believe it's more to do with how often the mobile builds, in the sense of the application builds, are run. — Yes, I believe it is more about how many times these tests are run. As I mentioned in the Carousell overview slides, there are three different types of test runs: fast feedback tests, regression, and sanity. The fast feedback tests run as many times as pull requests are created by the iOS and Android engineers. Whenever an Android engineer, for example, creates a pull request for a new feature change in the Android code, we create a build out of that feature branch, produce the APK file, and then run certain smoke tests against that APK file using Appium. The same applies to iOS. So this happens as frequently as pull requests are created by the client engineers. The regression tests happen nightly: every night a build is created from the master branch of the iOS and Android code, and against this build we run the set of regression tests. I hope that answers your question.

I think most of the questions are covered now. There's just one more thing, which is about whether there is a conflict with what Narayan mentioned earlier, because he said that it might not be possible to treat your test code exactly as your production code. Somebody wants to know what your views are on this topic. — Sorry, can you repeat that? — In the previous talk, Narayan mentioned that it might not always be possible to treat test code as production code. What are your views on that? — As I mentioned, it might not be possible to treat test code as fully equal to production code, but let's at least try to bring it to the level of production code. When anyone says treat your test code as production code, it just means applying the principles you apply to production code: for example,
creating and maintaining your test code on separate feature branches, not merging changes directly into your master branch, and having checks on your pull requests so that very obvious issues are found right when the pull request is created. One of the observations I also heard, when I conducted a unit testing session a while back for the Test Kaki community, was that it is not always possible to add unit or integration tests for a test framework because there is not enough time for test engineers. Agreed, but then you can always plan for it at some point, like the tech debt I mentioned: tech debt is something you always plan for, and as a test engineering team, whether you are working as a dedicated tester embedded in a team or as a separate team, you can always plan to fix that tech debt in the future. I did not get a chance to watch Narayan's talk, so I do not have the deeper context, but I just feel that we can try as much as possible to treat our test code as production code. — I think that is fair, and I think that is what he meant as well. Abhijeet, wonderful session. Thank you, really, thank you very much.